
Boosting F# Libraries with Automated Agentic AI

Microsoft / GitHub

In this talk, Don explores how GitHub Agentic Workflows - a framework developed at GitHub Next - can be used to augment F# library development through automated performance and test improvements. The approach introduces “Continuous Test Improvement” and “Continuous Performance Improvement”, in which AI agents automatically research, measure, optimise, and re-measure code in a continuous loop, all while maintaining human oversight through pull request reviews and goal-setting.
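
To give a concrete sense of what “Continuous Test Improvement” might produce, here is a minimal sketch of the kind of property-based test an agent could propose in a pull request, written with FsCheck. The chunking round-trip property and the function it exercises are illustrative assumptions, not examples taken from the talk.

```fsharp
open FsCheck

// Hypothetical library function under test: an agent might target any
// public API that currently lacks test coverage.
let chunkThenConcat size xs =
    xs |> List.chunkBySize size |> List.concat

// Round-trip property: chunking a list and concatenating the chunks
// should recover the original list, for any positive chunk size.
let chunkRoundTrip (xs: int list) (size: PositiveInt) =
    chunkThenConcat size.Get xs = xs

// Run with FsCheck's default generator settings; in an agentic workflow
// this would join the test suite that CI runs on every pull request.
Check.Quick chunkRoundTrip
```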

This semi-automatic engineering approach represents a fundamental shift in software development: from manual coding to AI-assisted completions, to task-oriented programming, and now to event-triggered agentic workflows.

Don will demonstrate practical applications in F# libraries, showing how these workflows can identify performance bottlenecks, generate benchmarks, implement optimisations, and verify improvements - all while preserving code correctness through automated testing.
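
As an illustration of the “generate benchmarks” step, the sketch below shows the shape of a BenchmarkDotNet harness an agent might add so that a candidate optimisation can be measured against the current baseline. The specific workload (summing an F# list versus an array) is a hypothetical example, not one from the talk.

```fsharp
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

// Hypothetical micro-benchmark comparing the existing implementation
// with a candidate optimisation; an agent would re-run this after each
// change and report the measured difference in the pull request.
[<MemoryDiagnoser>]
type SumBenchmarks() =
    let listData = List.init 100_000 id
    let arrayData = Array.init 100_000 id

    // Baseline: current implementation over an F# list
    [<Benchmark(Baseline = true)>]
    member _.ListSum() = List.sum listData

    // Candidate: the same computation over a contiguous array
    [<Benchmark>]
    member _.ArraySum() = Array.sum arrayData

[<EntryPoint>]
let main _ =
    BenchmarkRunner.Run<SumBenchmarks>() |> ignore
    0
```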

Learn how this emerging technology could transform how we maintain and optimise F# libraries, making high-performance code more accessible to the entire F# community.

Links