.NET AI Community Standup: Evaluate the Quality of Your AI Applications
Join Bruno, Peter, and Shyam for an exciting online event where we'll dive into Microsoft.Extensions.AI.Evaluation, a powerful open-source .NET library designed to assess the quality and efficacy of LLM responses in AI applications.

🎯 What You'll Learn:
✅ How to measure AI response quality using built-in evaluators (coherence, relevance, equivalence, groundedness, etc.).
✅ How to build your own custom evaluators and metrics for your specific AI use case.
✅ How to leverage LLM response caching, evaluation result storage, and report generation to streamline AI evaluations.
✅ Live demos and real-world examples showing how to integrate these tools into your AI-powered .NET apps.

🔎 Chapters:
00:00 Countdown
01:45 Welcome and Intros
03:30 What is Microsoft.Extensions.AI.Evaluation
13:33 How this works
17:46 Libraries
19:44 Resources and Contacts
20:14 Demo
53:58 Closing comments
59:41 Wrap

🔗 Links:
Microsoft.Extensions.AI.Evaluation: https://devblogs.microsoft.com/dotnet...
Collection: https://learn.microsoft.com/en-us/col...

🔥 Whether you're an AI developer, .NET enthusiast, or just exploring AI quality metrics, this session is for you! Don't miss out!

🎙️ Featuring:
Bruno Capuano (@elbruno)
Peter Waldschmidt
Shyam Namboodiripad

#AI #DotNET #LLM #AIEvals

dotnet · 325K subscribers