RAGChat: Evaluating RAG answer quality
In this series, we dive deep into our most popular, fully featured, open-source RAG solution: https://aka.ms/ragchat

How can you be sure that the RAG chat app's answers are accurate, clear, and well formatted? Evaluation! In this session, we'll show you how to generate synthetic data and run bulk evaluations on your RAG app using the azure-ai-evaluation SDK. Learn about GPT metrics like groundedness and fluency, and custom metrics like citation matching. Plus, discover how you can run evaluations in CI/CD to easily verify that new changes don't introduce quality regressions.

This session is part of a series! To learn more, click here: https://aka.ms/RAGDeepDive

📌 Get more RAG resources: https://aka.ms/thesource/github/yt

#MicrosoftReactor #learnconnectbuild #RAGDeepDive [eventID:24578]
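To illustrate the "custom metrics" idea mentioned above, here is a minimal sketch of a citation-matching metric. This is not the session's actual implementation: the `citation_match` function, the `[filename]`-style citation format, and the returned keys are all assumptions made for illustration; a real RAG app may cite sources differently.

```python
import re

def citation_match(answer: str, expected_sources: list[str]) -> dict:
    """Hypothetical custom metric: score how many of the expected source
    documents are cited in the answer using [filename]-style citations.
    (Illustrative only; not the SDK's or the session's implementation.)"""
    cited = set(re.findall(r"\[([^\]]+)\]", answer))  # citations found in answer
    expected = set(expected_sources)
    matched = cited & expected
    return {
        # fraction of expected sources actually cited (1.0 if none expected)
        "citation_match": len(matched) / len(expected) if expected else 1.0,
        # citations in the answer that were not among the expected sources
        "extra_citations": sorted(cited - expected),
    }

# Example: one of two expected sources is cited
result = citation_match(
    "Contoso offers 30 days of PTO [benefits.pdf].",
    ["benefits.pdf", "handbook.pdf"],
)
```

A metric like this can be computed per question during a bulk evaluation run and averaged across the dataset, alongside model-graded metrics such as groundedness and fluency.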


Microsoft Reactor
