Evaluating your RAG Chat App
RAG (Retrieval Augmented Generation) is the most popular approach for getting LLMs to answer user questions grounded in a domain. How can you be sure that the answers are accurate, clear, and well formatted? *Evaluation!* In this session, we'll show you how to use Azure AI Studio and the Promptflow SDK to generate synthetic data and run bulk evaluations on your RAG app. You'll learn about GPT metrics like groundedness and fluency, and consider other ways to measure the quality of your RAG app's answers.

Presented by Pamela Fox, Python Advocate.

** Part of RAGHack, a free global hackathon to develop RAG applications. Join at https://aka.ms/raghack **

📌 Check out the RAGHack 2024 series here! https://aka.ms/RAGHack2024

#MicrosoftReactor #RAGHack
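To give a feel for the bulk-evaluation idea before the session, here is a minimal, self-contained sketch. It is *not* the Promptflow SDK: the `groundedness_score` and `evaluate_bulk` functions are hypothetical stand-ins, and a simple word-overlap score substitutes for the GPT-based groundedness metric so the example runs without any API keys.

```python
# Hypothetical sketch of bulk RAG evaluation (not the Promptflow SDK).
# A real setup would call a GPT-based evaluator; here, word overlap with
# the retrieved context stands in for "groundedness".

def groundedness_score(answer: str, context: str) -> float:
    """Fraction of answer words that also appear in the retrieved context."""
    answer_words = answer.lower().split()
    context_words = set(context.lower().split())
    if not answer_words:
        return 0.0
    return sum(w in context_words for w in answer_words) / len(answer_words)

def evaluate_bulk(rows):
    """Run the metric over a list of {question, answer, context} rows
    and aggregate, the way a bulk evaluation run reports summary stats."""
    scores = [groundedness_score(r["answer"], r["context"]) for r in rows]
    return {"mean_groundedness": sum(scores) / len(scores), "per_row": scores}

# Tiny synthetic dataset (in practice, generated from your own documents).
data = [
    {"question": "What is RAG?",
     "answer": "RAG grounds LLM answers in retrieved documents",
     "context": "RAG grounds LLM answers in retrieved documents from a domain"},
]
print(evaluate_bulk(data)["mean_groundedness"])
```

The same loop-and-aggregate shape applies when you swap in real GPT metrics (groundedness, fluency, relevance): score each row, then report per-row and summary results.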


Microsoft Reactor
