Generative AI application stack and providing long term memory to LLMs | ODFP612
May 22, 2024
Learn about the role of long-term memory for Large Language Models (LLMs) in building highly performant and cost-effective Generative AI applications, such as Semantic Search, Retrieval-Augmented Generation (RAG), and AI-agent-powered applications. Learn how Microsoft Semantic Kernel, MongoDB Atlas Vector Search, and Search Nodes running on Microsoft Cloud can streamline the process for developers to build enterprise-grade LLM-powered applications.

Speakers:
  • Prakul Agarwal

Session Information: This video is one of many sessions delivered for the Microsoft Build 2024 event. View the full session schedule and learn more about Microsoft Build at https://build.microsoft.com

ODFP612 | English (US) #MSBuild


Microsoft Developer
