Scott and Mark learn responsible AI | BRK329
Nov 25, 2024
Join Mark Russinovich and Scott Hanselman as they explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection), examining each risk's origins, potential impacts, and mitigation strategies, and how to harness the immense potential of LLMs while responsibly managing their inherent risks. To learn more, please check out these resources:

Speakers:
  • Mark Russinovich
  • Scott Hanselman
š—¦š—²š˜€š˜€š—¶š—¼š—» š—œš—»š—³š—¼š—æš—ŗš—®š˜š—¶š—¼š—»: This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com BRK329 | English (US) | Security #MSIgnite

