Feb 27, 2024 • #ArmchairArchitects #AzureEnablementShow
In this second part of a two-part episode on the dangers of AI and how to deal with them in the context of large language models (LLMs), David is joined by our #ArmchairArchitects, Uli and Eric (@mougue). Their conversation covers how LLMs differ from traditional models, the challenge of trusting the model architecture, ethical considerations including bias, privacy, governance, and accountability, plus a look at practical steps such as reporting, analytics, and data visualization.
Be sure to watch Armchair Architects: The Danger Zone (Part 1) https://aka.ms/azenable/146 before watching this episode of the #AzureEnablementShow.
Resources
• Microsoft Azure AI Fundamentals: Generative AI https://learn.microsoft.com/training/...
• Responsible and trusted AI https://learn.microsoft.com/azure/clo...
• Architectural approaches for AI and ML in multitenant solutions https://learn.microsoft.com/azure/arc...
• Training: AI engine…