Explainable AI (XAI) Course: Explainable AI in practice
Apr 18, 2023
The XAI course provides a comprehensive overview of explainable AI, covering both theory and practice, and exploring various use cases for explainability. Participants will learn not only how to generate explanations, but also how to evaluate them and communicate them effectively to diverse stakeholders. The course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants. It is designed for data scientists with at least two years of hands-on industry experience with machine learning and Python, and a basic background in deep learning. Some sessions will be held in person at the Microsoft Reactor in Tel Aviv, while others will be conducted virtually.

Course Leaders: Bitya Neuhof, DataNights; Yasmin Bokobza, Microsoft

What is this session about? How do you properly incorporate explanations into machine learning projects, and what aspects should you keep in mind? Over the past few years, the need to explain the output of machine learning models has received growing attention. Explanations not only reveal the reasons behind a model's predictions and increase users' trust in the model; they can also serve other purposes. To fully utilize explanations and incorporate them into machine learning projects, the following aspects should be taken into consideration: the explanation's goal, the explanation method, and the explanation's quality. In this talk, we will discuss how to select the appropriate explanation method based on the intended purpose of the explanation. Then, we will present two approaches for evaluating explanations, including practical examples of evaluation metrics, while highlighting the importance of assessing explanation quality. Next, we will examine the various purposes explanations can serve, along with the stage of the machine learning pipeline at which each explanation should be incorporated. Finally, we will present a real Microsoft use case of classifying scripts as malware-related and show how high-dimensional explanations can be beneficial in this context. [eventID:18410]
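To make the "generate, then evaluate" workflow described above concrete, here is a minimal sketch, not taken from the session itself: it assumes SHAP as the explanation method and a simple perturbation-based faithfulness check as the evaluation metric (the dataset, model, and function names are illustrative only).

```python
# Minimal sketch: generate SHAP explanations for a tree model, then evaluate
# their quality with a perturbation-based faithfulness check.
# Assumes the `shap` and `scikit-learn` packages are installed.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# 1) Generate explanations: one SHAP attribution per feature per sample.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# 2) Evaluate explanation quality: replace each sample's top-k attributed
#    features with the training mean and measure how much the prediction
#    moves. Larger shifts suggest the explanation points at features the
#    model actually relies on. (Illustrative metric, not the talk's.)
def perturbation_shift(model, X, attributions, baseline, k=3):
    preds = model.predict(X)
    X_masked = X.copy()
    top_k = np.argsort(-np.abs(attributions), axis=1)[:, :k]
    for i, cols in enumerate(top_k):
        X_masked[i, cols] = baseline[cols]
    return float(np.mean(np.abs(preds - model.predict(X_masked))))

baseline = X_train.mean(axis=0)
print("Mean |prediction shift| after masking top-3 features:",
      perturbation_shift(model, X_test, attributions, baseline))
```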

Microsoft Reactor
