Triton Inference Server in Azure ML Speeds Up Model Serving | #MVPConnect
Triton Inference Server from NVIDIA is a production-ready deep learning inference server available in Azure Machine Learning.

What will you learn from this session? How to deploy a deep learning model with Triton Inference Server using Azure Machine Learning Managed Endpoints.

Speaker Bio - Ayyanar Jeyakrishnan
Ayyanar Jeyakrishnan has over 17 years of IT experience. He is passionate about learning and sharing knowledge on Azure, AWS, Kubernetes, DevOps, Machine Learning, and the cloud Well-Architected Framework. He actively contributes to the Cloudnloud Tech community, which supports children with cancer. He holds 50+ technical certifications across AWS, Azure, and IBM Cloud, including CKA/CKAD, and is TOGAF Level 2 certified.

Social Handles
LinkedIn - / jayyanar
Twitter - / jayyanar1987

Pre-requisites: https://github.com/triton-inference-s...

[eventID:18997]

Microsoft Reactor