AI Explained

Technology

AI Explained is a series hosted by Fiddler AI featuring industry experts on the most pressing issues facing AI and machine learning teams. Learn more about Fiddler AI: www.fiddler.ai

Latest episodes of the AI Explained podcast

  1. Building Agents at Scale: Lessons from the Front Lines With Gary Stafford (00:58:14)

    In this episode of AI Explained, we are joined by Gary Stafford, Principal Solutions Architect at AWS. He delves into how enterprises choose between AI/ML and agentic approaches, patterns for multi-agent systems, and the role of MCP. Gary also shares real-world use cases and practical guidance on safety, scaling, and delivering enterprise-ready agent systems.

  2. Lessons Learned from Building Agentic Systems With Jayeeta Putatunda (00:46:53)

    In this episode of AI Explained, we are joined by Jayeeta Putatunda, Director of AI Center of Excellence at Fitch Group. She discusses essential lessons learned from building and deploying AI agent systems, including challenges in moving from concept to production, key evaluation metrics, and the importance of observability and guardrails in ensuring reliable AI systems.

  3. Agent Wars: The Hype, Hope, and Hidden Risks with Nate B. Jones (00:56:06)

    In this episode of AI Explained, we are joined by Nate B. Jones, AI strategist. He explores high-level advice for organizations, technical ideas such as prompting and application architecture, and the current state of agent adoption. Key topics include challenges in building production-ready agents, architectural decisions, and ensuring ROI from these agents.

  4. AI Observability and Security for Agentic Workflows with Karthik Bharathy (00:44:01)

    In this episode of AI Explained, we are joined by Karthik Bharathy, General Manager, AI Ops & Governance for Amazon SageMaker AI at AWS. He discusses the critical aspects of AI security and observability for agentic workflows. He covers the evolution of AI Ops, end-to-end observability, human oversight, the current state of AI in enterprises, and the ways agentic AI systems are transforming business operations. He also dives into the challenges of implementing AI security, evaluating AI decisions, and ensuring transparency and compliance.

  5. GenAI Use Cases and Challenges in Healthcare with Dr. Girish Nadkarni (00:39:47)

    In this episode of AI Explained, we are joined by Dr. Girish Nadkarni from the Icahn School of Medicine at Mount Sinai. He discusses the implementation and impact of AI, specifically generative AI, in healthcare. He covers topics such as clinical implementation, risk prediction, the interplay between predictive and generative AI, the importance of governance and ethical considerations in AI deployment, and the future of personalized medicine.

  6. GRC in Generative AI with Navrina Singh (00:56:43)

    In this episode of AI Explained, we are joined by Navrina Singh, Founder and CEO at Credo AI. We will discuss the comprehensive need for AI governance beyond regulated industries, the core principles of responsible AI, and the importance of AI governance in accelerating business innovation. The conversation also covers the challenges companies face when implementing responsible AI practices and dives into the latest regulations like the EU AI Act and state-specific laws in the U.S.

  7. Inference, Guardrails, and Observability for LLMs with Jonathan Cohen (00:53:10)

    In this episode of AI Explained, we are joined by Jonathan Cohen, VP of Applied Research at NVIDIA. We will explore the intricacies of NVIDIA's NeMo platform and its components like NeMo Guardrails and NIM. Jonathan explains how these tools help in deploying and managing AI models with a focus on observability, security, and efficiency. They also explore topics such as the evolving role of AI agents, the importance of guardrails in maintaining responsible AI, and real-world examples of successful AI deployments in enterprises like Amdocs. Listeners will gain insights into NVIDIA's AI strategy and the practical aspects of deploying large language models in various industries.

  8. What the EU AI Act Really Means with Kevin Schawinski (00:45:47)

    On this episode, we’re joined by Kevin Schawinski, CEO and Co-Founder at Modulos AG. The EU AI Act was passed to redefine the landscape for AI development and deployment in Europe. But what does it really mean for enterprises, AI innovators, and industry leaders? Schawinski will share actionable insights to help organizations stay ahead of the EU AI Act, and discuss the risk implications of meeting transparency requirements while advancing responsible AI practices.

  9. Productionizing GenAI at Scale with Robert Nishihara (00:48:29)

    In this episode, we’re joined by Robert Nishihara, Co-founder and CEO at Anyscale. Enterprises are harnessing the full potential of GenAI across various facets of their operations to enhance productivity, drive innovation, and gain a competitive edge. However, scaling production GenAI deployments can be challenging due to the need for evolving AI infrastructure, approaches, and processes that can support advanced GenAI use cases. Nishihara will discuss reliability challenges, building the right AI infrastructure, and implementing the latest practices in productionizing GenAI at scale.

  10. Metrics to Detect Hallucinations with Pradeep Javangula (00:58:39)

    In this episode, we’re joined by Pradeep Javangula, Chief AI Officer at RagaAI. Deploying LLM applications for real-world use cases requires a comprehensive workflow to ensure they generate high-quality and accurate content. Testing, fixing issues, and measuring impact are critical steps of that workflow, helping LLM applications deliver value. Javangula will discuss strategies and practical approaches organizations can follow to maintain high-performing, correct, and safe LLM applications.

  11. AI Safety and Alignment with Amal Iyer (00:57:17)

    In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI. Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. However, this progress emphasizes the importance of aligning AI with human values to ensure its safe and beneficial societal integration. In this talk, we will provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness, and interpretability.

  12. Managing the Risks of Generative AI with Kathy Baxter (00:57:21)

    On this episode, we’re joined by Kathy Baxter, Principal Architect of Responsible AI & Tech at Salesforce. Generative AI has become widely popular, with organizations finding ways to drive innovation and business growth. The adoption of generative AI, however, remains low due to ethical implications and unintended consequences that negatively impact the organization and its consumers. Baxter will discuss ethical AI practices organizations can follow to minimize potential harms and maximize the social benefits of AI.

  13. Legal Frontiers of AI with Patrick Hall (00:58:49)

    On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI. We will delve into critical aspects of AI, such as model risk management, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.

  14. Building Generative AI Applications for Production with Chaoyu Yang (00:59:07)

    On this episode, we’re joined by Chaoyu Yang, Founder and CEO at BentoML. AI-forward enterprises across industries are building generative AI applications to transform their businesses. While AI teams need to consider several factors ranging from ethical and social considerations to overall AI strategy, technical challenges remain in deploying these applications into production. Yang will explore key aspects of generative AI application development and deployment.

  15. Graph Neural Networks and Generative AI with Jure Leskovec (00:52:05)

    On this episode, we’re joined by Jure Leskovec, Stanford professor and co-founder at Kumo.ai. Graph neural networks (GNNs) are gaining popularity in the AI community, helping ML teams build advanced AI applications that provide deep insights into real-world problems. Leskovec, whose work sits at the intersection of graph neural networks, knowledge graphs, and generative AI, will explore how organizations can incorporate GNNs into their generative AI initiatives.

  16. Machine Learning for High Risk Applications with Parul Pandey (00:54:40)

    On this episode, we’re joined by Parul Pandey, Principal Data Scientist at H2O.ai and co-author of Machine Learning for High-Risk Applications. Although AI is being widely adopted, it poses several adversarial risks that can be harmful to organizations and users. Listen to this episode to learn how data scientists and ML practitioners can improve AI outcomes with proper model risk management techniques.

  17. AI Safety in Generative AI with Peter Norvig (00:39:58)

    On this episode, we’re joined by Peter Norvig, a Distinguished Education Fellow at the Stanford Institute for Human-Centered AI and co-author of popular books on AI, including Artificial Intelligence: A Modern Approach and more recently, Data Science in Context. AI has the potential to improve humanity’s quality of life and day-to-day decisions. However, these advancements come with their own challenges that can cause harm. Listen to this episode to learn considerations and best practices organizations can take to preserve human control and ensure transparent and equitable AI.
