Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, priva...
Dec 22, 2023•36 min•Ep 662•Transcript available on Metacast Today we’re joined by Mike Miller, director of product at AWS responsible for the company’s “edutainment” products. In our conversation with Mike, we explore AWS PartyRock, a no-code generative AI app builder that allows users to easily create fun and shareable AI applications by selecting a model, chaining prompts, and linking together text, image, and chatbot widgets. Additionally, we discuss some of the previous tools Mike’s team has delivered at the intersection of develop...
Dec 18, 2023•30 min•Ep 661•Transcript available on Metacast Today we’re joined by Cody Coleman, co-founder and CEO of Coactive AI. In our conversation with Cody, we discuss how Coactive has leveraged modern data, systems, and machine learning techniques to deliver its multimodal asset platform and visual search tools. Cody shares his expertise in the area of data-centric AI, and we dig into techniques like active learning and core set selection, and how they can drive greater efficiency throughout the machine learning lifecycle. We explore the various wa...
Dec 14, 2023•38 min•Ep 660•Transcript available on Metacast Today we’re joined by Kyle Roche, founder and CEO of Griptape, to discuss patterns and middleware for LLM applications. We dive into the emerging patterns for developing LLM applications, such as off-prompt data, which allows data retrieval without compromising the chain of thought within language models, and pipelines, which are sequences of tasks given to LLMs that can involve a different model for each task or step in the pipeline. We also explore Griptape, an open-source, Python-based mid...
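The pipeline pattern described in this episode, sequential tasks where each step may hand its output to a different model, can be sketched in a few lines of Python. This is an illustrative sketch only, not Griptape's actual API; `call_model`, `run_pipeline`, and the model names are hypothetical stand-ins.

```python
def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would invoke an LLM API for `model`.
    return f"[{model} output for: {prompt}]"

def run_pipeline(steps, user_input: str) -> str:
    """Each step is a (model_name, prompt_template) pair; the output of
    one step becomes the input to the next."""
    text = user_input
    for model, template in steps:
        text = call_model(model, template.format(input=text))
    return text

# A two-step pipeline: summarize, then translate, using different models.
pipeline = [
    ("summarizer-model", "Summarize: {input}"),
    ("translator-model", "Translate to French: {input}"),
]
result = run_pipeline(pipeline, "A long article...")
```

The point of the pattern is that each step is independent, so a cheap model can handle simple steps while a stronger one handles the hard ones.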
Dec 11, 2023•36 min•Ep 659•Transcript available on Metacast Today we’re joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team’s multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team’s work on foundation models for financial data cur...
Dec 04, 2023•42 min•Ep 658•Transcript available on Metacast Today we’re joined by Jay Emery, director of technical sales & architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges faced by organizations when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance as well as the ability and effectiveness of prompting to achieve the desired results versus fine-tuning, and when each approac...
Nov 28, 2023•43 min•Ep 657•Transcript available on Metacast Today we’re joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allow us to better align human perception and computer vision and which remain in use in contemporary generative AI application...
Nov 20, 2023•41 min•Ep 656•Transcript available on Metacast Today we’re joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore factors to consider during data preparation, model development, and ultimately deployment, to ensure a successful project. Factors such as device constraints and latency requirements, which dictate the amount and frequency of data flowing onto the device, are discussed, as are mo...
Nov 13, 2023•39 min•Ep 655•Transcript available on Metacast Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understand...
Nov 06, 2023•48 min•Ep 654•Transcript available on Metacast Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open-source, and incent...
Oct 30, 2023•44 min•Ep 653•Transcript available on Metacast Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages ...
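The k-shot vs. zero-shot comparison in this episode comes down to whether the prompt includes worked examples before the query. A minimal sketch of the two prompt shapes, with hypothetical helper names (no specific LLM API is assumed):

```python
def zero_shot_prompt(task: str, query: str) -> str:
    # Zero-shot: state the task only, with no worked examples.
    return f"{task}\nInput: {query}\nOutput:"

def k_shot_prompt(task: str, examples, query: str) -> str:
    # k-shot: prepend k (input, output) demonstrations before the query,
    # so the model can pattern-match on them during autoregressive decoding.
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n{shots}\nInput: {query}\nOutput:"

examples = [("great movie!", "positive"), ("waste of time", "negative")]
p = k_shot_prompt("Classify sentiment.", examples, "loved it")
```

Both prompts end at `Output:` so that the model's continuation is the answer; the only difference is the demonstrations in between.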
Oct 23, 2023•40 min•Ep 652•Transcript available on Metacast Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models like poor data quality and tokenization, and how they rely on data augmentation and preference training to address these bottlenecks. We also discuss the disadvantages and the motivating factors behind the Mixture of Experts technique, and the importance of common language between ML researchers and hardw...
Oct 16, 2023•1 hr 19 min•Ep 651•Transcript available on Metacast Today we’re joined by Luke Zettlemoyer, professor at University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discretization tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, w...
Oct 09, 2023•39 min•Ep 650•Transcript available on Metacast Today we’re joined by Alex Hanna, the director of research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, incentives driving people towards the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We ...
Oct 02, 2023•49 min•Ep 649•Transcript available on Metacast Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work around personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models using a small set of user-provided images about a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nat...
Sep 25, 2023•44 min•Ep 648•Transcript available on Metacast Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be ...
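Retrieval augmented generation (RAG), discussed in this episode, conditions the model on passages retrieved for the query rather than on its parametric knowledge alone. Below is a toy sketch of the pattern, with simple keyword-overlap scoring standing in for a real retriever; all names are hypothetical and no specific library is assumed.

```python
def retrieve(query: str, docs, k: int = 2):
    # Toy retriever: rank documents by word overlap with the query.
    # Real systems would use embeddings and a vector index instead.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query: str, docs) -> str:
    # Constrain the model to the retrieved context, which is the lever
    # RAG offers against (but does not eliminate) hallucination.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\nQuestion: {query}"

docs = ["Paris is the capital of France.",
        "The Nile is a river in Africa.",
        "France borders Spain."]
prompt = build_rag_prompt("What is the capital of France?", docs)
```

Closed-domain hallucination, the failure mode mentioned above, is when the model's answer contradicts or goes beyond this supplied context.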
Sep 18, 2023•41 min•Ep 647•Transcript available on Metacast Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a...
Sep 11, 2023•59 min•Ep 646•Transcript available on Metacast Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance in various tasks, drawing comparisons between March 2023 and June 2023 for both GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also ...
Sep 04, 2023•42 min•Ep 645•Transcript available on Metacast Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which focuses on the Fourier transform and its relation to group theory, the implementatio...
Aug 28, 2023•45 min•Ep 644•Transcript available on Metacast Today we’re joined by Gokul Swamy, a Ph.D. Student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the “Compl...
Aug 21, 2023•34 min•Ep 643•Transcript available on Metacast Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discusses the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and ...
Aug 14, 2023•38 min•Ep 642•Transcript available on Metacast Today we’re joined by Bayan Bruss, vice president of applied ML research at Capital One. In our conversation with Bayan, we covered a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation in deep learning. We also explore GOAT: A Global Trans...
Aug 07, 2023•39 min•Ep 641•Transcript available on Metacast Today we’re joined by Atul Deo, general manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches to how companies can leverage large language models, dealing with hallucination, and the transformative process of retrieval augmented generation (RAG). Finally, Atul gives us a...
Jul 31, 2023•37 min•Ep 640•Transcript available on Metacast Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we dis...
Jul 24, 2023•37 min•Ep 639•Transcript available on Metacast Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, professor at Northeastern University, and founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and the limitations they have in answering specific causal reasoning questions, while Robert highlights the need for access to weights, training data, and a...
Jul 17, 2023•48 min•Ep 638•Transcript available on Metacast Today we’re joined by Alice Xiang, lead research scientist at Sony AI and global head of AI ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased A...
Jul 10, 2023•38 min•Ep 637•Transcript available on Metacast Today we're joined by Mohit Bansal, Parker Professor and director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results in various vision and language tasks whi...
Jul 03, 2023•48 min•Ep 636•Transcript available on Metacast Today we kick off our coverage of the 2023 CVPR conference joined by Fatih Porikli, a senior director of technology at Qualcomm. In our conversation with Fatih, we covered quite a bit of ground, touching on a total of 12 papers/demos, focusing on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-model and stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language ...
Jun 26, 2023•53 min•Ep 635•Transcript available on Metacast Today we’re joined by Chris Lattner, co-founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space and simplifies things by making the entire stack accessible and understandable to people who are not compiler engineers. It also offers Python programmers the ability to write high-performance code capable of running on accelerators, making it more accessible to more people and researchers. We discuss the rel...
Jun 19, 2023•57 min•Ep 634•Transcript available on Metacast Today we’re joined by Jilei Hou, a VP of engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI, and how his team has worked towards providing these models for use on edge devices. We explore how the distribution of models on devices can help amortize large models' costs while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, Jilei ...
Jun 12, 2023•40 min•Ep 633•Transcript available on Metacast