
AI Blindspot

Yogendra Miraje · podcasters.spotify.com

AI Blindspot is a podcast that explores the uncharted territories of AI by focusing on its cutting-edge research and frontiers.

This podcast is for researchers, developers, curious minds, and anyone fascinated by the quest to close the gap between human intelligence and machines.

As AI advances at breakneck speed, it has become increasingly difficult to keep up with the progress. This is a human-in-the-loop, AI-hosted podcast.

Episodes

DeepSeek-V3 Technical Deep Dive

DeepSeek-V3 is an open-weights large language model. Its key features include a remarkably low development cost, achieved through innovative techniques like inference-time computing and an auxiliary-loss-free load-balancing strategy. The model's architecture uses Mixture-of-Experts (MoE) and Multi-head Latent Attention (MLA) for efficiency. Extensive testing on various benchmarks demonstrates strong performance comparable to, and in some cases exceeding, leading closed-source models.
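
To make the MoE idea concrete, here is a minimal sketch of top-k expert routing in NumPy. The gating scheme, layer sizes, and expert shapes are illustrative assumptions for exposition, not DeepSeek-V3's actual implementation (which layers MLA and the auxiliary-loss-free balancing strategy on top of this idea).

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route one token through its top-k experts (illustrative sketch)."""
    logits = x @ gate_w                      # score every expert for this token
    top_k = np.argsort(logits)[-k:]          # keep only the k best-scoring experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                 # softmax over the selected experts
    # Only the chosen experts execute, which is how MoE adds capacity cheaply.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [lambda x, W=rng.normal(size=(d, d)): np.tanh(x @ W)
           for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
print(moe_forward(rng.normal(size=d), experts, gate_w).shape)  # (16,)
```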

Feb 05, 2025 · 19 min

Agentic Design Pattern IV - Multi-Agent Collaboration

In today's episode, we discuss two research papers describing two distinct approaches to building multi-agent collaboration:

MetaGPT: a meta-programming framework using SOPs and defined roles for software development. https://arxiv.org/pdf/2308.00352
AutoGen: customizable, conversable agents that interact via natural language or code to build applications. https://arxiv.org/pdf/2308.08155
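
For a feel of what role-based collaboration looks like in code, here is a toy two-agent exchange. The call_llm helper is a hypothetical stand-in for any chat-completion API, and the roles and message flow are illustrative, not the actual MetaGPT or AutoGen interfaces.

```python
def call_llm(system: str, history: list[str]) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return f"[{system.split(',')[0]}] responding to: {history[-1][:40]}"

def run_dialogue(task: str, rounds: int = 2) -> list[str]:
    """Alternate a drafting agent and a critiquing agent on a shared history."""
    engineer = "Engineer, writes code for the given task"
    reviewer = "Reviewer, critiques the engineer's latest draft"
    history = [task]
    for _ in range(rounds):
        history.append(call_llm(engineer, history))  # engineer drafts
        history.append(call_llm(reviewer, history))  # reviewer pushes back
    return history

for message in run_dialogue("Implement a URL shortener"):
    print(message)
```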

Jan 03, 2025 · 19 min

Agentic Design Pattern III - Tool Use

This episode discusses the agentic design pattern Tool Use. Tool use is essential for enhancing the capabilities of LLMs and allowing them to interact effectively with the real world. We discuss the following papers:

Gorilla: Large Language Model Connected with Massive APIs. https://arxiv.org/pdf/2305.15334
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action. https://arxiv.org/pdf/2303.11381
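
A minimal sketch of the loop both papers build on: the model replies either with a direct answer or with a structured tool call that the harness executes and reports back. The JSON protocol, the fake_llm stub, and the tool registry here are illustrative assumptions, not Gorilla's or MM-REACT's actual code.

```python
import json

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model; here it always requests the `add` tool."""
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

def answer(question: str) -> str:
    reply = json.loads(fake_llm(question))
    if "tool" in reply:                       # the model chose to call a tool
        result = TOOLS[reply["tool"]](**reply["args"])
        return f"Tool {reply['tool']} returned {result}"
    return reply["text"]                      # the model answered directly

print(answer("What is 2 + 3?"))  # Tool add returned 5
```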

Dec 20, 2024 · 13 min

Agentic Design Pattern II - Reflection

This episode discusses the AI agentic design pattern "Reflection".

📝 SELF-REFINE
SELF-REFINE is an approach where the LLM generates an initial output, then iteratively reviews and refines it, providing feedback on its own work until the output reaches a desired quality. This self-loop allows the LLM to act as both the creator and the critic, enhancing its output step by step.

🔍 CRITIC
CRITIC leverages external tools, such as search engines and code interpreters, to fact-check and refine LLM-generated outputs...
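
Below is a compact sketch of the SELF-REFINE generate-critique-revise loop, assuming a generic llm completion function; the prompts and the stopping rule are illustrative, not the paper's exact ones.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any text-completion call."""
    return f"draft for: {prompt[:30]}"

def self_refine(task: str, max_iters: int = 3) -> str:
    """The same model drafts, critiques its own draft, and revises."""
    output = llm(f"Task: {task}")
    for _ in range(max_iters):
        feedback = llm(f"Critique this answer to '{task}':\n{output}")
        if "no issues" in feedback.lower():   # model-declared stopping signal
            break
        output = llm(f"Task: {task}\nDraft:\n{output}\n"
                     f"Feedback:\n{feedback}\nRevise the draft.")
    return output

print(self_refine("Summarize the Reflection pattern"))
```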

Dec 02, 2024 · 15 min

Agentic design pattern I - Planning

In this episode, we discuss the following agent architectures (a minimal sketch of the first appears below):

ReAct (Reason + Act): a method that alternates reasoning and actions, creating a powerful feedback loop for decision-making.
Plan and Execute: breaks tasks down into smaller steps before executing them sequentially, improving reasoning accuracy and efficiency; however, it may face higher latency due to the lack of parallel processing.
ReWOO: separates reasoning from observations, improving efficiency, reducing token consumption, and making...
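
Here is that sketch: a bare-bones ReAct loop that interleaves model output with tool observations until the model emits a final answer. The Action:/Final: parsing convention and the scripted model are illustrative assumptions, not the paper's exact trace format.

```python
def react(question, llm, tools, max_steps=5):
    """Alternate model reasoning/actions with tool observations."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)               # model emits a thought/action line
        transcript += step + "\n"
        if step.startswith("Final:"):        # model decided it can answer
            return step.removeprefix("Final:").strip()
        if step.startswith("Action:"):       # e.g. "Action: search[some query]"
            name, _, arg = step.removeprefix("Action:").strip().partition("[")
            observation = tools[name](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"  # feed result back in
    return "no answer within the step budget"

# Scripted model standing in for a real LLM, to show the control flow.
script = iter(["Action: search[capital of France]", "Final: Paris"])
print(react("What is the capital of France?",
            llm=lambda _: next(script),
            tools={"search": lambda q: "Paris is the capital of France."}))
```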

Nov 04, 2024 · 14 min

AI Agents

🤖 AI Agents Uncovered! 🤖

In our latest episode, we're diving deep into the fascinating world of AI agents, focusing specifically on agents powered by Large Language Models (LLMs). These agents are shaping how AI systems can perceive, decide, and act – bringing us closer to the vision of highly adaptable, intelligent assistants.

Key Highlights
AI agents started in philosophy before migrating to computer science and AI. From simple task-specific tools to adaptable LLM-powered agents, their evolution...

Oct 29, 2024 · 15 min

AI Utopia

Dario Amodei's essay, "Machines of Loving Grace," envisions the upside of AI if everything goes right. Could we be on the verge of an AI utopia where technology radically improves the world? Let's find out! 🌍✨

Why discuss AI Utopia?
While many discussions around AI focus on risks, it's equally important to highlight its positive potential. The goal is to balance the narrative by focusing on best-case scenarios while acknowledging the importance of managing risks. It's about striking...

Oct 20, 2024 · 17 min

AI winning Nobel and Alphafold Deep Dive

💡 Nobel Prizes: AI hype or a glimpse into the Singularity? 💡

One of the biggest moments from this year's Nobel announcements was AI's double win!

Nobel in Physics
Geoffrey Hinton and John Hopfield: awarded for their pioneering work on neural networks, integrating physics principles like energy-based models and statistical physics into machine learning.

Nobel in Chemistry
John Jumper and Demis Hassabis: recognized for AlphaFold, which...

Oct 14, 2024 · 16 min

o1-preview: The dawn of AGI?

This episode covers OpenAI Dev Day updates and a 280-page research paper evaluating the o1 model.

Realtime API: build fast speech-to-speech experiences in applications.
Vision Fine-Tuning: fine-tune GPT-4 with images and text to enhance vision capabilities.
Prompt Caching: receive automatic discounts on inputs recently seen by the model.
Distillation: fine-tune cost-efficient models using outputs from larger models.

The research paper we discuss is "Evaluation of OpenAI o1: Opportunities and Challenges of AGI".

Oct 07, 2024 · 24 min

Finetuning vs RAG

Large language models (LLMs) excel at various tasks due to their vast training datasets, but their knowledge can be static and lack domain-specific nuance. Researchers have explored methods like fine-tuning and retrieval-augmented generation (RAG) to address these limitations. Fine-tuning involves adjusting a pre-trained model on a narrower dataset to enhance its performance in a specific domain. RAG, on the other hand, expands LLMs' capabilities, especially in knowledge-intensive tasks,...
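
A minimal sketch of the RAG half of that comparison: retrieve the most relevant passages, then prepend them to the prompt. The toy word-overlap retriever and document list are illustrative; a real system would use a trained embedding model and a vector index.

```python
from collections import Counter

DOCS = [
    "DeepSeek-V3 is an open-weights mixture-of-experts model.",
    "RAG augments prompts with passages retrieved at query time.",
    "Fine-tuning adjusts model weights on a narrower dataset.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count between query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def rag_prompt(query: str, k: int = 2) -> str:
    """Stuff the top-k retrieved passages into the prompt as context."""
    top = sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]
    return "Context:\n" + "\n".join(top) + f"\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("How does RAG augment prompts?"))
```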

Sep 30, 2024 · 9 min

Fixing LLM Hallucinations with Facts

This episode explores how Google researchers are tackling the issue of "hallucinations" in Large Language Models (LLMs) by connecting them to Data Commons, a vast repository of publicly available statistical data: https://datacommons.org/

The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained to generate natural language queries to fetch data from Data Commons, and Retrieval Augmented Generation (RAG), where relevant data tables from Data Commons...
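
A sketch of the RIG idea: the model emits an inline query marker, the harness resolves it against a statistics store, and the verified number is spliced back into the text. The [QUERY(...)] syntax and the STATS table are illustrative assumptions, not Data Commons' actual interface.

```python
import re

# Hypothetical stand-in for Data Commons' statistics.
STATS = {("population", "California"): "39.2 million (2021)"}

def model_output() -> str:
    """Stand-in for an LLM trained to emit inline [QUERY(...)] markers."""
    return "California's population is [QUERY(population, California)]."

def resolve(text: str) -> str:
    """Replace each query marker with the value fetched from the store."""
    def lookup(match: re.Match) -> str:
        key = tuple(part.strip() for part in match.group(1).split(","))
        return STATS.get(key, "[unknown]")
    return re.sub(r"\[QUERY\(([^)]*)\)\]", lookup, text)

print(resolve(model_output()))
# California's population is 39.2 million (2021).
```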

Sep 23, 2024 · 12 min