Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

Apr 20, 2023 · 52 min

Episode description

On this episode, we’re joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of AI-powered tools and solutions for a wide variety of NLP use cases.

We discuss:

- What “attention” means in the context of ML.

- Aidan’s role in the “Attention Is All You Need” paper.

- What state-space models (SSMs) are, and how they could serve as an alternative to transformers.

- What it means for an ML architecture to saturate compute.

- Data constraints that emerge as LLMs scale.

- Challenges of measuring LLM performance.

- How Cohere is positioned within the LLM development space.

- Scaling down an LLM into a more domain-specific model.

- Concerns around synthetic content and AI changing public discourse.

- The importance of raising money at healthy milestones for AI development.

Aidan Gomez - https://www.linkedin.com/in/aidangomez/

Cohere - https://www.linkedin.com/company/cohere-ai/



Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.


Resources:

- https://cohere.ai/

- “Attention Is All You Need”




#OCR #DeepLearning #AI #Modeling #ML
