This week Dr. Tim Scarfe, Dr. Keith Duggar and Connor Leahy chat with Prof. Karl Friston. Professor Friston is a British neuroscientist at University College London and an authority on brain imaging. In 2016 he was ranked the most influential neuroscientist on Semantic Scholar. His main contribution to theoretical neurobiology is the variational free energy principle, also known as active inference in the Bayesian brain. The FEP is a formal statement that the existential imperative for any...
Dec 13, 2020•2 hr 51 min•Season 1Ep. 32
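An aside for readers new to Friston's framework: a minimal statement of the variational free energy that the FEP refers to, in standard textbook notation (not taken from the episode):

```latex
% Variational free energy F for observations o, hidden states s,
% a generative model p(o, s) and an approximate posterior q(s):
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o)
```

Because the KL term is non-negative, F upper-bounds surprise, -ln p(o); minimising free energy therefore both improves the posterior approximation and implicitly minimises surprise.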
This week Dr. Tim Scarfe, Sayak Paul and Yannic Kilcher speak with Dr. Simon Kornblith from Google Brain (Ph.D. from MIT). Simon is trying to understand how neural nets do what they do. Simon was the second author on the seminal Google AI SimCLR paper. We also cover "Do Wide and Deep Networks Learn the Same Things?", "What's in a Loss Function for Image Classification?", and "Big Self-Supervised Models Are Strong Semi-Supervised Learners". Simon used to be a neuroscientist and also gives us ...
Dec 06, 2020•2 hr 30 min•Season 1Ep. 32
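As a rough illustration of the contrastive objective behind SimCLR, here is a minimal NumPy sketch of the NT-Xent loss for a single positive pair; the shapes and temperature value are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def nt_xent_pair(z, i, j, tau=0.5):
    """NT-Xent loss for the positive pair (i, j) in a batch of
    embeddings z with shape (2N, d), two augmented views per image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity prep
    sim = z @ z.T / tau                               # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-comparisons
    # -log softmax of the positive similarity over all other candidates for i
    return -sim[i, j] + np.log(np.sum(np.exp(sim[i])))
```

The full SimCLR objective averages this loss over every positive pair in the batch.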
In this special edition, Dr. Tim Scarfe, Yannic Kilcher and Keith Duggar speak with Gary Marcus and Connor Leahy about GPT-3. We have all had a significant amount of time to experiment with GPT-3, and we show you demos of it in use along with the considerations it raises. Note that this podcast version is significantly truncated; watch the YouTube version for the TOC and experiments with GPT-3 https://www.youtube.com/watch?v=iccd86vOz3w
Nov 28, 2020•3 hr 44 min•Season 1Ep. 31
This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic Kilcher discuss multi-armed bandits and pure exploration with Dr. Wouter M. Koolen, Senior Researcher, Machine Learning group, Centrum Wiskunde & Informatica. Wouter specialises in machine learning theory, game theory, information theory, statistics and optimisation. Wouter is currently interested in pure exploration in multi-armed bandit models, game tree search, and accelerated learning in sequential decision problems. His research has be...
Nov 20, 2020•2 hr 48 min•Season 1Ep. 30
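For listeners new to bandits, below is a minimal UCB1 sketch, the classic regret-minimisation baseline, on hypothetical Bernoulli arms; Wouter's pure-exploration setting instead asks how quickly the best arm can be identified with high confidence, rather than how to maximise cumulative reward:

```python
import numpy as np

def ucb1(means, horizon=10_000, seed=0):
    """Play Bernoulli arms with the given true means using UCB1."""
    rng = np.random.default_rng(seed)
    n_arms = len(means)
    counts = np.zeros(n_arms)
    rewards = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                     # play each arm once to initialise
        else:
            ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))       # optimism in the face of uncertainty
        rewards[arm] += rng.random() < means[arm]
        counts[arm] += 1
    return counts                           # pulls concentrate on the best arm

print(ucb1([0.3, 0.5, 0.7]))
```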
This week Dr. Tim Scarfe, Dr. Keith Duggar, Yannic Kilcher and Connor Leahy cover a broad range of topics: academia, GPT-3 and whether prompt engineering could be the next in-demand skill, markets and economics (including trading and whether you can predict the stock market), AI alignment, utilitarian philosophy, randomness and intelligence, and even whether the universe is infinite! 00:00:00 Show Introduction 00:12:49 Academia and doing a Ph.D 00:15:49 From academia ...
Nov 08, 2020•2 hr 51 min•Season 1Ep. 29
This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic Kilcher speak with veteran NLU expert Dr. Walid Saba. Walid is an old-school AI expert. He is a polymath: a neuroscientist, psychologist, linguist, philosopher, statistician, and logician. He thinks the missing information problem and the lack of a typed ontology are the key issues with NLU, not sample efficiency or generalisation. He is a big critic of the deep learning movement and BERTology. We also cover GPT-3 in so...
Nov 04, 2020•2 hr 21 min•Season 1Ep. 28
This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer from Aleph Alpha and founder of EleutherAI. Connor believes that AI alignment is philosophy with a deadline, that we are on the precipice, and that the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity or intelligence explosion is near. Connor says that AGI is like climate change but worse, even harder...
Nov 01, 2020•2 hr 5 min•Season 1Ep. 27
Join Dr. Tim Scarfe, Sayak Paul, Yannic Kilcher, and Alex Stenlake as they have a conversation with Mr. Chai Time Data Science, Sanyam Bhutani! 00:00:00 Introduction 00:03:42 Show kick off 00:06:34 How did Sanyam get started in ML 00:07:46 Being a content creator 00:09:01 Can you be self-taught without a formal education in ML? 00:22:54 Kaggle 00:33:41 H2O product / job 00:40:58 Interpretability / bias / engineering skills 00:43:22 Get that first job in DS...
Oct 28, 2020•1 hr 27 min•Season 1Ep. 26
Dr. Tim Scarfe, Yannic Kilcher and Sayak Paul chat with Sara Hooker from the Google Brain team! We discuss her recent hardware lottery paper, pruning / sparsity, bias mitigation and interpretability. The hardware lottery -- what causes inertia or friction in the marketplace of ideas? Is there a meritocracy of ideas or do the previous decisions we have made enslave us? Sara Hooker calls this a lottery because she feels that machine learning progress is entirely beholden to the hardware and ...
Oct 20, 2020•2 hr 31 min•Season 1Ep. 25
This week Dr. Tim Scarfe, Yannic Kilcher, and Keith Duggar have a conversation with Dr. Rebecca Roache in the last of our 3-part series on the Social Dilemma Netflix film. Rebecca is a senior lecturer in philosophy at Royal Holloway, University of London and has written extensively about the future of friendship. People claim that friendships are not what they used to be. People are always staring at their phones, even when in public. Social media has turned us into narcissists w...
Oct 11, 2020•1 hr 16 min•Season 1Ep. 23
In this first part of our three-part series on the Social Dilemma Netflix film, Dr. Tim Scarfe, Yannic "Lightspeed" Kilcher and Zak Jost gang up with cybersecurity expert Andy Smith. We give you our take on the film. We are super excited to get your feedback on this one! Hope you enjoy. 00:00:00 Introduction 00:06:11 Moral hypocrisy 00:12:38 Road to hell is paved with good intentions, attention economy 00:15:04 They know everything about you 00:18:02 Addiction 00:21:22 Differ...
Oct 03, 2020•1 hr 7 min•Season 1Ep. 21
In today's episode, Dr. Keith Duggar, Alex Stenlake and Dr. Tim Scarfe chat about the education chapter in Kenneth Stanley's book "Why Greatness Cannot Be Planned", and we relate it to our Algoshambles conversation a few weeks ago. We debate whether objectives in education are a good thing and whether they cause perverse incentives and stifle creativity and innovation. Next up we dissect capsule networks from the top down! We finish off talking about fast algorithms and quantum computing. 00:00:00 In...
Sep 29, 2020•1 hr 24 min•Season 1Ep. 20
This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic "Lightspeed" Kilcher have a conversation with Microsoft Senior Software Engineer Sachin Kundu. We speak about programming languages, including our favourites, and functional programming vs OOP. Next we speak about software engineering and the intersection of software engineering and machine learning. We also talk about applications of ML and finally what makes an exceptional software engineer and tech lead. Sachin is an expert in this fi...
Sep 25, 2020•1 hr 24 min•Season 1Ep. 20
This week Dr. Keith Duggar, Alex Stenlake and Dr. Tim Scarfe discuss the theory of computation, intelligence, Bayesian model selection, the intelligence explosion and the phenomenon of "interactive articles". 00:00:00 Intro 00:01:27 Kernels and context-free grammars 00:06:04 Theory of computation 00:18:41 Intelligence 00:22:03 Bayesian model selection 00:44:05 AI-IQ Measure / Intelligence explosion 00:52:09 Interactive articles 01:12:32 Outro...
Sep 22, 2020•1 hr 14 min•Season 1Ep. 19
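For reference, the Bayesian model selection discussed at 00:22:03 compares models by their posterior probability via the marginal likelihood (standard textbook form, not a formula from the episode):

```latex
% Posterior over candidate models M_k given data D:
p(M_k \mid D) \propto p(D \mid M_k)\, p(M_k),
\qquad
p(D \mid M_k) = \int p(D \mid \theta, M_k)\, p(\theta \mid M_k)\, d\theta
```

Integrating out the parameters automatically penalises over-flexible models, the so-called Bayesian Occam's razor.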
Today Yannic Lightspeed Kilcher and I spoke with Alex Stenlake about Kernel Methods. What is a kernel? Do you remember those weird kernel things which everyone obsessed about before deep learning? What about the representer theorem and reproducing kernel Hilbert spaces? SVMs and kernel ridge regression? Remember them?! Hope you enjoy the conversation! 00:00:00 Tim Intro 00:01:35 Yannic clever insight from this discussion 00:03:25 Street talk and Alex intro 00:05:06 How kernels are taugh...
Sep 18, 2020•2 hr 37 min•Season 1Ep. 18
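As a concrete companion to the kernel discussion, here is a minimal kernel ridge regression sketch with an RBF kernel; the closed form for the coefficients follows from the representer theorem. The toy data and hyperparameters are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(X, y, lam=1e-2):
    # Representer theorem: the minimiser is f(x) = sum_i alpha_i k(x, x_i),
    # with alpha = (K + lam * I)^(-1) y available in closed form.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X).ravel()
alpha = kernel_ridge_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha      # predictions at the training points
print(np.abs(y_hat - y).max())        # small residual on this toy problem
```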
This week Dr. Tim Scarfe and Dr. Keith Duggar discuss Explainability, Reasoning, Priors and GPT-3. We check out Christoph Molnar's book on interpretability, talk about priors vs experience in NNs, whether NNs are reasoning, and also cover articles by Gary Marcus and Walid Saba critiquing deep learning. We finish with a brief discussion of Chollet's ARC challenge and intelligence paper. 00:00:00 Intro 00:01:17 Explainability and Christoph Molnar's book on Interpretability 00:26:45 Explainabilit...
Sep 16, 2020•1 hr 26 min•Season 1Ep. 18
This week Dr. Tim Scarfe, Yannic Lightspeed Kilcher, Sayak Paul and Ayush Thakur interview Mathilde Caron from Facebook AI Research (FAIR). We discuss Mathilde's paper, written with her collaborators, "SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments" @ https://arxiv.org/pdf/2006.09882.pdf This paper presents the latest unsupervised contrastive visual representation learning algorithm, featuring a new data augmentation strategy and a new online clustering strategy.
Sep 14, 2020•1 hr 28 min•Season 1Ep. 17
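The online clustering in SwAV hinges on a few Sinkhorn-Knopp iterations that turn prototype scores into balanced soft cluster assignments. Below is a rough NumPy transcription of that idea, following the pseudocode in the paper; epsilon and the iteration count mirror the paper's defaults but are used here purely illustratively:

```python
import numpy as np

def sinkhorn_assignments(scores, eps=0.05, n_iters=3):
    """Map a (batch, n_prototypes) score matrix to soft cluster
    assignments whose mass is balanced across prototypes."""
    Q = np.exp(scores / eps).T                   # (n_prototypes, batch)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True) * K    # equalise prototype mass
        Q /= Q.sum(axis=0, keepdims=True) * B    # normalise per sample
    return (Q * B).T                             # each row sums to 1
```

Balancing the assignments prevents the degenerate solution where every sample collapses onto a single prototype.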
This week Dr. Tim Scarfe, Dr. Keith Duggar and Yannic "Lightspeed" Kilcher respond to the "Algoshambles" exam fiasco in the UK where the government were forced to step in to standardise the grades which were grossly inflated by the schools. The schools and teachers are all paid on metrics related to the grades received by students, what could possibly go wrong?! The result is that we end up with grades which have lost all their value and students are coached for the exams and don't actuall...
Sep 07, 2020•2 hr 35 min•Season 1Ep. 16
This week we spoke with Sayak Paul, who is extremely active in the machine learning community. We discussed the AI landscape in India, unsupervised representation learning, data augmentation and contrastive learning, explainability, abstract scene representations and finally pruning and the recent superposition paper. I really enjoyed this conversation and I hope you folks do too! 00:00:00 Intro to Sayak 00:17:50 AI landscape in India 00:24:20 Unsupervised representation learning 00:26:11 DAT...
Jul 17, 2020•2 hr 36 min•Season 1Ep. 15
We speak with Robert Lange! Robert is a PhD student at the Technical University Berlin. His research combines Deep Multi-Agent Reinforcement Learning and Cognitive Science to study the learning dynamics of large collectives. He has a brilliant blog where he distils and explains cutting edge ML research. We spoke about his story, economics, multi-agent RL, intelligence and AGI, and his recent article summarising the state of the art in neural network pruning. Robert's article on pruning in ...
Jul 08, 2020•2 hr 46 min•Season 1Ep. 14
We welcome Zak Jost from the WelcomeAIOverlords channel. Zak is an ML research scientist at Amazon. He has a great blog at http://blog.zakjost.com and also a Discord channel at https://discord.gg/xh2chKX WelcomeAIOverlords: https://www.youtube.com/channel/UCxw9_WYmLqlj5PyXu2AWU_g 00:00:00 INTRO START 00:01:07 MAIN SHOW START 00:01:59 ZAK'S STORY 00:05:06 YOUTUBE DISCUSSION 00:24:12 UNDERSTANDING PAPERS 00:29:53 CONTRASTIVE LEARNING INTRO 00:33:00 BRING YOUR OWN LATENT PAPER 01:03:13 GRAPHS...
Jun 30, 2020•2 hr 58 min•Season 1Ep. 14
In this episode of Machine Learning Street Talk Dr. Tim Scarfe, Yannic Kilcher and Connor Shorten spoke with Marie-Anne Lachaux, Baptiste Roziere and Dr. Guillaume Lample from Facebook AI Research (FAIR) in Paris. They recently released the paper "Unsupervised Translation of Programming Languages", an exciting new approach to learned translation of programming languages (a learned transcoder) using an unsupervised encoder trained on individual monolingual corpora, i.e. no parallel language da...
Jun 24, 2020•1 hr 3 min•Season 1Ep. 12
We cover Francois Chollet's recent paper. Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We ...
Jun 19, 2020•3 hr 34 min•Season 1Ep. 11
In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher and Connor Shorten discuss their takeaways from OpenAI’s GPT-3 language model. With the help of Microsoft’s ZeRO-2 / DeepSpeed optimiser, OpenAI trained a 175 BILLION parameter autoregressive language model. The paper demonstrates how self-supervised language modelling at this scale can perform many downstream tasks without fine-tuning. 00:00:00 Intro 00:00:54 ZeRO1+2 (model + Data parallelism) (Connor) 00:03:17 Recent ...
Jun 06, 2020•2 hr 52 min•Season 1Ep. 9
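For context, "autoregressive language model" simply means the model factorises text left-to-right and is trained to maximise next-token likelihood (the standard formulation):

```latex
% Autoregressive factorisation and training loss over tokens x_1 .. x_T:
p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),
\qquad
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
```

The paper's headline claim is that scaling this single objective far enough yields models that can perform many downstream tasks without any fine-tuning.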
This week we had a super insightful conversation with Jordan Edwards, Principal Program Manager for the AzureML team! Jordan is on the coalface of turning machine learning software engineering into a reality for some of Microsoft's largest customers. ML DevOps is all about increasing the velocity of, and orchestrating the non-interactive phase of, software deployments for ML. We cover ML DevOps and Microsoft Azure ML. We discuss model governance, testing, interpretability, tooli...
Jun 03, 2020•1 hr 13 min•Season 1Ep. 9
*Note this is an episode from Tim's Machine Learning Dojo YouTube channel. Join Eric Craeymeersch for a wonderful discussion all about ML engineering, computer vision, siamese networks, contrastive loss, one-shot learning and metric learning. 00:00:00 Introduction 00:11:47 ML Engineering Discussion 00:35:59 Intro to the main topic 00:42:13 Siamese Networks 00:48:36 Mining strategies 00:51:15 Contrastive Loss 00:57:44 Triplet loss paper 01:09:35 Quadruplet loss paper 01:25:49 Eric's Quadl...
Jun 02, 2020•2 hr 29 min•Season 1Ep. 8
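For reference, a minimal NumPy sketch of the classic pairwise contrastive loss discussed in the episode (Hadsell-style; the margin value is an illustrative assumption):

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pairwise contrastive loss for embedding batches z1, z2 of shape (N, d);
    same[i] = 1 if pair i shares a class, else 0."""
    d = np.linalg.norm(z1 - z2, axis=1)                  # Euclidean distances
    pos = same * d ** 2                                  # pull positives together
    neg = (1 - same) * np.maximum(0.0, margin - d) ** 2  # push negatives beyond margin
    return np.mean(pos + neg)
```

The triplet and quadruplet losses covered later in the timestamps generalise this idea from absolute distances on pairs to relative distances among three or four examples.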
In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher and Connor Shorten interviewed Harri Valpola, CEO and Founder of Curious AI. We continued our discussion of System 1 and System 2 thinking in Deep Learning, as well as miscellaneous topics around Model-based Reinforcement Learning. Dr. Valpola describes some of the challenges of modelling industrial control processes such as water sewage filters and paper mills with the use of model-based RL. Dr. Valpola and his collabor...
May 25, 2020•2 hr 38 min•Season 1Ep. 7
In this episode of Machine Learning Street Talk, Tim Scarfe, Connor Shorten and Yannic Kilcher react to Yoshua Bengio’s ICLR 2020 Keynote “Deep Learning Priors Associated with Conscious Processing”. Bengio takes on many future directions for research in Deep Learning such as the role of attention in consciousness, sparse factor graphs and causality, and the study of systematic generalization. Bengio also presents big ideas in Intelligence that border on the line of philosophy and practical machi...
May 22, 2020•3 hr 34 min•Season 1Ep. 6
This week Connor Shorten, Yannic Kilcher and Tim Scarfe reacted to Yann LeCun's keynote speech at this year's ICLR conference, which has just passed. ICLR is the number two ML conference and was completely open this year, with all the sessions publicly accessible via the internet. Yann spent most of his talk speaking about self-supervised learning, energy-based models (EBMs) and manifold learning. Don't worry if you hadn't heard of EBMs before; neither had we! Thanks for watching! Please Subscribe! P...
May 19, 2020•2 hr 12 min•Season 1Ep. 5
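If EBMs are new to you too, the one-line definition: an energy-based model scores configurations with an energy function, and normalising that score induces a Gibbs distribution:

```latex
% Energy-based model with energy E_theta and partition function Z_theta:
p_\theta(x) = \frac{e^{-E_\theta(x)}}{Z_\theta},
\qquad
Z_\theta = \int e^{-E_\theta(x)}\, dx
```

Much of the practical difficulty with EBMs comes from Z_theta being intractable, which motivates training schemes that avoid computing it.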
In this episode of Machine Learning Street Talk, we chat with Jonathan Frankle, author of The Lottery Ticket Hypothesis. Frankle has continued researching Sparse Neural Networks, Pruning, and Lottery Tickets leading to some really exciting follow-on papers! This chat discusses some of these papers, such as Linear Mode Connectivity and Comparing Rewinding and Fine-tuning in Neural Network Pruning, and more (full list of papers linked below). We also chat about how Jonathan got into Deep Learning ...
May 19, 2020•1 hr 27 min
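As a small illustration of the pruning at the heart of the lottery ticket work, here is a one-shot global magnitude-pruning mask in NumPy; the iterative rewind-and-retrain loop from the papers is deliberately omitted, and the layer shapes are hypothetical:

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity=0.8):
    """Zero out the `sparsity` fraction of weights with smallest |w|,
    chosen globally across all layers. Returns one 0/1 mask per layer."""
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(flat, sparsity)   # global magnitude cutoff
    return [(np.abs(w) >= threshold).astype(w.dtype) for w in weights]

layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
masks = magnitude_prune_mask(layers, sparsity=0.8)
kept = sum(m.sum() for m in masks) / sum(m.size for m in masks)
print(f"fraction of weights kept: {kept:.2f}")   # ~0.20
```

In the lottery ticket procedure, the weights surviving the mask are then rewound to their early-training values and retrained.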