Machine Learning Street Talk (MLST)

podcasters.spotify.com
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Ph.D. Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).

Episodes

#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST Patreon: https://www.patreon.com/mlst For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of Machine Learning today. He believes they can solve previously intractable problems and hold the key to unlocking machine abstract reasoning itself. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them. Panel: Dr. Tim Scarfe, Dr. Keith...

Feb 22, 2022 · 2 hr 33 min · Season 1 · Ep. 63

#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics

Dr. Guy Emerson is a computational linguist who obtained his Ph.D. from Cambridge University, where he is now a research fellow and lecturer. On the panel we also have myself, Dr. Tim Scarfe, as well as Dr. Keith Duggar and the veritable Dr. Walid Saba. We dive into distributional semantics, probability theory, fuzzy logic, grounding, vagueness and the grammar/cognition connection. The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of wo...
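
As a toy illustration of the distributional idea (our own sketch, not Dr. Emerson's actual models): words are represented by their co-occurrence counts, and words that appear in similar contexts end up with similar vectors.

```python
# A minimal sketch of distributional semantics: build word vectors from
# sentence-level co-occurrence counts, then compare them with cosine
# similarity. The corpus and vocabulary here are illustrative toys.
from collections import Counter
from itertools import combinations
import math

corpus = [
    "dogs chase cats",
    "dogs chase rabbits",
    "wolves chase rabbits",
    "people drink coffee",
    "people drink tea",
    "students drink coffee",
]

cooc = Counter()
for sentence in corpus:
    for w1, w2 in combinations(sentence.split(), 2):
        cooc[(w1, w2)] += 1
        cooc[(w2, w1)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# "cats" and "rabbits" share contexts (dogs, chase) and come out similar;
# "cats" and "coffee" share no contexts and come out orthogonal.
print(cosine(vector("cats"), vector("rabbits")))   # ~0.87
print(cosine(vector("cats"), vector("coffee")))    # 0.0
```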

Feb 03, 2022 · 2 hr 30 min · Season 1 · Ep. 62

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST Patreon: https://www.patreon.com/mlst Yann LeCun thinks that it's specious to say neural network models are interpolating, because in high dimensions everything is extrapolation. Recently Dr. Randall Balestriero, Dr. Jerome Pesenti and Prof. Yann LeCun released their paper "Learning in High Dimension Always Amounts to Extrapolation". This discussion has completely changed how we think about neural net...
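
The paper defines interpolation as a test point falling inside the convex hull of the training set. A minimal sketch of that definition (our own toy experiment, not the paper's code) shows why interpolation becomes vanishingly rare as dimension grows:

```python
# Hull membership is a linear feasibility problem: x interpolates the
# training points P iff x is a convex combination of the rows of P.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, P):
    """True iff x is a convex combination of the rows of P."""
    n = P.shape[0]
    A_eq = np.vstack([P.T, np.ones((1, n))])  # sum_i w_i p_i = x, sum_i w_i = 1
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.success

rng = np.random.default_rng(0)
for d in (2, 4, 8, 16, 32):
    P = rng.standard_normal((500, d))         # 500 training points
    hits = sum(in_convex_hull(rng.standard_normal(d), P) for _ in range(100))
    print(f"d={d:2d}: {hits}/100 test points interpolate")  # falls towards 0
```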

Jan 04, 2022 · 3 hr 20 min · Season 1 · Ep. 61

#60 Geometric Deep Learning Blueprint (Special Edition)

Patreon: https://www.patreon.com/mlst The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or fea...

Sep 19, 2021 · 4 hr 33 min · Season 1 · Ep. 60

#59 - Jeff Hawkins (Thousand Brains Theory)

Patreon: https://www.patreon.com/mlst The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges. Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our b...

Sep 03, 2021 · 3 hr 35 min · Season 1 · Ep. 59

#58 Dr. Ben Goertzel - Artificial General Intelligence

The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing “thinking machines” - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but act and think with intelligence equal to and ultimately greater than that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots. Dr. Ben Goertzel is an artificial intelligence researcher, CEO and founder of...

Aug 11, 2021 · 2 hr 28 min · Season 1 · Ep. 58

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.  Professor Melanie Mitchell t...

Jul 25, 2021 · 3 hr 31 min · Season 1 · Ep. 57

#56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

It has been over three decades since the statistical revolution took AI by storm and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, the disappointing progress in conversational agents, NLU, and self-driving cars has made it clear that progress has not lived up to the promise of these empirical and data-driven methods. DARPA has suggested that it is time for a third wave in AI, one that would be characterized by ...

Jul 08, 2021 · 1 hr 11 min · Season 1 · Ep. 56

#55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR)

Dr. Ishan Misra is a Research Scientist at Facebook AI Research where he works on Computer Vision and Machine Learning. His main research interest is reducing the need for human supervision, and indeed, human knowledge in visual learning systems. He finished his PhD at the Robotics Institute at Carnegie Mellon. He has done stints at Microsoft Research, INRIA and Yale. He earned his bachelor's degree in computer science, achieving the highest GPA in his cohort. Ishan is fast becoming a prolific sci...

Jun 21, 2021 · 2 hr 36 min · Season 1 · Ep. 55

#54 Gary Marcus and Luis Lamb - Neurosymbolic models

Professor Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary said in his recent paper "The Next Decade in AI" that — without us, or other creatures like us, the world would continue to exist, but it would not be described, distilled, or understood. Human lives are filled with abstraction and causal description. This is so powerful. Franco...

Jun 04, 2021 · 2 hr 24 min · Season 1 · Ep. 54

#53 Quantum Natural Language Processing - Prof. Bob Coecke (Oxford)

Bob Coecke is a celebrated physicist; he's been a Physics and Quantum professor at Oxford University for the last 20 years. He is particularly interested in structure, which is to say logic, order, and category theory. He is well known for work involving compositional distributional models of natural language meaning and he is also fascinated with understanding how our brains work. Bob was recently appointed as the Chief Scientist at Cambridge Quantum Computing. Bob thinks that interactions bet...

May 19, 2021 · 2 hr 18 min · Season 1 · Ep. 53

#52 - Unadversarial Examples (Hadi Salman, MIT)

Performing reliably on unseen or shifting data distributions is a difficult challenge for modern vision systems; even slight corruptions or transformations of images are enough to slash the accuracy of state-of-the-art classifiers. When an adversary is allowed to modify an input image directly, models can be manipulated into predicting anything even when there is no perceptible change; this is known as an adversarial example. The ideal definition of an adversarial example is when humans consistentl...
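
For context, here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch and a toy stand-in classifier (not the models from the episode). Unadversarial examples invert the same idea, perturbing inputs toward the correct class rather than away from it.

```python
# A toy FGSM sketch: one gradient-sign step, bounded by eps per pixel.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
model.eval()

x = torch.rand(1, 3, 32, 32)       # a "clean" image with pixels in [0, 1]
y = torch.tensor([3])              # its true label

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()

eps = 2.0 / 255                    # an imperceptibly small L-inf budget
x_adv = (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# The change is at most eps per pixel, yet for a trained classifier this
# step reliably degrades the prediction. An *un*adversarial example simply
# descends the loss instead of ascending it, nudging x toward its own class.
print((x_adv - x).abs().max())     # <= eps
```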

May 01, 2021 · 2 hr 48 min · Season 1 · Ep. 52

#51 Francois Chollet - Intelligence and Generalisation

In today's show we are joined by Francois Chollet. I have been inspired by Francois ever since I read his Deep Learning with Python book and started using the Keras library, which he invented many, many years ago. Francois has a clarity of thought that I've never seen in any other human being! He has extremely interesting views on intelligence as generalisation, abstraction and an information conversion ratio. He wrote "On the Measure of Intelligence" at the end of 2019 and it had a huge impact o...

Apr 16, 2021 · 2 hr 2 min · Season 1 · Ep. 51

#50 Christian Szegedy - Formal Reasoning, Program Synthesis

Dr. Christian Szegedy from Google Research is a deep learning heavyweight. He discovered adversarial examples, created one of the first deep-learning object detection algorithms and the Inception architecture, and co-invented batch normalization. He thinks that if you bet on computers and software in 1990 you would have been as right as if you bet on AI now. But he thinks that we have been programming computers the same way since the 1950s and there has been a huge stagnation ever since. Mathematics is the process of taking a fu...

Apr 04, 2021 · 2 hr 33 min · Season 1 · Ep. 50

#49 - Meta-Gradients in RL - Dr. Tom Zahavy (DeepMind)

The race is on: we are on a collective mission to understand and create artificial general intelligence. Dr. Tom Zahavy, a Research Scientist at DeepMind, thinks that reinforcement learning is the most general learning framework that we have today, and in his opinion it could lead to artificial general intelligence. He thinks there are no tasks which could not be solved by simply maximising a reward. Back in 2012, when Tom was an undergraduate, before the deep learning revolution, he attended...

Mar 23, 2021 · 1 hr 25 min · Season 1 · Ep. 49

#48 Machine Learning Security - Andy Smith

First episode in a series we are doing on ML DevOps. Starting with the thing which nobody seems to be talking about enough, security! We chat with cyber security expert Andy Smith about threat modelling and trust boundaries for an ML DevOps system.  Intro [00:00:00] ML DevOps - a security perspective [00:00:50] Threat Modelling [00:03:03] Adversarial examples? [00:11:27] Nobody understands the whole stack [00:13:53] On the size of the state space, the element of unpredictability [00:18:32] ...

Mar 16, 2021 · 37 min · Season 1 · Ep. 48

047 Interpretable Machine Learning - Christoph Molnar

Christoph Molnar is one of the main people to know in the space of interpretable ML. In 2018 he released the first version of his incredible online book, Interpretable Machine Learning. Interpretability is often a deciding factor when a machine learning (ML) model is used in a product, a decision process, or in research. Interpretability methods can be used to discover knowledge, to debug or justify the model and its predictions, and to control and improve the model, reason about potential bias ...
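
One example of the model-agnostic methods the book covers is permutation feature importance; here is a minimal sketch, assuming scikit-learn and a toy dataset (our illustration, not Christoph's code):

```python
# Permutation feature importance: shuffle one feature at a time and measure
# how much the model's held-out score degrades.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Importance = drop in R^2 when a single feature is randomly permuted.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:8s} {imp:+.3f}")
```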

Mar 14, 2021 · 2 hr 40 min · Season 1 · Ep. 47

#046 The Great ML Stagnation (Mark Saroufim and Dr. Mathew Salvaris)

Academics think of themselves as trailblazers, explorers — seekers of the truth. Any fundamental discovery involves a significant degree of risk. If an idea is guaranteed to work then it moves from the realm of research to engineering. Unfortunately, this also means that most research careers will invariably be failures, at least if failure is measured via “objective” metrics like citations. Today we discuss the recent article from Mark Saroufim called "Machine Learning: The Great Stagnation". We...

Mar 06, 2021 · 2 hr 40 min · Season 1 · Ep. 46

#045 Microsoft's Platform for Reinforcement Learning (Bonsai)

Microsoft has an interesting strategy with their new “autonomous systems” technology also known as Project Bonsai. They want to create an interface to abstract away the complexity and esoterica of deep reinforcement learning. They want to fuse together expert knowledge and artificial intelligence all on one platform, so that complex problems can be decomposed into simpler ones. They want to take machine learning Ph.Ds out of the equation and make autonomous systems engineering look more like a t...

Feb 28, 2021 · 3 hr 30 min · Season 1 · Ep. 45

#044 - Data-efficient Image Transformers (Hugo Touvron)

Today we are going to talk about the Data-efficient Image Transformers (DeiT) paper, of which Hugo is the primary author. One of the recipes of success for vision models since the DL revolution began has been the availability of large training sets. CNNs have been optimized for almost a decade now, including through extensive architecture search, which is prone to overfitting. Motivated by the success of transformer-based models in Natural Language Processing there has been increasing attenti...

Feb 25, 2021 · 52 min · Season 1 · Ep. 44

#043 Prof. J. Mark Bishop - Artificial Intelligence Is Stupid and Causal Reasoning won't fix it.

Professor Mark Bishop does not think that computers can be conscious or have phenomenological states of consciousness unless we are willing to accept panpsychism, which is the idea that mentality is fundamental and ubiquitous in the natural world, or, put simply, that your goldfish (and everything else for that matter) has a mind. Panpsychism postulates that distinctions between intelligences are largely arbitrary. Mark’s work in the ‘philosophy of AI’ led to an influential critique of computational app...

Feb 19, 2021 · 2 hr 35 min · Season 1 · Ep. 43

#042 - Pedro Domingos - Ethics and Cancel Culture

Today we have professor Pedro Domingos and we are going to talk about activism in machine learning, cancel culture, AI ethics and kernels. In Pedro's book The Master Algorithm, he segmented the AI community into 5 distinct tribes with 5 unique identities (and before you ask, no, the irony of an anti-identitarian doing so was not lost on us!). Pedro recently published an article in Quillette called Beating Back Cancel Culture: A Case Study from the Field of Artificial Intelligence. Domingos has ra...

Feb 11, 2021 · 2 hr 34 min · Season 1 · Ep. 42

#041 - Biologically Plausible Neural Networks - Dr. Simon Stringer

Dr. Simon Stringer obtained his Ph.D. in mathematical state space control theory and has been a Senior Research Fellow at Oxford University for over 27 years. Simon is the director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, which is based within the Oxford University Department of Experimental Psychology. His department covers vision, spatial processing, motor function, language and consciousness -- in particular -- how the primate visual system learns to ...

Feb 03, 2021 · 1 hr 27 min · Season 1 · Ep. 41

#040 - Adversarial Examples (Dr. Nicholas Carlini, Dr. Wieland Brendel, Florian Tramèr)

Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. There's good reason to believe neural networks look at very different features than we would have expected. As articulated in the 2019 "features, not bugs" paper, adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle ...
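
To make "highly predictive, yet brittle" concrete, here is a minimal synthetic sketch of our own (not the paper's experiments): a feature that nearly determines the label but lives at a tiny scale, so a perturbation that is negligible next to the robust feature flips the model's predictions.

```python
# A toy non-robust feature: feature 2 is almost perfectly predictive of the
# label but tiny in scale, so a small absolute perturbation (imperceptible
# relative to feature 1) collapses the classifier's accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n) * 2 - 1                 # labels in {-1, +1}

robust = y * 1.0 + rng.normal(0, 2.0, n)          # weakly predictive, large scale
non_robust = y * 0.05 + rng.normal(0, 0.01, n)    # highly predictive, tiny scale
X = np.column_stack([robust, non_robust])

# Weak regularisation so the model is free to exploit the tiny feature.
clf = LogisticRegression(C=1e4, max_iter=5000).fit(X, y)
print("clean accuracy:    ", clf.score(X, y))     # ~1.0

X_adv = X.copy()
X_adv[:, 1] -= y * 0.2                            # nudge only the tiny feature
print("perturbed accuracy:", clf.score(X_adv, y)) # collapses towards 0
```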

Jan 31, 2021 · 2 hr 36 min · Season 1 · Ep. 40

#039 - Lena Voita - NLP

Lena Voita is a Ph.D. student at the University of Edinburgh and the University of Amsterdam. Previously, she was a research scientist at Yandex Research and worked closely with the Yandex Translate team. She still teaches NLP at the Yandex School of Data Analysis. She has created an exciting new NLP course on her website lena-voita.github.io which you folks need to check out! She has one of the most well presented blogs we have ever seen, where she discusses her research in an easily digestible mann...

Jan 23, 2021 · 2 hr 58 min · Season 1 · Ep. 39

#038 - Professor Kenneth Stanley - Why Greatness Cannot Be Planned

Professor Kenneth Stanley is currently a research science manager at OpenAI in San Francisco. We've been dreaming about getting Kenneth on the show since the very beginning of Machine Learning Street Talk. Some of you might recall that our first ever show was on the enhanced POET paper; of course, Kenneth had his hands all over it. He's been cited over 16,000 times; his most popular paper, with over 3K citations, was the NEAT algorithm. His interests are neuroevolution, open-endedness, NNs, artifi...

Jan 20, 2021 · 3 hr 46 min · Season 1 · Ep. 38

#037 - Tour De Bayesian with Connor Tann

Connor Tann is a physicist and senior data scientist working for a multinational energy company, where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University, with a master's in particle astrophysics. He specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational diffi...
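
As a flavour of what makes these methods distinctive, here is a minimal sketch (our own toy, not from the episode) of conjugate Bayesian updating, where the posterior over a coin's bias and a full credible interval come out in closed form:

```python
# Beta-Binomial conjugacy: a Beta prior over a coin's bias, updated with
# observed flips, yields a Beta posterior with a closed-form update rule.
from scipy import stats

a, b = 1, 1                        # Beta(1, 1) prior: uniform over the bias
heads, tails = 7, 3                # observed data: 10 flips

posterior = stats.beta(a + heads, b + tails)   # conjugate update: Beta(8, 4)
print("posterior mean:", posterior.mean())                 # ~0.667
print("95% credible interval:", posterior.interval(0.95))  # ~(0.39, 0.89)
```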

Jan 11, 2021 · 2 hr 35 min · Season 1 · Ep. 37

#036 - Max Welling: Quantum, Manifolds & Symmetries in ML

Today we had a fantastic conversation with Professor Max Welling, VP of Technology, Qualcomm Technologies Netherlands B.V. Max is a strong believer in the power of data and computation and its relevance to artificial intelligence. There is a fundamental blank-slate paradigm in machine learning: experience and data alone currently rule the roost. Max wants to build a house of domain knowledge on top of that blank slate. Max thinks there are no predictions without assumptions, no generalizati...

Jan 03, 2021 · 2 hr 43 min · Season 1 · Ep. 36

#035 Christmas Community Edition!

Welcome to the Christmas special community edition of MLST! We discuss some recent and interesting papers from Pedro Domingos (are NNs kernel machines?), DeepMind (can NNs out-reason symbolic machines?), Anna Rogers' "When BERT Plays the Lottery, All Tickets Are Winning", and Prof. Mark Bishop (even causal methods won't deliver understanding). We also cover our favourite bits from the recent Montreal AI event run by Prof. Gary Marcus (including Rich Sutton, Danny Kahneman and Christof Koch). We resp...

Dec 27, 2020 · 3 hr 56 min · Season 1 · Ep. 35

#034 Eray Özkural - AGI, Simulations & Safety

Dr. Eray Ozkural is an AGI researcher from Turkey; he is the founder of Celestial Intellect Cybernetics. Eray is extremely critical of Max Tegmark, Nick Bostrom and MIRI founder Eliezer Yudkowsky and their views on AI safety. Eray thinks that these views represent a form of neo-Ludditism that is capturing valuable research budgets with doomsday fear-mongering, and that their proponents effectively want to prevent AI from being developed by those they don't agree with. Eray is also sceptical of the intelligence ex...

Dec 20, 2020 · 3 hr 39 min · Season 1 · Ep. 34