
AI can't read the room

Apr 28, 2025 · 6 min

Episode description

Leyla Isik, a professor of cognitive science at Johns Hopkins University, is also a senior scientist on a new study looking at how good AI is at reading social cues. She and her research team took short videos of people doing things — two people chatting, two babies on a playmat, two people doing a synchronized skate routine — and showed them to human participants. Afterward, the participants were asked questions like: Are these two people communicating with each other? Is the interaction positive or negative? Then the researchers showed the same videos to over 350 open-source AI models. (That's a lot, though it didn't include all the latest and greatest models out there.) Isik found that the AI models were much worse than humans at understanding what was going on. Marketplace's Stephanie Hughes visited Isik at her lab at Johns Hopkins to discuss the findings.
