Meta's Harmful Content Chaos, Human-Like AI Reasoning & iPhone Tricks

Mar 25, 2024 · 3 min · Transcript available on Metacast
Episode description

A psychologist has resigned from Meta's expert advisory group, accusing the company of prioritizing profits over user safety by failing to remove self-harm content from Instagram in order to keep young users engaged. Meanwhile, a range of AI capabilities are already baked into today's iPhones. And a new AI model attempts to enhance reasoning by mimicking a human-like internal dialogue before providing answers.



Sources:

- https://www.theguardian.com/technology/2024/mar/16/instagram-meta-lotte-rubaek-adviser-quits-failure-to-remove-self-harm-content-

- https://bgr.com/tech/ai-features-that-are-already-hiding-on-your-iphone/

- https://futurism.com/the-byte/ai-inner-monologue