Hey everybody. Welcome back to the Elon Musk Podcast. This is a show where we discuss the critical crossroads that shape SpaceX, Tesla, X, The Boring Company, and Neuralink. I'm your host, Will Walden. So what happens when a tech giant like Elon Musk encourages users to share their medical images with an artificial intelligence tool? Could Grok revolutionize diagnostics, or does it pose a serious privacy risk? And why are privacy experts raising alarms about this new
experiment on X? So over the past weeks, users of the X platform have been uploading personal medical scans, X-rays, MRIs, and more to Grok, an AI chatbot developed by Musk's company xAI. They're asking for analysis, seemingly eager to test Musk's claim that Grok is poised to become extremely good at interpreting medical data. But this move has sparked sharp debates among privacy advocates, medical professionals, and AI experts.
Now, Elon Musk introduced Grok as a cutting-edge AI system designed to offer fast, witty, and sometimes provocative answers, leveraging real-time knowledge from the X platform. It was first unveiled in late 2023, and by mid-2024, Grok had advanced to handling complex visual inputs, including medical images. Musk's recent encouragement to submit health scans has added a new dimension to its functionality, but also intensified scrutiny.
Now, unlike more conventional medical AI systems developed in collaboration with hospitals or researchers, Grok was trained on public social media data, a method that raises ethical concerns. And Musk pitched the chatbot as a tool for users to seek medical insights, either as a preliminary analysis or a second opinion. Musk said the tool is "still early stage but is already quite accurate and will become extremely good."
This openness has attracted both laypeople and professionals curious to see if the AI can match human expertise. Now, reports from users show a range of outcomes. Some praise Grok for its ability to recognize medical conditions. One individual said, "Had it check out my brain tumor, not bad at all." Others highlighted glaring errors, like mistaking a broken clavicle for a dislocated shoulder. Physicians have also tested the
tool. An immunologist from the Jackson Laboratory noted success after tweaking his input prompt. "The response is very thorough and impressive," he said, describing Grok's diagnosis of an X-ray. Now, others, like Dr. Laura Heacock, a radiologist and researcher at NYU Imaging, were less satisfied. Testing Grok with a set of breast radiology images, Heacock reported zero accurate diagnoses. In one case, the AI mistook an obvious cancerous mass for calcifications.
"For now, non-generative AI methods continue to outperform in medical imaging," Heacock said. Now, privacy concerns are a big deal. Musk's invitation to share sensitive medical data has sparked outrage among privacy experts.
Medical information uploaded to Grok falls outside the protection of the Health Insurance Portability and Accountability Act (HIPAA). HIPAA governs data handled by healthcare professionals, providers, insurers, and their partners, but it doesn't extend to social media platforms like X or their affiliated AI tools. Bradley Malin, a biomedical informatics professor at Vanderbilt University, criticized the casual nature of data sharing with Grok. Posting personal information to Grok is more like, "Let's throw this data out there and hope the company is going to do what I want them to do," he remarked. X's privacy policy states that while the company doesn't sell user data to third parties, it does share information with related entities.
This ambiguity fuels fears about how the data might be used. Matthew McCoy, an assistant professor of medical ethics at the University of Pennsylvania, expressed skepticism about the tool's safeguards, saying, "As an individual user, would I feel comfortable contributing health data? Absolutely not." Sharing health data on platforms like X risks creating a permanent digital footprint that could lead to unintended consequences. Consider a PET scan showing early signs of Alzheimer's disease. If linked to a user's online identity, such data could be exploited by future employers, insurers, or even housing associations. While laws like the Genetic Information Nondiscrimination Act offer some protection, loopholes remain for entities like life insurance providers. Moreover, there's the question of how Grok itself uses the data.
The company claims that users control whether their inputs are used to improve the AI, but experts warn that most users lack the technical literacy to fully understand or manage these settings. Now, even with safeguards in place, the risk of data breaches or misuse looms large. Beyond privacy, Grok's medical accuracy remains a pressing issue. Experts stress that training a robust AI for healthcare requires high-quality, diverse data sets and deep collaboration with medical professionals.
Without these elements, AI systems risk producing unreliable or even dangerous outputs. Suchi Saria, director of the Machine Learning and Healthcare Lab at Johns Hopkins University, compared untrained AI tools to a hobbyist chemist mixing ingredients in the kitchen sink. She warned of the potential for harm if users act on inaccurate diagnoses, leading to unnecessary tests or treatments. Dr. Saria also highlighted the significant challenges of integrating AI like Grok into clinical systems, where rigorous validation and adherence to ethical guidelines are paramount. While tools like Grok might enhance workflows in the future, the current state lacks the precision required for critical
healthcare applications. Grok has already attracted regulatory attention in Europe. Privacy regulators have flagged the tool for potential violations of the General Data Protection Regulation (GDPR). The Irish Data Protection Commission recently petitioned the courts to stop X from using social media data to train Grok, citing concerns over data harvesting practices. In the US, Grok faces scrutiny from government officials worried about AI's role in spreading misinformation. The Minnesota Secretary of State even urged X to direct election-related queries to verified sources like CanIVote.org. Now, Musk's response to these criticisms has been dismissive. In August, xAI called European regulators' actions unwarranted and defended its decision to let users opt in or out of data sharing. "Unlike the rest of the AI industry, we choose to provide a simple control to all X users," the company stated, positioning itself as a defender of user autonomy.
Now, Musk's interest in AI extends beyond Grok, though. His other ventures, including Neuralink, have a broader ambition to merge human intelligence and technology. Neuralink recently received FDA approval for a brain-computer interface aimed at restoring vision in blind individuals. Such developments demonstrate Musk's commitment to positioning AI at the forefront of healthcare innovation.
Despite these advancements, though, Musk's approach has often clashed with ethical concerns. His long-standing warnings about the existential risks of AI contrast sharply with his promotion of tools like Grok, which critics argue are being deployed without sufficient oversight. For now, Grok offers intriguing possibilities but also
significant risks. Users should exercise caution before uploading sensitive medical data, especially given the uncertainties around privacy and accuracy. As the tool evolves, it may become a more reliable resource, but for now, it remains an experimental platform with serious ethical and practical limitations. Hey, thank you so much for listening today. I really do appreciate your support. If you could take a second and hit the subscribe or follow button on whatever podcast platform you're listening on right now, I'd greatly appreciate it. It helps out the show tremendously, and you'll never miss an episode. Each episode is about 10 minutes or less to get you caught up quickly. And please, if you want to support the show even more, go to patreon.com/stagezero. Please take care of yourselves and each other, and I'll see you tomorrow.