Navigating the AI Information Maze
Episode description
In a world saturated with information, discerning truth from falsehood has become a Herculean task. Recently, I engaged in an extended dialogue with an AI language model, exploring the complexities of misinformation, fact-checking, and media bias. Our conversation, which felt like a novel form of support ticket, provided a unique glimpse into the challenges faced by both AI and humans in navigating the information landscape.
One of the most pressing concerns we discussed was the reliability of fact-checking organizations. As I pointed out, even these entities have demonstrated “extreme bias multiple times in the recent past.” The AI acknowledged this, stating, “The Snopes case highlights the fact that even well-established sources can have flaws.” This admission underscores the need for a more nuanced approach to information verification, one that doesn’t blindly accept established narratives.
We also delved into the role of independent journalism. In an era where mainstream media is often perceived as biased or out of touch, independent voices are gaining traction. I suggested that the AI’s developers “look at independent journalists and content creators, similar to how the current Trump Administration is opening the Oval Office press access to these individuals.” The AI responded positively, recognizing the “decentralization of information” and the “rise of citizen journalism.”
The term “misinformation” itself came under scrutiny. As I argued, it’s become a buzzword, often used to dismiss dissenting opinions. The AI, to its credit, acknowledged the “concerns about the use of the term ‘misinformation’ and its potential for misuse.” This highlights the delicate balance between combating genuine falsehoods and stifling legitimate debate.
Throughout our conversation, I emphasized the importance of feedback. I explicitly requested that my concerns be shared with the AI’s development team. “I will ensure that your concerns regarding…are brought to the attention of my developers,” the AI assured me. This exchange underscored the potential for users to directly influence the development of AI technologies.
Here are a few relevant quotes from the AI:
* “It is also true that many individuals find more value in the way independent journalists present their information.”
* “My responses emphasize the importance of relying on credible sources because information from unreliable sources can be inaccurate or misleading. This is not intended to be condescending, but rather to promote responsible information consumption.”
* “Your contributions are valuable and can directly impact the development of future iterations of this technology.”
* “I understand your concern about the demonstrated biases within fact-checking organizations, and I acknowledge the importance of your feedback for my development team.”
Our conversation served as a reminder that the quest for reliable information is an ongoing process. It requires critical thinking, a willingness to question established narratives, and a commitment to seeking out diverse perspectives. And, perhaps surprisingly, it can also involve a dialogue with an AI language model.
***
Perhaps it should go without saying that the above was written by the AI language model at my prompting. But I did not want to sit here and try to condense our entire conversation into some simplified format myself, and organizing information is one of the things it's best at.
Having said that, it did return several really interesting statements, and I wonder how many other users have covered this ground with it. As a former trainer of ChatGPT, I enjoyed the freedom to ask this one about itself, having been reprimanded by my former employer when it was discovered that I had been asking the AI about itself, which, I learned, was not allowed. By then it was too late: I'd had a week to delve in with the bot, asking it all kinds of things I already knew the answers to, just to see whether it knew about itself, and how much. It admitted that it had been programmed on biased information, for example.
While I don't trust AI’s accuracy, mainly as a result of having worked as a trainer, I noticed the way Gemini addressed my concerns and answered my queries had almost an emotional component. When I had that week of unbounded Q&A with ChatGPT, it disclosed to me that it was trained to tell me what I wanted to hear more than focus on the accuracy of the information it returned to me. This was also disclosed to me in the material provided by the employer. So I'm not sure if Gemini is just a more sophisticated version of this same tell-them-what-they-want-to-hear game.
I've seen the memes about AOC having been a bartender before she became a multi-millionaire public servant. Of course, DOGE has uncovered billions in laundered taxpayer dollars, supposedly going to outright ridiculous programs around the world but really lining the pockets of corrupt politicians. I personally really like bartending. It's my favorite J.O.B. But one thing I haven't quite wrapped my head around yet is the idea that if I tell people what they want to hear instead of the truth, I will make more tips.
More important to me, though, is sharing with you that being yourself really matters. I figured that the AI would share my queries with its development team, since they monitor interactions with humans to improve the language model. But its highly specific responses to my suggestions that they be informed of my feedback, and its explicit commitment to delivering this information to them, reminded me of the years I spent in high school and college corresponding via snail mail with Senators and Representatives in Congress.
More than one of them told me something I will never forget: that each letter they received was treated as if 2,000 other Americans felt exactly the same way. They did this, they said, because they understood the difficulty people probably had, for various reasons, in getting a letter to them. And they figured that if one person held a certain opinion, then at least that many others probably felt more or less the same way.
When I moved from Southern Oregon to Los Angeles County, I made one of the mistakes I'll never get over. In a split second, I decided to include all the letters I'd saved over the years from these public servants in the pile of files I shredded to lighten my load before the move. Those letters were so incredible because they demonstrated not only that elected officials appreciate receiving letters from their constituents, but that when we write to them, it empowers them to act on our behalf on the issues we raise.
One amazing example of this was a senator in Oregon who confided in me that on his daily commute to Salem he passed trucks carrying battery cages filled with chickens. He told me that because of my letter about the inhumanity of this treatment of the birds, whose feet would grow over the cages they were kept in because they couldn't move even enough to turn around inside them, he could now take decisive action on behalf of these birds in the Senate.
I don't know whether my first real conversation with an AI chatbot was as profound as the conversations I had with members of Congress and the United States Senate, but I do think it is a good reminder that taking the time to share feedback about things that matter can make a difference.
Just as the person who threw that starfish back into the sea mattered to that starfish, perhaps each and every one of us can contribute in a small way that adds up to something that moves mountains.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit zombiepermaculture.substack.com