Hey everybody, KMO here with episode number 19 of The KMO Show, prepared for release on Wednesday, August 2nd, 2023. I don't have a guest this week. For the next hour, it will be just me and you, as I try, and surely fail, to squeeze in all of my observations and thoughts on three seemingly unrelated topics: the Congressional UAP hearings, the Hollywood Writers Strike, and the tech CEO cult known as Effective Accelerationism.
Abbreviated as E-slash-A-C-C online, but that's more of a mouthful than just saying the full ten-syllable phrase. I say that these are seemingly unrelated topics; the TL;DR is that artificial intelligence is the thread tying them all together. The COVID pandemic, and I would argue the woke destruction of previously popular film franchises like Star Wars and the MCU, left the cinema multiplexes feeling like ghost towns these past few years.
Since I've been back in Berryville after my winter out west, I've only seen one film in the movie theater. Guardians of the Galaxy Vol. 3. I liked it well enough. I live in a small town with only one movie theater. It only has one screen and has one showing per day Friday through Sunday. The nearest multiplex is in Rogers, Arkansas, which is over an hour's drive from here.
Given all that, it should come as no surprise that I don't see a lot of movies in the theater, nor that I haven't seen either the Barbie movie starring Margot Robbie or Christopher Nolan's biopic of Robert Oppenheimer starring Cillian Murphy.
Nor have I watched much of the congressional testimony of former intelligence officer David Grusch, who claims to have substantial evidence that the US military has several craft of non-human origin in its possession, along with non-human biologics, which implies the corpses of the alien pilots of those crashed vehicles. But I've done a fair bit of reading on the topic, and I have some opinions. So let's start with that. You're listening to the KMO Show. Let's go.
First, for the purposes of concision, I'm going to read a little bit from Wikipedia, just because it's got a lot of dense information that I can get on the table quickly. There is a Wikipedia page entitled David Grusch UFO whistleblower claims. And I'll read just a couple paragraphs here.
In June 2023, United States Air Force officer and former intelligence official David Grusch publicly claimed that unnamed officials told him that the US federal government maintains a highly secretive UFO or UAP recovery program and is in possession of, quote, non-human, close quote, spacecraft and, quote, dead pilots, close quote.
In 2022, Grusch filed a whistleblower complaint with the US Intelligence Community Inspector General, or ICIG, to support his plan to share classified information with the US Senate Select Committee on Intelligence. He also filed a complaint alleging retaliation by his superiors over a similar complaint he made in 2021.
Grusch asserts that individuals with whom he has conversed shared the concern that American citizens have been killed as part of the government's efforts to cover up the information.
In response to his June 2023 claims, both the National Aeronautics and Space Administration, that's NASA, and the US Department of Defense, or DoD, issued statements saying respectively that there is no evidence of extraterrestrial life and there is no verifiable information about the possession and reverse engineering of any extraterrestrial materials.
During a congressional hearing on July 26, 2023, under the House Committee on Oversight and Accountability, Grusch repeated his claims alongside testimony from US fighter pilots Ryan Graves and David Fravor on experiences related to UFOs. Grusch testified that he could not elaborate publicly on some aspects of his claims, but offered to provide further details to representatives in a sensitive compartmented information facility.
Okay, there's more, but I'm going to stop reading there, other than to say a little bit about David Grusch's background. He is a veteran of the Air Force. He served in Afghanistan. And then later he worked for the NGA, or the National Geospatial-Intelligence Agency, and the NRO, the National Reconnaissance Office. And he was representing the National Reconnaissance Office when he participated from 2019 to 2021 in the Unidentified Aerial Phenomena Task Force.
And that was the precursor to the current office that investigates such things, which is called AARO, or the All-domain Anomaly Resolution Office. The director of AARO is Sean M. Kirkpatrick. He's a laser and materials physicist.
And after David Grusch's most recent high-profile testimony to Congress, Kirkpatrick issued not an official statement through official channels, but a letter published on his LinkedIn page basically saying: David Grusch doesn't work for my organization, never has, and neither do the two other witnesses.
And Kirkpatrick has testified in front of Congress himself, saying that there is no evidence that the United States government is in possession of materials or technology or biological samples of non-terrestrial origin. Basically he's saying: we've looked into it, and something like 96% of the cases they looked into had a mundane explanation, while the remaining 4% were genuinely anomalous, meaning they didn't have the information necessary to determine the exact explanation or cause for what was reported.
Does that mean it was aliens? I'll put my cards on the table and say, I don't think so. I will be very, very surprised to learn that extraterrestrials in physical spaceships have crossed interstellar distances from other solar systems to ours to visit our planet. That seems entirely improbable to me, but I am determined to keep a realistically open mind. Not a completely open mind, but I'll hold my opinions as lightly as I can. Somebody who speaks my language on this issue is Dr. Adam Frank.
He is a professor of astrophysics at the University of Rochester. He has published many editorials on this topic. I'm looking at one. This is in the New York Times. It was from May 30th, 2021. Headline is, I'm a physicist who searches for aliens. UFOs don't impress me. Now, I'm not actually going to read from this. It's long and time is short.
I'll just say that he's looking for aliens by looking at the spectrographic data of light that passes through the atmospheres of distant planets, reading the chemical composition of those atmospheres, and looking within that data for evidence of either life, which is to say biosignatures, or alien technology, which is to say technosignatures. From my perspective, this is the sober adult way to go about looking for aliens. But I'm offering these opinions basically just out of honesty.
I'm telling you what side I'm on. I'm not a partisan for this position. I'm not arguing for it. I'm just letting you know where I stand as I come to this issue and go through the information. I am going to read something that Dr. Frank wrote, but it's something much more recent. This is from Big Think. The article is from August 2nd, 2023, so that's today. And the title is, Here's What a Scientist Makes of Congress's UFO Hearing. The X-Files Was Not a Documentary. Key takeaways.
The recent congressional hearing on unidentified aerial phenomena, UAPs, highlights the need for more empirical research, given that most UAPs can be explained by earthly phenomena. Assertions about the recovery and reverse engineering of alien UAPs have drawn skepticism due to a glaring absence of tangible evidence. We need transparent investigations into UAPs. Ultimately, astrobiology is arguably more likely to provide reliable answers about the existence of alien life.
Now, before I start reading this, I'll just throw out this tidbit. Dr. Frank has been calling for NASA to investigate alien life for a long time. And now, NASA has gotten into the UAP game. They have a committee, I think it is, to investigate claims of unidentified aerial phenomena, or unidentified anomalous phenomena. That acronym morphs a little bit from time to time, depending on where you read it.
And his concern is that because NASA is getting into the game and imbuing this topic with the imprimatur of serious science and space exploration, that could do more harm than good. And I'm sympathetic to that concern. But Dr. Frank writes: Everybody's talking about aliens after last week's congressional hearing on UFOs, now officially rebranded as unidentified aerial phenomena, or UAPs.
As an astrophysicist working on the remarkable science of searching for life in the universe, I just finished The Little Book of Aliens on this exact subject. I've been asked by lots of folks what I make of it all. How do I see the testimony? Is there anything in what we heard that speaks scientifically to the possibilities of life existing elsewhere in the universe? To answer that question, we have to parse out two different threads that emerged during the hearing. The first is extraordinary testimony.
This involves the Navy pilots and their stories of encounters with objects behaving in ways that defied their expectations. That's putting it mildly. I first will say that I think it's a good thing these pilots have been willing to come forward. Removing the stigma of making these kinds of reports is the first essential step in figuring out what's going on with UAPs. I also liked the way the pilots were pretty agnostic about what was happening.
Like the rest of us, they simply wanted a clear explanation of these encounters. Finally, I felt they were being entirely honest and forthright in recalling what they remembered about these encounters. So where do we go from there? The answer in this case is really simple. Do some good science. It's worth noting that last year, NASA, the folks who land robots on distant planets, convened a panel to begin a true scientific unpacking of UAPs.
At their first press conference held this summer, the team announced that only 6% of the many UAP cases they studied could not be explained. In other words, 94% of their UAP cases had earthly causes. This conclusion is consistent with other studies of this kind that have been done. So it's safe to say that the Earth is not suddenly awash in strange and unexplainable phenomena sailing through the skies. But what about the other 6%?
In some cases, no explanation could be found because there simply wasn't enough information – the all-important data that science lives on – to even propose an explanation. Still, some of the unexplained cases do fall into the truly strange and weird. This is the space where I would say the pilots' testimonies live. Their descriptions are definitely of the raise-the-hair-on-the-back-of-your-neck variety. What do we do in those cases? Here is the point where better science comes in.
While we could spend lots of time trying to figure out exactly what the pilots saw, I feel like it's a dead end. Science cannot do much with personal testimonies. One problem, as every cop and psychologist will tell you, is that human memory is not a photographic record. Instead, it's a reconstruction that can differ from the original event in many ways, no matter how earnest the reporters are.
But what's even more important, and as I describe more fully in the book, is that to really do science, you need hard data collected through rational search strategies from instruments you fully understand.
If my colleagues and I ever want to claim that we've detected signatures of life on alien planets light-years away, we will have to know everything about our instruments – how they respond to light when the telescope is at 40 degrees Fahrenheit, and how that response changes when the temperature goes up to 60 degrees. The exact same type of knowledge is required to determine whether a UAP accelerated in ways that no human technology could reproduce.
Personal testimony, onboard targeting cameras, and even military radars cannot do that. Ultimately, to really know if UAPs have anything to do with advanced technologies from alien life, we will need to set up a new kind of research program. And I'm all for it. As I've written before, an open, transparent, and scientific investigation of UAPs would be great. My personal opinion is that the alien explanation is a long, long, long shot. Peer-state adversaries are a much more likely explanation.
But my belief and $4.98 will get you a Starbucks coffee, so let's do the real science. At the very least, it will show people how science goes about its business – a business that gave us cell phones that work, jet planes that don't fall out of the sky, and medical procedures that heal. And then the heading for the next section is, show me the spaceship, Mulder. Dr. Frank continues. Now on to the second thread.
One of three witnesses at the hearing claimed that the U.S. had recovered downed UAPs of non-human origin, that non-human biologics, whatever that means, had also been recovered, and that technology from the possibly alien craft had been reverse-engineered. I am, to put it mildly, highly skeptical. First of all, such claims are nothing new. As I wrote about recently, and as I detail in the book, there have been ex-military officials making these kinds of claims going back 70 years.
What doesn't go back 70 years is actual hold-in-your-hand evidence of such claims. So what we heard in the hearing was old news. Somebody says they heard from somebody else in the know that we have alien spaceships in the garage, but once again we have no actual evidence of claims so extraordinary they sound like they come straight out of an episode of The X-Files. One of my rules of thumb is that if something sounds like the plot of a science fiction movie, it probably is.
There is no reason, based on any existing science, to take these truly stunning claims seriously. I am not changing this position until somebody ponies up some actual artifacts, which I will note are always promised to be coming, but never show up. In other words, show me the spaceship. Okay, that's not the end of the piece, but that's where I will stop reading. Now, I've said that the connecting thread between all of these topics is artificial intelligence.
How does artificial intelligence play into this story? Well, let's start with Dr. Frank's call: show me the spaceship. Yeah, you can go to YouTube and you can watch hours and hours of clearly, obviously faked UFO videos. There's a YouTube channel that is a bunch of special effects artists who analyze special effects. They describe how they're done. It's called Corridor Crew, from the Corridor Digital folks. It's a lot of fun. I highly recommend watching that channel.
And they have a recurring segment that they do where they look at UFO footage and they can spot the artifacts of fakery. But something they've also documented is how good artificial intelligence is getting at doing special effects, doing animation, doing CGI basically that is way, way better than what we're used to seeing.
And I predict that in the coming months and years, like the next two or three years, we're going to see a flood of basically AI-generated evidence or proof that the extraterrestrial hypothesis is the actual explanation for unidentified aerial phenomena, or anomalous phenomena.
This proof will not be released by the Pentagon or any government agency, but it will be leaked and I'm putting leaked in air quotes, because in the long run, it's not going to hold up to scrutiny, but in the near term, it's going to be convincing enough to add fuel to the fire.
It's not going to be released by any government agency because in the fullness of time, it will be revealed to be a hoax and no government agency is going to want to be on record as having propagated a hoax, but they will leak it deliberately just to keep this thing going. Now, what's the point of all of this? I suggest, I'm not saying this is a fact, this is just what seems likely to me, that the true audience for all of this is not the general public.
The Pentagon, which is to say the US military, receives a lot of money, a lot of money to do what they do. And every now and again, various agencies try to audit the Pentagon to see if the money that has been allocated to them has been spent for legitimate defense purposes. And there's a great phrase that I want to share with you here. This is from a 2016 article in the New York Magazine Intelligencer.
The title of the article is, The $6 Trillion Issue You Won't Be Hearing About at Tonight's Debate. So picking up in the middle of the article, I'm not even going to try to lay the context. I think the bit that I'm coming to, it stands on its own.
Paragraph begins: Trump's new defense demands, again, this is from 2016, Trump's new defense demands don't help us understand headlines like the one from mid-August that said the Defense Department's Inspector General found more than $6.5 trillion in, quote, wrongful adjustments to accounting entries, close quote, in the Army's general fund for 2015 alone. I just love that. Wrongful adjustments to accounting entries. $6.5 trillion unaccounted for. The military doesn't want to talk about that.
They claim that there's nothing to this UFO business, but they also act like they're covering it up. Why? I suspect it is to capture the imagination of people like Kirsten Gillibrand, who is one of the senators pushing hard for this UAP business. Now, maybe she's in on it. Maybe she's not really a UFO believer or UAP believer. Maybe she's just part of the kayfabe dance here. I don't know. But I think this is all theater.
This is all a distraction meant to prevent the reckoning of, where's the money? Where's the money, Pentagon? But anyway, lots of people have expressed their frustration with the low-quality evidence that we have for these extraordinary claims, and I think that the evidence is about to get a lot more compelling because AI is getting really, really good at faking such stuff. Which brings us to the next issue, Hollywood.
Screenwriters fear AI could be used to churn out a rough first draft with a few simple prompts, and writers may then be hired after this initial step to punch such drafts up, albeit at a lower pay rate. These concerns expose the techno-optimist lie that AI will create more jobs than it destroys. Millions of background actors could be put out of work. How many coders will it take to program their likenesses into the background? A handful, maybe?
And the job of writer might remain, but it will be degraded so that they will effectively be assistants to the bots cleaning up the drafts that AI churns out. And what's true for this industry is going to be true for many, many more because bosses are always going to look for ways to use fewer workers. Workers are expensive. They have rights, and there's at least some limitations on how much you're allowed to exploit them.
But they never talk back, they never need time off, and they require no humanity. If the country only cares about profits for the top, human beings could become truly disposable. That's to say nothing of the way that Hollywood has already been degraded and stripped of beauty, risk-taking, and creativity by the demand to place the safest, most market-palatable bet.
Now, you may not think that this fight has a lot to do with you, other than creating an annoyance as your favorite show's production is delayed. You may think that these Hollywood stars and starlets have nothing in common with you and are privileged to even have the ability to complain about all of this. And you know what? There's some truth to that.
After all, it is their prominence, combined with their union power, by the way, which is the only thing that even gives them a chance to push back on any of this. But this is just the beginning. Automation has already come for blue-collar America. Now it's coming for white-collar workers, too.
Everyone now has an interest in seeing the shared threat to their livelihoods and supporting one another in these struggles that will draw new lines in the sand of what is acceptable and what is immoral in this new landscape. Bottom line, technology should benefit human beings, not destroy their lives. Because in this future that we are just catching a glimpse of, it's not that people will become wholly irrelevant.
It's that the gulf between the haves and have-nots will become ever greater as the owner class separates more and more from the labor class. It's that every last sector of our lives will be colonized, commoditized, for profit. And if this brave new world can come for Hollywood stars and starlets, what chance do ordinary people ultimately stand? You probably recognize the voice of Krystal Ball. That was Krystal doing a monologue on the topic of the Hollywood strike.
Hollywood writers and actors are both on strike, and really what's at stake here is money. But there's a particular way that AI plays into this that is very relevant to both writers and actors. Back in the 20th century, way back, for example, a typical season of one of the Star Trek shows, say Star Trek The Next Generation or Star Trek Deep Space Nine, would have 24 episodes and sometimes more in a single season.
Flash forward to today, and a prestige TV show on one of the big streaming platforms might have 8 episodes, or 10, or maybe 12. Fewer episodes written means less money paid to writers, but that's not just a matter of shorter seasons. Writers would get a fee for doing the initial work, but then the real money was in the residuals. Every time an episode that you wrote got shown on TV again as a rerun, you got paid again. And so you could get paid year after year for something that you wrote.
Excellent. That's great. How does it work in streaming? There is no such thing as a rerun in streaming. A piece of content gets put up on a streaming platform, and people watch it or they don't. But here's the thing: the streaming services are playing their cards really close to their chest. They're not going to tell you exactly how many times a given show has been watched. They'll tell you, oh, this show was a hit. This show was a success. You know, lots of people watched this. How many?
Well, we're not going to say. That's proprietary data. Why would they be so coy about that? Well, here's a bit of speculation. This is from Chris Gore. He's reading a tweet, but you know, I'll play Chris's voice.
Chris Gore is an independent filmmaker, and he's also the publisher of a magazine that I was reading back in the 80s because it dealt with independent film and obscure films and films that I would never really be able to see in a movie theater, except maybe the Art House Theater, the Tivoli down in Westport in Kansas City, which is where I lived at the time.
But anyway, here's Chris Gore of Film Threat fame talking about why the streaming services might not want to reveal the viewership numbers and what might happen if they did. Hollywood is on strike. Everything is shut down. The only films being made now are from A24. Animation is moving ahead, but certain movies, for example, The Marvels and Dune: Part Two, are on the bubble. If the strike continues past September, those movies may not release this year.
And things were looking up because Barbie and Oppenheimer have ignited the box office in a huge way, which is exciting for everybody. But I think it's important that we look at the long term big picture of all of this. This is Andrew Schulz on Instagram. And Andrew Schulz says, thoughts on the Hollywood strike. The real issue is that actors and writers want fair residual payments from the streamers.
In order to define what is fair, the streamers will need to share how many people are actually watching their shows. And here lies the problem. OK, number two, this is five parts. Number two, my suspicion is that the streamers are refusing to share the viewership numbers, not because they're being cheap, but because no one is watching and revealing extremely low viewership would kill the stock price.
So number three, if most of these streamers are losing money in an effort to gain market share, the only justification for their spending is their stock price being high. Once that stock price tanks with the real viewership numbers, the streamers will have to cut back on spending, which means way less shows will be greenlit and the budgets for those shows will be severely reduced, which means way less acting gigs and writing gigs.
So essentially, if the actors and directors strike is successful by making the streamers release the real viewership, the strike will essentially force the streamers to hire less actors and directors. So they're striking themselves out of work. Just a hunch, though. Just a hunch. So what do you think of that possibility? That speculation that not that many people are watching these streaming services. How many do you subscribe to?
I can tell you right now that I almost always have an Amazon Prime video subscription because I subscribe to Amazon Prime for the free shipping, but I don't actually watch much on Prime. I do not have a Netflix subscription right now. I don't have a Disney Plus subscription. I do have a Paramount Plus subscription so I can watch new episodes of Star Trek Strange New Worlds.
And I do have an Apple Plus, you know, an Apple TV Plus subscription because I'm watching Foundation Season 2 and before that I was watching Silo. With Netflix, I will subscribe a couple of times a year for a month. You know, I'm waiting for a new season of certain shows, so I just subscribed for a month so that I could watch the new season of Black Mirror. And if there's a new season of Love, Death and Robots, well, then I'll subscribe again and I'll watch that.
After I've watched the thing that I subscribed for, then I'll poke around and see if there's anything else that interests me. But you know, then I unsubscribe again because I'm cheap. I watch a lot of YouTube. I'm sure you do this, but there'll be nights when I will go to the streaming services that I'm subscribed to and I'll flip through a bunch of different titles and I won't select any of them. Like I'll see a movie, I'll think, oh yeah, that looks good.
I've been meaning to watch that, but it's after nine. I'm not ready to commit to a two hour movie. I think I'll probably want to go to sleep before this thing would be over. So I just pop on over to YouTube and I eat away the night, you know, 15, 20 minutes at a time. But I'm not watching that much stuff on streaming and I don't watch anything, you know, on broadcast TV. Young people play games, young people are online, you know, doing social media stuff.
Boomers are, you know, spending their nights on Facebook. So I think people are just watching a lot less filmed entertainment than they used to. In fact, the word filmed is anachronistic, you know, everything's shot on video these days. But we're talking about AI, and, you know, with AI, actors are in danger because studios want you to show up on set. They want to do a full scan of your body and your face.
They want you to adopt a variety of facial expressions, maybe enact a few sample scenes so that they can capture the range of your voice and then pay you for one day and say thank you very much and use that information to make media in perpetuity. That's the end of acting as a paid profession. I agree. That sucks for actors. Writing, you know, TV shows, typically there's a showrunner, somebody who has a vision for the show.
They're in charge of wrangling the writers and then they hire a bunch of writers. They get them all in a writer's room and they pitch ideas and they refine things and then they send people off on their own to do the actual writing. And while, you know, an episode of a TV show might have one or two writers credited, really everybody on the staff, everybody in the writers room had some input.
But even without AI, you know, seasons are getting shorter and showrunners are being tasked with more and more of the writing duties and they have smaller writers rooms. You know, instead of 12 people, they might have four. Saves money. Cheaper. So, let me play you another clip from a different monologue by Krystal Ball. Whatever you think of Barbenheimer, the explosion of cultural fascination with both films is basically a testament to our love affair with human creativity.
For once, studios took a risk on a few things that were truly new and different and they were rewarded with massive audiences and a flood of national discourse that has briefly recreated a monocultural event, the likes of which I really thought we might never see again. Ironically, this moment of delight in human imagination comes at a time when the very essence of creativity is actually under threat.
Big tech, in order to monopolize the new world of AI, is attempting to feed their models with the whole world of human ingenuity, scraping every bit of language, articulated vision and novel innovation that they can get their hands on so that their machines might impersonate a bastardized version of the human spark.
These so-called large language models can't create anything new, but by harvesting our musings, our pictures, our conversations, our stories, companies are hoping that the bots can be trained to mimic us well enough that we will accept their AI-derived products. Basically, they're trying to eat our souls and then sell them back to us. I would encourage you to go and listen to that entire monologue because Krystal Ball comes back again and again.
She flexes her creativity and her prowess as a writer by finding several different ways to say that content created by AI is really just stolen from human creators. It's just a soulless, mechanistic, cut-and-paste rehash of something that was originally created by a human being. There is some truth to that, but keep in mind, this is the beginning of August 2023. ChatGPT first became available for anybody in the public to use in November of 2022. It's less than a year old, this generative AI.
We've had generative AI in terms of the diffusion models, text-to-image generation, for more than a year, but not much more. For the LLM-powered, large language model-powered chatbots, it's less than a year that the public has had access to this stuff. This technology is very, very new. Krystal has assumed, as a point of ideological convenience, that this is it for AI. This is its peak capacity. It's never going to get any better than this.
And what it can do right now is not as good as a good human writer. Maybe she's right. Maybe today, August 2nd, 2023, the natural language processing abilities of large language models hit the wall. It'll never get any better after today. That's possible. I'm going to go out on a limb, though, and say that the technology is going to continue to improve for a good long time, and in fact, the rate of improvement is probably going to increase. Accelerate, you might say.
We'll come back to acceleration. Is it AI doing this? Not really AI itself. Capitalists, bosses, owners are using the changes in technology to claw back the gains of organized labor. But it didn't start yesterday. It didn't start with large language models. It didn't start with the gig economy. But you know, remember Uber, remember Amazon. There's lots of people doing online tasks for Amazon Mechanical Turk. They use Amazon as a platform to find these little micro gigs that they can do for micro payments.
But Amazon doesn't consider them employees. It doesn't take any responsibility for them. It doesn't provide any benefits. It just stands as an intermediary between the human being doing this little micro task and some client of Amazon's. It's an intermediary, but it's not the employer. Same with Uber. Uber drivers are not employees of Uber. How can that be? I mean, they drive the cars for this service, but the service says no, no, they're just contractors.
The technology has changed things such that the employers can step in and say, this situation doesn't exactly match the labor laws as written. So you know, we're going to interpret it to our maximum benefit. It's happened with drivers. It's happened with people doing these little micro tasks online. It's going to happen in a variety of industries.
It's happening with artists, you know, with diffusion models and text to image generation, and it's happening with writers, with large language models. The thing is, AI is coming for everybody's livelihood, everybody in the working class anyway. And again, when I say AI is coming for, I mean, the ownership class is using AI to come for everybody's livelihood, but not all at the same time and not all in the same way.
And if people in particular industries or particular job roles get together and push back, but only for the people in their industry, only for the people doing exactly what they do, that precludes a more working class wide solidarity. You know, if everybody were to lose their job to AI on the same day, we would understand as a society, we have to come to some new arrangement for provisioning people with the necessities of life.
But since it's happening at different paces in different industries and in different ways, and people are pushing back against it only from their small perspective, you know, only in their little domain, their pushback is far less effective than it would be if it was more systemic, if it was more widespread. And so the creeping nature of technological change, I mean, it's moving quickly now, but still from a day to day standpoint, things don't change that much.
They change dramatically over the course of a few months, you know, but from day to day, they don't change all that much. And so it's easy to ignore, it's easy to put off. And when you finally feel the pressure enough to act, you act as an individual or you act as a member of a small community or as an employee in a very specific field. That's not going to cut it. So I say, Krystal made an assumption about the nature of large language models, and she made it for ideological convenience.
She's on the side of workers against the ownership class. And I, you know, I agree that she should be. But at the same time, I think that time and events will prove her wrong about the limitations on the capacity of AI. Her whole position is predicated on the idea that human creativity is this very special field that will never be replicated, much less surpassed, by artificial intelligence. Well, I think that position is going to crumble.
I think she needs a better position, one that acknowledges that AI will in all likelihood continue to improve in its capabilities, even in areas that we used to think of as being the exclusive domain of creative human beings. And that brings us to the topic of acceleration in the capacities of artificial intelligence and people who want to slow things down and people who want to speed things up. All right. That brings us to accelerationism. The name itself is pretty self-explanatory.
It's an ism. So, you know, that means a belief system, a prescription of some kind. What's being prescribed? Acceleration. Acceleration of what, though? Typically acceleration has to do with technology, but also capitalist forces. And in fact, recently, people who advocate something called effective accelerationism have just mashed technology and capitalism into one thing that they call techno capital.
Accelerationism grew out of an academic experiment in England, at Warwick University, called the Cybernetic Culture Research Unit. And this was formed by a guy named Nick Land, along with some other people. And at the time, Nick Land was a leftist. But at some point, he had a sudden and dramatic change of thinking and he became ultra right-wing.
He moved to China for some reason and he basically founded a whole school of right-wing thought, which combined the technophilia of the singularitarian movement, you know, the people who were looking for a technological singularity, and people who were pretty explicitly right-wing, possibly even authoritarian. And as much as I think the word gets overused, one might even say fascistic.
Nick Land's writing is famously impenetrable. I mean, he comes from that portion of the academic left which makes its prose really florid and impenetrable, because, I think, what they have to say wouldn't take that many words if they just said it in plain English. And because they're inspired by, you know, mid-20th century French obscurantists, that's just how they write.
And Nick Land, as much as he's rejected the priorities of the left, has retained their writing style. So his writing on accelerationism is very difficult. It's dense. It is unnecessarily replete with $2 words. And because it is not very clear, you can read into it pretty much anything you want. And if you're a leftist and you want to hate on the right, then you can, I mean, it's a Rorschach test. You can read anything into it that you want.
So typically, if you say go to Google and you just, you know, very casually search for R slash ACC, which is to say right-wing accelerationism, it'll tell you that right-wing accelerationists want to use technology to bring back slavery and enslave women and, you know, reinstate the patriarchy. And surely there is somebody somewhere who fits that description, but it's not Nick Land.
And it's not most right accelerationists. But, you know, it's a move of convenience: rather than actually take the time to figure out what somebody you don't like actually means, just ascribe to them the worst possible thing they might mean and assert that it's fact. I mean, that is just how the left operates. It's how part of the right operates as well, but yeah. Accelerationism just on its own, without any sort of modifier, just means we need to move faster.
Whatever we're doing now, we need to do more of it and faster because that'll get us to a better place. Now, some people think it'll get us to a better place just in a linear progression of betterness. You know, things are good now. We push it further. It gets better. We push it further than that. It gets even better.
But some people, and this is particularly true of left accelerationists, think that we need to accelerate and accentuate techno-capitalistic processes because that's going to break capitalism.
And if you remember, earlier I said that AI, not under its own volition, because it doesn't want anything itself, but AI as wielded by the ownership class, is coming for everybody's livelihood. The secure living that people, and people like them, and the people who came before them in their industry have fought for is being taken away via AI, among other things. But it's happening at different speeds, in different ways, in different industries.
And if it happened to everybody all at the same time, then we would understand. We need a new economic system. This economic system is not serving everybody. It is serving a few at the expense of everybody. Now, a lot of people say that already. They see that already, but a lot of other people don't see it. If everybody lost their job to AI on the same day, you couldn't help but see it. And so you could offer that as an incentive for accelerationism. Hey, this is bad.
Our resistance to it is ineffective. Let's just push it as fast and as far as it'll go in the direction it's already going so that the whole thing breaks. That's the left accelerationist viewpoint. There are two distinct right accelerationist viewpoints. One of them, and Nick Land is, as I say, a famously impenetrable writer, but this is what I think he means.
We should push techno capitalist processes because that's going to bring about the creation of a new form of intelligence that will transcend human intelligence. This is the singularity. Most people won't get a piece of it. Most people aren't going to get uploaded to the cloud. Most people are not going to enjoy biological immortality. Most people are just going to get thrown away and pushed off by the wayside, but that's okay because in the long run, all humans are going to die anyway.
And what we're looking for is our glorious technological destiny. Now, I imagine Nick Land would think that that's a caricature of his position, but I think it's a fairer caricature than you'll get from most people on the left. So that's kind of a dark, exclusionary singularity vision of right accelerationism. The other right accelerationist viewpoint is the one articulated by Mencius Moldbug, aka Curtis Yarvin. Curtis Yarvin is a neo-reactionary.
He's also a monarchist, and what he would like to see is just a fragmentation of all of these big political powers, like the United States or the EU or China, into tiny little fiefdoms, little city-states, each of which is ruled by a monarch of some sort, preferably a hereditary monarch, so that the monarch has a long-term vision, has skin in the game that extends beyond his or her own life, rather than just to the end of the current term to which a politician has been elected.
But ultimately, it's not a technological vision.
As far as I understand it, it really is kind of a pining for the past, an acceleration toward a collapse of the current sprawling, large, integrated political systems that use technology to effect that integration, into something which is fragmented and individualistic, not in terms of an individual human being's range of options in the world, but in terms of the variety of cultural expression that you get if you take one country and break it up into 400 little city-states.
So that's accelerationism: sort-of-neutral accelerationism, left accelerationism, and two varieties of right accelerationism. If you're in tech, you certainly know the name Marc Andreessen, but if not, he is the guy who co-founded Netscape and was integral in creating the first widely used web browser, Netscape Navigator. Now, I say widely used. It wasn't the first web browser. That's not what I'm saying. It certainly came after Mosaic.
But Marc Andreessen and other people, including Jeff Bezos, have now openly started calling themselves effective accelerationists. And that's abbreviated E slash ACC. And he's got it in his Twitter handle. I know, Elon Musk changed the name to X. Just let that go. So what does that mean, effective accelerationism? And what do effective accelerationists want? Well, remember Sam Bankman-Fried and the FTX crypto exchange, which crashed catastrophically.
Bankman-Fried is either in prison or under house arrest or awaiting trial. I don't know exactly where he is in that process. But he and his parents were advocates of something called effective altruism. Now, maybe you've heard this: people have done audits. People have investigated different charitable organizations and discovered that most of the money that people donate to those organizations goes to the organization itself. It goes to pay executive salaries. It goes to pay for facilities.
It basically is upkeep for the organization. And not much of that money actually goes to help people. In terms of philanthropy, it's just not very effective. So number-driven people, data-driven people, wanted to bring the principles and the discipline of data science and mathematical thinking to the topic of charitable giving. They wanted to find the charitable giving opportunities that would create the most benefit for the people in need. They wanted effective altruism.
Well, effective altruism was championed by the Silicon Valley set, by tech-centric people who tend to be easy targets for disingenuous attacks, because they often lack social skills and the ability to discern when they're being attacked. So even before the crash of FTX and the downfall of Sam Bankman-Fried, effective altruism, which is typically abbreviated online as EA, not Electronic Arts, but effective altruism, already had a tarnished reputation.
It was already starting to get a bad name. But then FTX collapsed, and when its poster boy, who was also the poster boy for effective altruism, saw his reputation take a nosedive, the rats started abandoning the sinking ship of effective altruism. And I think that effective accelerationism is a play on that. It's saying, hey, this is the next step in the evolution of this concept.
But it also takes something which is familiar. Think of Ray Kurzweil, who is probably the best-known advocate for and articulator of the idea of the technological singularity. He explained the inevitability of the singularity using something he called the law of accelerating returns. The claim is that it's a function of nature, a function of the universe, that over time self-replicating structures take shape, and they're slow in replication and slow in their evolution at first.
But each replicator creates a new style or a new form of replication that moves even faster than the one before, orders of magnitude faster. So we don't know exactly what came before DNA, but something did. And then DNA came along and it evolved at the pace of evolution via natural selection. But eventually some organisms, which evolved at that speed, they grew big brains and they started to think about things. They invented language or discovered language, however you want to think about it.
They started writing things down. They started transmitting information from one generation to the next. And that spiraled out into all kinds of cultural forms and cultural artifacts, many of which encoded information, transmitted information, and a very slow evolution by natural selection gave rise to culture and an accelerated sort of evolution in the realm of cultural replicators, otherwise known as memes. Back before the word meme came to mean a picture with a funny caption.
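Kurzweil's law of accelerating returns is more a qualitative claim than a precise equation, but just to make the shape of it concrete, here is a minimal toy sketch in Python. The doubling times below are illustrative numbers I'm assuming for this example, not Kurzweil's own figures: each class of replicator is modeled as simple exponential growth, with each new class given a doubling time orders of magnitude shorter than the one before it.

    import math

    # Toy doubling times, in years, for successive classes of replicators.
    # These are illustrative assumptions for this sketch, not Kurzweil's figures.
    doubling_times_years = {
        "genetic evolution": 1_000_000,
        "cultural transmission": 1_000,
        "digital technology": 2,
    }

    TARGET_FACTOR = 1_000  # how big an improvement we ask each replicator class to reach

    for name, doubling_time in doubling_times_years.items():
        # Exponential growth: time to reach the target factor is doubling_time * log2(target).
        years_needed = doubling_time * math.log2(TARGET_FACTOR)
        print(f"{name}: roughly {years_needed:,.0f} years to improve {TARGET_FACTOR}-fold")

The only point of the toy model is that each new kind of replicator compresses the relevant timescale by orders of magnitude, which is the acceleration Kurzweil is gesturing at.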
So when Ray Kurzweil talked about the law of accelerating returns and how biology gives rise to culture, which then gives rise to information technology, which moves even faster than culture, which in turn will give rise to something else, which moves even more quickly still, he didn't tie economics into it in any explicit fashion. His theory was that it's a law of nature. It applied everywhere.
It would apply in communist top-down authoritarian regimes as much as it would in a capitalist system. But the effective accelerationists have abandoned that. They have explicitly tied the benefits of accelerating technology to the benefits of capitalism, saying that one doesn't really work without the other, such that, without blushing, billionaires say, yeah, the process that made me a billionaire (and the part that goes unspoken is: and impoverished so many other people), that's good stuff.
We want more of that, more of it, and faster. Now, Twitter, or X, is where this is really taking shape and evolving. You can follow the evolution of effective accelerationism there in real time. So I'm going to read to you a bit from a story in Business Insider. The title is Get the Lowdown on E slash ACC or Effective Accelerationism. And just for simplicity's sake, since E slash ACC appears many times in this article,
I'm just always going to read it as effective accelerationism. So the title is Get the Lowdown on Effective Accelerationism, Silicon Valley's favorite obscure theory about progress at all costs, which has been embraced by Marc Andreessen. There's an obscure theory doing the rounds in Silicon Valley as it quickly becomes the new ideological hobby of tech's power players. It's called effective accelerationism.
On Twitter, now rebranded to X, some of the tech community's most prominent figures, including veteran investors Marc Andreessen and Garry Tan, have decided to include the term effective accelerationism in their usernames as a badge of allegiance to the vision. So what exactly are the underlying tenets of effective accelerationism and why is it having a moment right now? Let's start with the name.
It's a bit of a play on effective altruism, the social movement focused on an evidence-led form of philanthropy, which was infamously embraced by Sam Bankman-Fried, the disgraced founder of crypto exchange FTX. The ideas of effective accelerationism appear to have their genesis in the theories of Nick Land, a British philosopher who lectured at the University of Warwick and who has come to be known as the father of the broader accelerationism movement.
The more formalized effective accelerationism idea has taken shape on Twitter and through Substack newsletters since 2022. The basic idea of the philosophy is this: in a technological age, the powers of innovation and capital should be exploited to their extremes to drive radical social change, even if that means completely upending today's social order.
The first effective accelerationism post, co-authored by users named @zestular, @creatine_cycle, @BasedBeffJezos, and @bayeslord, said technology and market forces, which they term techno-capital, are accelerating with a force that, quote, cannot be stopped, close quote. Techno-capital can usher in the next evolution of consciousness, creating, quote, unthinkable next-generation life forms and silicon-based awareness, close quote, the post said.
In an effective accelerationist world, no idea that offers hypothetical value should be considered too absurd, too dangerous, too out there to make a reality. For effective accelerationist adherents, the path of progress at all costs would, in theory, make possible any imaginative idea with a purported benefit to humanity.
That could mean justifying the development of something as outlandish as Dyson spheres, physicist Freeman Dyson's theoretical megastructures which would surround a star to harvest its energy, or something closer on the horizon, like artificial general intelligence. So I'm going to stop now. The next section is why is it happening now? Well, I think I've already telegraphed that.
One, it's taking the baton from effective altruism, and two, it is the rollout of the large language model chatbots which has got everybody so excited about artificial intelligence. Now I've mentioned before, although maybe this is the first time you're listening to one of my podcasts, in the 90s I went to grad school for philosophy, and my area of emphasis was the philosophy of science and the philosophy of mind, and I was laser-focused on artificial intelligence.
I was obsessed with the future of artificial intelligence back in the 90s. That kind of went away over the next few decades, and for a time I was explicitly a techno-doomer. I thought that the techno-industrial system was about to crash because of a lack of hydrocarbon energy. Well, the crash didn't come, and AI is here, and things are getting weird and moving fast now, and here's the thing. I don't know.
I don't have an intellectual conviction, whether we have hit the knee of the curve, the inflection point in this rate of increase where things are just going to move faster and faster from now on, or if we're experiencing a moment of punctuated equilibrium where things jumped quickly with the introduction of the large language models, but then they'll slow down as the entire culture learns to absorb this potentially disruptive innovation. And I don't even know what to hope for.
I've done a lot of things for money over the years, but the thing that I've done the longest is podcasting, which certainly would not be possible without the internet and without social media. I mean, I suppose podcasting just as an RSS feed, a way of delivering audio files to people, would be possible without social media, but how would you tell people about your podcast? How would they discover it?
Really, my livelihood has been built around the internet for my entire adult life, and yet I can easily entertain the idea that my life would have been better if the internet had never been developed or if I had been born earlier. So I don't really have to spell out how artificial intelligence is related to this particular topic as I'm stringing together talk of the UAP Senate hearings and the Hollywood writers and actors strike.
I mean, this particular topic is explicitly about Silicon Valley and Silicon Valley tech lords who think, yeah, everything that we've been doing has made the world better. We need to do a lot more of it a lot faster. And we cannot slow down regardless of who claims that they have been hurt or any claims that we are plunging ahead heedlessly into increasingly dangerous territory.
The idea that the singularitarian vision, the idea that we're headed for a technological singularity which will basically solve all problems and create a whole host of new ones, but solve all the problems that we currently face, that has been the sort of cult religion of the Silicon Valley tech elite for quite a while now, but it's morphing in this moment. It is morphing from a technological singularity to a techno capital singularity. So what do we do about it?
I hate to say it, but as far as I can see, there's nothing to do. Just watch, pay attention to what's happening or don't. Occupy yourself with whatever inspires you, whatever gives you energy, whatever motivates you to get off the couch and go interact with the real world and people. Do that. I mean, I'm clearly obsessing over these topics and making lots of media about them, thinking about them, talking about them, writing about them.
But the whole time I realized that I'm mostly just an observer here. I mean, I'm a citizen. I am a participant in the economy, but I seem to have very little leverage over these larger processes. Even with my interest and my background, I'm hard pressed to keep pace with developments in artificial intelligence and to understand what's happening. So what do we do about it? I mean, here's the call for feedback. What do you think we could possibly do about it? You could reject technology.
I mean, I certainly wouldn't be the first person to say, hey, don't upgrade your cell phone. Don't get a new one. When the current one breaks, don't replace it. You've heard people say that before. You may have nodded. You may have thought, yeah, my life would probably be better. And then you went and replaced your phone with a newer model. It's kind of like saying, wasn't the world better when we didn't drive so much, when we walked more or we rode horses?
Well, you still get in your car and you drive to work probably. Maybe you say, and I've heard people say this, I will never interact with an artificial intelligence. Yeah, you will. You certainly will. You probably already do and don't know it, but you certainly, certainly will. I mean, that's kind of like saying, I'll never show ID. I'll never get a driver's license. I'll never get a passport. I'll never knuckle under and be the subservient little citizen who shows ID upon command.
Yeah, you will. You may not like it. You may explicitly object to it. You might rail against it, but you'll knuckle under. You'll fall in line. And with AI, it won't be a matter of, you know, at least not at first and at least not only, it won't be a matter of you being coerced into using this thing. It'll just be that your life gets easier when you accept the blessings that are bestowed upon you by artificial intelligence.
And I hadn't planned to talk about this, so I'll keep it short and light. This sort of observer status that I'm describing, you know, that I occupy in terms of technology and culture, I explicitly apply it to electoral politics as well. I just refuse to get bent out of shape over, you know, which corporate party is in ascendance in a given moment. I don't live in a swing state. My vote does not matter.
And ultimately, nobody's vote matters because policy does not reflect the needs or the preferences of the majority of voters. It reflects the needs and the preferences of the donor class. People like Marc Andreessen, people like Jeff Bezos. Politics serves them. It doesn't serve you. It doesn't serve me. And it doesn't matter how you vote. That's not going to change.
So we're coming up on an election year, a presidential election year, at a time when AI is getting really, really good at faking stuff, at manipulating us, at not only curating the content that other humans have created, which will get us angry and get us animated, but also creating variations on it just algorithmically, automatically.
If you are either constitutionally susceptible to that sort of political outrage, or if you over time have been transformed from somebody who's kind of a take it or leave it, easy going kind of person into a political fanatic, you're going to get your strings pulled all day every day in the next couple of years. And you might think that you are the driver, that you are the puppet master, but you're not.
Up until this point, the puppet masters, while they have used technology, they have used artificial intelligence, they've used social media, they've used vast computer networks and surveillance systems to gather data on you and figure out what it is that will get you to respond in the way that they want you to respond. It's mostly been orchestrated by humans. But I think over the next decade, that's going to shift. And the puppet masters will more and more become the AI.
Will that be for the good or will it be for the bad? I don't know. I don't know. All I do know is it's not going to serve me. It's not going to improve my quality of life to get bent out of shape over it. So I am resolved for this coming election cycle. I'm not going to advocate for any particular candidate. I don't care who you vote for. Not interested.
I'm just watching to see how the introduction of artificial intelligence, specifically large language models and diffusion models for creating images, but new types of tech that will certainly be coming online in the coming months and years. I'm just watching this election to see how these new forces are coming into play. Now, I'm not omniscient. I won't be able to see everything happening. I'll have to intuit. I'll have to piece things together.
I'll have to look to other people for their perspective and for their interpretations. But when it comes to electoral politics, that's where I am. My interest is artificial intelligence. Politics is one arena in which it makes itself felt, in which it manifests. And that's how I'm going to treat politics, as an arena in which something that I'm interested in plays out.
But in terms of who wins the contest, I can't say I don't care, but I'm resolved not to make a point of talking about my preferences, my hopes, my desires, or more importantly, my irritation with other people who have different hopes and desires and intentions. If there's a prescription at the end of all this, it's just be kind, be patient, be tolerant, be forgiving. This is going to be a weird, confusing, exciting, stimulating time for all of us.
And all of us, at one point or another, are going to get worked up in a way that we do things and say things that we will regret, you know, when our blood is running cooler. If you yourself get out of hand, forgive yourself. If somebody you know gets out of hand, forgive them. If they irritate you, take some time away, you know? Better to not talk than to have stupid arguments that don't really serve you or them.
You know, that you just make yourselves vehicles for this larger orchestrated contest in which, no matter whether you win or lose the given argument, you lose. As Matthew Broderick's character learned in that film in the 1980s, what was that film? WarGames. For some games, like global thermonuclear war, the only way not to lose is not to play. So the cultural struggle around, you know, the coming election, I'm just announcing my intention not to play. All right.
Well, I think that brings us to the end of this podcast. Thank you so much for listening. I put up a whole bunch of stuff on my Patreon feed. Almost none of it is behind the paywall. So if you see a link of mine that goes to Patreon, don't assume that you have to be a Patreon supporter of mine in order to access the content there. This is mostly for people on Twitter. Or X. I also have a Substack, and you can find links to my Substack articles on my Patreon feed.
The Patreon feed has got the easiest of URLs. It's patreon.com slash KMO. Or for a rhyme for added accessibility, KMO.show is another place where you can get tuned into my content, which takes various forms and exists on various platforms. All right. Enough pimping. I will talk to you again pretty soon. Stay well.