Hey everybody. Today my guest is Kenneth E. Harrell. He is a cybersecurity professional and a science fiction novelist. He came to my attention when I got a notification from Substack saying that Kenneth had started recommending my Substack blog. So thanks to Kenneth for that. Anyway, Substack is full of men writing science fiction novels that go unpublished by the official publishing apparatus. Not surprising, given the direction of the cultural winds these days.
They may not get published, may not be widely read, but they're sure fun to talk to. So here's a conversation with Kenneth. I am joined by Kenneth Harrell, a fellow Substack author, science fiction novelist, and thinker of things technological and futuristic. Kenneth, it is good to talk to you. Hey, it's good to be here. Alright, so I'm looking at the Amazon description of one of your novels. It's called Awakening, and I'm going to read just a little bit of the description here.
In the aftermath of humanity's last golden age, civilization reached unimaginable heights, only to crumble into dust in an instant. Now thousands of years later, Earth is a transformed world. Nature has reclaimed the planet's greatest cities and colossal abandoned megastructures stand as eerie testaments to a forgotten past. So I'll stop there and let you continue in a more informal way talking about this book. Okay, so I had this idea in my head for quite a while.
The story idea had been rolling around. I had images, concepts in my head, talked to friends about it, but I just hadn't had a chance to actually sit down and write it. After a layoff, I was sort of doing a reassessment of what did I really want to put my energy and efforts into, because frankly, I'd kind of been frustrated by work. I reached a reasonable level in my career, but I wasn't getting a whole lot of life satisfaction out of it.
And there were a lot of things that I'd already done and experienced. I just started to decenter work in terms of what's really important to me. I didn't want to move into the director level. So I started to think about what I really wanted to do, and I talked to my wife about it.
And she's like, well, you've been wanting to write that story, you know, and at first the idea was in the form of a screenplay, but I've been thinking of it more in the form of a novel or maybe a graphic novel. So one day in 2018, I sat down and started writing out my basic ideas on a whiteboard. I started thinking about the history of how this world was going to work, what the technologies were, did a ton of research, read a lot of different papers.
And all of the technologies in the book are based on something; I just add science fiction elements to them. But that's what got me into writing. Then once the pandemic hit, I had a lot of time on my hands. I was working at home full time and just had a lot of time to think; I wasn't really going out much. So I just dove into book world, both reading and writing.
And so as scenes would come to me, once I had my outline, I would write them out, because I write completely out of sequence. Whatever scene happens to come to me, I just write it, then figure out where it goes in my outline and put it there. So I might be thinking about a scene between, say, a character and their parents, or between two different characters, or a flashback. I'll just write that scene, and I know where it's going to go.
And I sort of put it in that area and move on. But I have a habit of writing out of sequence. So it sounds like you plot out the story before you start writing it. I usually do, but it sort of evolves; it's never locked in. Things happen, as you know, during the course of writing a story, and you get different ideas. My approach is such that I will outline it.
I don't know if you've ever had this experience, but it really does feel like the story is kind of coming from somewhere else. Oh, yeah. And I'm basically just sitting there trying to very quickly write down what it is that I hear. But once that starts going, once you can get that voice of the muse going, the story practically writes itself. You just set up a scene; you know what you want to achieve by the end of that scene.
And then it becomes a matter of really just trusting the muse, trusting what you hear, and writing it down. I'd say it's kind of a collaborative process, the way I see writing: it's basically you and the muse, or whatever your process is, and you're just listening to the story as it comes to you. Well, you know, earlier today I was looking at your Substack, and you don't seem to post long-form essays all that regularly.
But you can learn a lot about somebody by just clicking over to their likes and seeing what other things on Substack they've liked, and also their activity; you can see what they've reposted. And it seems that a lot of your interest and concern, at least when you're on Substack, involves presentation to the public, basically relating to an audience as a creator online. Is that central to your thoughts these days?
Yeah, I mean, I joined Substack essentially to market the novels. I talked to some marketing people, but I just didn't like the approach that they wanted to take. They really wanted to go hard and heavy into social media. I've kind of given up on social media. I've got a lot of opinions about it, but I just really didn't want to do that.
I wanted to have someplace where I could talk about my novels, where I could explain what my perspective is on my own stories. And I didn't really think that any of the other platforms were appropriate for that. But I had heard about Substack for a while, had seen articles there for a while, and it seemed like a space where you could do that. That's why I started writing on Substack. And what sort of people are you connecting with on Substack?
Mostly other writers, but I'm also interested in people who are interested in things like AI, cybersecurity, technology, and books in general. I'm really not on Substack for political stuff, just because I think that political things right now have an outsized influence on everyone's life, and I'm not sure that's the healthiest way to live. I agree. I was thinking, if you were alive back in the 1800s and something went down in Washington, you found out about it like a month and a half later.
Now you can watch it as it happens. And I'm just not so sure we were ever meant to have our minds so heavily involved in politics. I just don't like it; I've grown exhausted of it all. So I try to stay in book world. I read other authors, I write my own stories, and I do a lot of reading of things that I just find interesting. Like, I read a Russian study on gravitational waves a while back. Didn't understand most of it.
But when I was growing up, my mother challenged me. She said, if you don't understand something, basically just keep studying it until you do. And so I will often read things that are way over my head. But I try to understand them.
And when it comes to certain things, especially with regard to physics, the only thing I really regret is that the extent of my understanding stops at metaphor, because to truly understand it, you have to get into the mathematics of it, and I'm just not there. Yeah, I'm not either. And also, I'm not all that interested in reading something that's really super technical. Yeah, yeah.
Well, my novel is technical to a degree, but it's mostly explanations of how the technologies work. I remember when I was looking around for an editor (this was when I was going to take the traditional publishing route, which I have now given up on), I came across a couple of editors who told me rather strange things, like, oh, it's too techie for me. I'm like, well, I kind of explain how every technology works.
Had another guy who just couldn't grok the idea of an artificial planetary ring. He's like, well, I just don't see how you would ever be able to build that. And I'm like, well, I kind of did explain it in the book. Well, what's more, Larry Niven's Ringworld is a decades-old classic, a well-established chestnut in science fiction.
So you know, if your editor can't understand a planetary ring, much less a ringworld or a Banks orbital, then they're probably just not suited to editing a sci-fi novel. Yeah. And it really wasn't even on that scale. The planetary ring I was talking about is something that would be orbital; it sort of goes around the planet.
Larry Niven basically took a planet and turned it into a strip: you take a planet, you unfurl it, and you put it on the inside of some type of incredible megastructure made out of God knows what. And yeah, mine wasn't even at that scale. It's just a planetary ring, nothing like that. And I explained it in the book. I'm like, hey, it's molecular self-assembly; it's fully explainable.
I'd been thinking about different ways that you could build large-scale structures in space, and I was like, well, what about self-assembly? I got the idea from reading a couple of articles a while back about experiments that have been done with robots that build and construct things, and I extrapolated that to nanotechnology.
I'm like, well, what if we took nanotechnology and just combined it with AI, where you could have something like generative physicality? In the same way that we use an LLM or a generative AI model to generate a picture, you could do the same thing with matter. And so I set up a situation in my book where that's possible. You basically give a description of what it is you're trying to make and what the function should be.
And then the nanotech will just build that. You don't necessarily need to know the nuts and bolts of how something works. You just say, I need something that has the following features and functions in this way, and then it will just sort of build itself in front of you. But I did really explain how all of the different technologies work. So to the degree that it's technical, it's just an explanation of how those things work.
Right. You know, I'm looking at your three titles here on Amazon: Awakening, Body of Work, City of Dreams. And there was one other I saw. Where is it? Oh, there's Body of Work: The Enforcer. Right. So the Body of Work series is set in the same universe. One of the things that happens in Awakening is that mankind starts to go out and spread across the galaxy.
And I wanted to go back and say, hey, you know, what happened to those colonies? You know, Earth may not be around anymore. But what about all those colonies that humans created? So Body of Work is set in one of those colonies, a place called the Union of Worlds. And they essentially were a colony of Earth and Mars that broke off and became its own thing in the post-collapse era. Well, I noticed that you have only Kindle editions listed here. Are there not paperback editions as well?
There's a paperback edition of Awakening, but the Body of Work series right now is just Kindle. That's because it's a short story, and I wasn't sure it was worth it; it just doesn't seem very cost-effective to have a print version. I have a couple of print copies that I gave to friends, but I just thought that for what you're getting, at that length, the book is just better in Kindle form. Gotcha. Well, I've published on Kindle and paperback. I have not done a hardback.
And I would very much like to do an audiobook, because most of the sci-fi I take in these days I take in as Audible books. And Amazon owns Audible, so they're very integrated there. They have a function where an AI narrator will read your novel. I haven't done it because I want different character voices. I think you can choose different character voices.
They have like a production option, where there are a couple of different choices for how you can generate that. I also found another company that will do audiobooks; I'll have to find it and send it to you afterward. But you can technically do this via ElevenLabs. However, long generations tend to have more of a compute demand, and you'll start noticing a degradation in audio quality the longer it goes on.
It also doesn't always pronounce certain types of words correctly. Like, if you have a lot of specialty terms, it may mispronounce those. You can add phonetic corrections in, but it's very labor intensive. So technically, if you're willing to put in the time, willing to do the editing, and willing to do a lot of generation, you can do an entire audiobook using ElevenLabs. But it's not at the place where it should be.
Ideally, what would happen is you would take your EPUB file and upload it, and then you would say, okay, I want this voice for these characters and this voice for those characters, and you would basically be able to lay it all out. That's what would be ideal, and I'm sure something like that's coming, but that's not where we are right now. Well, I did use ElevenLabs to do an audiobook for a novella that I published on Substack.
But I just ended up reading the whole thing myself and then using ElevenLabs to change the voice, because there's a female protagonist, so I made it into a female narrator. Oh, nice. Yeah. And I had to do that to get the right intonations and also the right pronunciations of characters' names. Yeah. And I think ElevenLabs right now has a voice-to-voice feature. Did you do it through that? Or was it just text-to-voice? No, it was voice to voice. I had to read it. Voice to voice.
Yeah. Nice. Yeah. So it's a crazy amount of work. I mean, I'm basically creating the audiobook myself. My reading is not perfect on the first go, so I would have to do a long recording session, edit that session, and then upload that file to ElevenLabs. And it costs money; it's not free. Yeah. And don't you have to have a pro account? Like the most expensive account to do it? No, I don't think so. I think I had the cheapest account. Oh, wow.
Yeah. So how long were your generations? Like over a paragraph or under a paragraph? I forget. I think it was a maximum word count, and I just don't remember the number. It was more than a paragraph for sure; maybe a minute or two. That's interesting. Yeah, because I found with really long ones that the audio quality just drops, right? The volume drops and the audio quality drops off. So I had more success with shorter generations.
Like for all of my Substack articles, I have an audio narration, and it is my voice, but it's an AI version of my voice. So I have to do it in chunks and then edit them all together, which is a lot of work. And it seems like the sort of work that would be good to hand off to an AI, and the AI is just not there yet.
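For readers curious what that chunk-and-stitch workflow looks like in practice, here is a rough sketch. The `synthesize` function is only a placeholder for whatever TTS service you happen to use (ElevenLabs or otherwise, via its own API), and the chunk size is an assumption, not a recommendation; the rest is plain Python.

```python
def chunk_paragraphs(text, max_chars=1500):
    """Group paragraphs into chunks no longer than max_chars each,
    so no single TTS generation runs long enough to degrade.
    A lone paragraph longer than max_chars passes through whole."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        candidate = (current + "\n\n" + p).strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = p
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def synthesize(chunk: str) -> bytes:
    # Placeholder: call your TTS provider here and return audio bytes.
    raise NotImplementedError

def build_audiobook(text, out_path="audiobook.mp3"):
    # Generate each chunk separately, then stitch the audio back together.
    with open(out_path, "wb") as out:
        for chunk in chunk_paragraphs(text):
            out.write(synthesize(chunk))
```

In real use you would also want per-chunk retries and a listen-through pass at each seam, since that editing step is exactly the labor-intensive part discussed above.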
And that's, I think, a microscopic example of people's experience of AI writ large, which is: man, this stuff is really amazing, and it seems like it should be able to do all kinds of stuff that it's just not very good at. Yeah, I think we'll eventually get there, and it won't take years and years, because there are capabilities we have now that we didn't have even a year ago. So I think we will eventually get there.
But you know, this AI thing has been pretty exciting, and we also have to be realistic about it. I think the place where this will probably land is that it will be another subgenre. Like when you were 14 or 15 (I think all boys did this), you'd sit around trying to think of bizarre movie and story combinations: man, what if Batman fought Darth Vader or something? Well, you'll be able to do all those scenarios now.
And I think it will just be another thing that people do. I don't believe we're headed towards a future where people are going to sit around watching AI movies and reading AI stories. I think it'll be thrown into the mix, yet another thing that folks are doing. But people really do have to write stories. I do think that AI can be used to enhance and ease that process so that more people can do it.
But all of this reminds me a lot of what happened with desktop publishing and drafting in the 90s. There were a lot of people in the publishing business, and even in school, who resisted it. If you were writing your papers using a word processor, I had certain instructors who would insist that you type them instead. I'm like, well, what difference does it make? It's like, well, if you just press a button, it'll do your paper for you. I'm like, no, that's not what's happening.
And I remember with desktop publishing, they used to have these service bureaus. I don't know if you remember them, but they were these places you'd go that had everything: computers, printers, and if you needed a plotter, they had those too, and you could rent the time. They were very popular in the early-to-mid 90s. Gosh, this was '94, '95-ish.
And I remember at that time, a lot of people just dumped out of the drafting and art business, because they said, well, it's all being taken over by Photoshop and all of these Adobe products, and they just gave up on their careers. The thing is, whether you're a writer or whatever you're involved in, you have to embrace these technologies and integrate them in. And if you look, that's what people have always done.
So, like the great artistic masters: people think they just drew that stuff straight from hand. And what I found out is that a lot of them used all kinds of complicated optics to create projections that they would then trace over. Have you ever heard of a camera obscura? Oh, yeah.
Yeah. So oftentimes works were created using a camera obscura, where you have this box with a pinhole, the subject outside, the artist inside. And although the image is inverted, you just trace over what you're seeing, and that gives you a foundation upon which to build. Now, some people think of this as cheating. I don't think it's cheating; it's just utilizing the technologies that were available.
As far back as you can look, writers and artists have always used whatever the latest tech is to further their craft, whether it's the stage, music, art, or, in our case, writing. So I think we just have to figure out how to integrate these technologies into what we're doing. But I think the losing game belongs to all of these people who are really into no AI: no AI art, no AI anything.
And I'm like, well, you're missing out. Because I treat AI like a buddy that sits on my desk, and I say, hey, does this scene make any sense to you? I'll put the scene in and ask, how can I make this scene better in terms of tone, in terms of what's going on? So first I make sure that it, in quotation marks, understands the scene, and then I start to improve it. So I don't think it's a matter of cutting and pasting from your LLM.
It's working with the LLM to make your writing better. Some of the suggestions I've gotten are not exactly what I want, but they give me ideas to write something else. And that's kind of how I think about a lot of these tools and creativity. I call it a gumbo starter for your creativity or your writing. Anybody who's ever made gumbo will know: I'm terrible at making a roux, which to me is the hardest part of making gumbo.
So I usually use a starter mix, right? Well, that's kind of how I think about generative AI art, LLMs, things like that. They can be a booster to your creative process; they can help it along. Among the AI tools that I use, there's one I use in Word called ProWritingAid, and that's completely replaced the need for an editor for me.
And then for everything else, I use a custom GPT that's been trained on things that I've written. I've been trying to capture my voice in the LLM, so I trained it on all this stuff: poems I wrote, blog articles I wrote in the past, old screenplays. I dumped it all into the custom GPT, so that when I'm going through scenes, I can say, hey, how can I make this scene better? And it'll give me suggestions. Then I take some of those suggestions and mix in some of me.
And that's how I've been working since I integrated some of these tools into my work. With regard to art, I used to just go to DeviantArt and randomly look around at different artists to try to get ideas. Now I can describe exactly what it is that I want to see and generate it using something like Lexica, which I use often, and then just have those images around. I'll make a bunch of different versions.
And between the AI art and the music and just getting myself in that zone, the story just starts to come. It's literally like turning a faucet on, and the story starts coming. I've noticed that the AI image generators came on the scene a little bit ahead of ChatGPT, and of GPT-2, GPT-3, and 3.5. When they put out ChatGPT, that's when a lot of people first woke up to transformer-based LLMs. And so there was the pushback from the artists.
And mostly I think it was pushback from aspiring artists against AI image generation. That came a little before the current AI pushback, which I think is now oriented more toward text generation. And I remember seeing really rapid improvement in the quality of AI image generation over the course of 2022 and 2023. I was using it a lot for a while, and then I realized I don't use those things very often anymore. I'm just not all that interested.
I noticed you're talking about your custom GPTs and whatnot, and I just realized that we have a new cultural split over technology emerging, like PC versus Mac or iPhone versus Android. You're one of those OpenAI people, and I'm over on Claude at Anthropic. I don't much care for OpenAI; I don't like the off-the-shelf voice of ChatGPT. I much prefer Claude. Yeah. Also, the device I was thinking of earlier is called a camera lucida.
So this is a portable optical tool, patented, I think, back in 1806, and it was used by the old masters to create artwork by tracing over a projection; that tracing would give them the foundation, and from there they would just move on. It just shows that, if you really look at it, artists have always integrated new technologies and techniques into their work, and I think we should probably continue to do that.
And yes, I do consider writing to be an art form. Do you follow Brian Chow? No, I've never heard of him. Well, it's one of those things; there are so many people online saying good stuff, you can't follow them all. But he was talking about the sort of provincial and small-minded response to AI-generated text, and how, in his opinion, the ideas are what's important.
And if all you have is your style of expressing the ideas, and that style can be duplicated algorithmically, well, then you don't really have much to offer. He's not very sympathetic to people who are filling up moats and defending their territory against transgression by artificial intelligence. And I'm not sure I entirely agree with that. But I wrote about this just recently.
I'm not tempted to have an AI write anything for me, because the stuff it puts out is just sort of bland and generic and not very interesting. And I've noticed that if I'm on Substack and I come across something that feels AI-generated to me, I just move on to something else. So I'm not remotely tempted to have an AI create something for me that I then put up on the web. That just seems like a waste of time. Yeah, it's a complete waste of time.
And that's not really the way you use it. I think about AI the same way you'd think about a friend hanging around at your house: you say, hey, read this thing for me and tell me, does it make any sense? And how can I improve it? I look at it as being no different from that. Yeah, it's something to help you improve what you're doing, not something to replace what you're doing.
Now look, there are going to be a lot of people, and there are now, who take AI-generated books and just put them up on Kindle. It's kind of their thing. I just don't think that's the way to go. That's not a proper use of AI. AI should be sparking your imagination; it should be helping you be more creative and a better writer, not doing the work for you. That's not the idea.
Now, if capabilities continue to improve, though, a few years from now we might be having this conversation and saying, you know, I notice I'm not reading any human-written stuff anymore; pretty much everything I read is completely AI-generated. And at that point, it's a very different story. Once AI is doing things like that better than the best humans, we live in a very different world. Well, a lot of writing is very formulaic.
I mean, sports stories have been written by AI since the 90s, because sports reporting, like when you're reporting on baseball, is very formulaic in terms of what happens. There aren't a zillion things that can happen; there's only, let's just say, maybe a thousand different things that could possibly happen. It's not an infinite number of options.
And I remember reading articles in 1999 saying that a lot of sports articles were just automatically generated. These were really primitive compared to what we have today with LLMs, and I'm not sure what the model was at that time; I think they were just using pre-programmed rules. But you know, if a score is made, the score is made by a person.
If something happens, it's done in a particular way, and all of that can be scripted out. So there's a lot of AI-generated content that's been out there in different forms. But I just think it's going to get increasingly difficult to hold this sort of no-AI stance, especially when it's being integrated into absolutely everything. I mean, it's being integrated into Office; it's already integrated into many Adobe products.
It just seems like it's going to be difficult to have a no-AI stance realistically, especially if it's integrated everywhere and into everything. Yeah. I mean, there are certain conversations I see repeated online again and again, and I don't insert myself into them; I just kind of stand back and watch people carry out the rote moves. But lots of people just post on social media declaring they will never use AI for anything.
And invariably somebody else posts, well, not that you know of: you are interacting with these systems already, and the number of them that you'll be interacting with without even knowing it will increase in the future. Yeah. And I'm also not sure what people think they're achieving by having a no-AI stance. I remember I was having a debate, and have you ever had a debate with someone where you realize a couple of seconds in that, all right, this is probably not worth it?
His entire argument was, it's missing something; it doesn't have soul. And I'm like, oh, God, we're going to have that conversation. But I don't know. I think part of my frustration with all of the talk around AI is that everyone's speaking with a degree of certainty that they shouldn't have. This is a new thing. I don't know how this is going to turn out. I mean, we're all riding on this big wave.
It's like, who knows where this thing's going? And then there are other times where I almost feel like we're flying down the road and the headlights aren't working. There might be a turn coming up, but we don't know. You look at the guy next to you and say, hey, should we slow down? And the answer is, the Chinese aren't going to slow down. So we're just barreling down the road with no headlights; you can barely see. And sometimes it feels like that's where we are.
But I think that's what it feels like when you're in the middle of a transformational time period like the one we're living in right now. Well, the dynamic you just described, I think, is a terminal race condition. Yeah. Where you want to stop, you want to slow down, you want to proceed with more caution, but you're in competition with somebody who has the same competitive motive to cut every corner and push the pedal to the metal.
Yeah, I think it was Mark Zuckerberg who made that argument. He's like, well, the Chinese are going to do it anyway. I'm like, so basically that's going to be the standard now? It's like, are we going to stop drinking the poison? Well, the Chinese aren't going to stop drinking it. Yeah, we lived through that in the 20th century with nuclear weapons. We're going to spend how much to build these things that do what? And it's like, well, we have to, because the Soviets are doing it.
Yeah, that was the idea. I mean, it's still that way now. I would be willing to bet that we're going to have a lot of changes in our law to address drone overflights. I don't know if you've been keeping up with what's been going on in New Jersey, but they have these drones, and there's confusion: is it UAPs or is it drones? And there's a limitation on what the military can and can't do because of the Posse Comitatus Act.
So I'm pretty sure we have some law changes coming to allow the military to act on these things. I thought they had this ability already, but I started researching and found out, no, it's actually rather complicated.
And from other information security folks I know who are military: part of the reason overflights are sometimes allowed (and this infuriated a lot of people when the Chinese balloon was allowed to overfly the United States) is that although they're capturing data on you, you're also capturing data on them.
So it's a two-way data capture opportunity for both the aggressor and the defender. Unfortunately, these things aren't always explained in the press. But my understanding is that that's why they will sometimes allow surveillance overflights by these balloons: they want to do data capture. I would be interested in hearing how AI is changing your work in cybersecurity.
Well, I can tell you that a lot of the tools we're using leverage AI, and I'm using it to understand logs. A lot of times when you have to pore through a log file, I'll just dump it into the AI and say, hey, give me a summary of exactly what this means. And it's really helpful for things like that. It's also helpful for things like email security.
So we use certain tools that, if they come across an email they can't determine a verdict for, will kick it over to us for human review. And then we'll go through it, run it through Joe Sandbox and other tools like VirusTotal, to see what's wrong with it. And if we find something wrong with it, we can report back, or we can score it and come up with our own verdict.
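The triage flow described here — tools escalate emails they can't score, a human runs them through a sandbox and scanners, then records a verdict — might be sketched roughly like this. All thresholds and field names below are illustrative assumptions, not the behavior of any specific product:

```python
# Rough sketch of an email triage verdict: combine a behavioral score from a
# detonation sandbox with a multi-engine scan result (as with Joe Sandbox /
# VirusTotal) into a single human-recorded verdict. Thresholds are made up.

def email_verdict(sandbox_score: int, engines_flagging: int, total_engines: int) -> str:
    """Return 'malicious', 'suspicious', or 'clean' from two signals.

    sandbox_score    -- 0-100 behavioral risk score from a sandbox run
    engines_flagging -- how many scan engines flagged the attachment/URL
    total_engines    -- how many engines scanned it
    """
    flag_ratio = engines_flagging / total_engines if total_engines else 0.0
    if sandbox_score >= 70 or flag_ratio >= 0.25:
        return "malicious"
    if sandbox_score >= 30 or engines_flagging > 0:
        return "suspicious"
    return "clean"

print(email_verdict(85, 12, 60))  # high sandbox score
print(email_verdict(10, 2, 60))   # a couple of engine hits
print(email_verdict(5, 0, 60))    # nothing flagged
```

In a real pipeline the analyst's final verdict would override and feed back into the vendor's model, which is the training loop Kenneth describes next.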
And with all of the different users everywhere doing that, you're also helping to train their AI model to better identify things. In terms of AI threats, I have noticed that phishing emails have vastly improved in terms of spelling issues and grammar issues. The only thing they can't seem to get perfect is certain cultural things. So I saw a scam email a while back that talked about something called the US Department of Income.
And I'm like, well, we don't have a US Department of Income. So they're just not clear on what our departments are, which is odd, since that's public information. But yeah, we haven't seen, let me say I haven't seen, too many specifically AI-related threats, but we have done a ton of research on potential future threats.
So one of the things I did, I was involved in a project at work where we wanted to show, hey, could someone simulate the voice of someone in leadership and then call into support to try to get them to do something, like turn off 2FA. So I created a couple of different scenarios and played them to the board, and they were slightly horrified. And so we started to take some measures and put in some tools and policies to try to mitigate against this. But again, it's a moving target.
I think I also did another demo where the president of the company is explaining to employees why he wants them to follow local law enforcement directives to move away from the coast, because the creature is now attacking San Francisco. So I did Godzilla attacking San Francisco. And yeah, that was a fun project. It was fun to kind of terrify the board like that. But this stuff is really easy to do. And all of the voice samples I used were just captured from YouTube; he's all over YouTube.
I mean, I just captured his voice from YouTube videos, from the investment call, things like that. And that was enough. You don't need a whole lot. I think I used seven samples of his voice, and they were relatively high quality. And I was able to replicate his voice and make leadership say whatever I wanted them to say. It's not difficult at all.
And I think there was a case, a fake kidnapping and ransom case, where someone thought their child was being held for ransom, and the kidnappers were just simulating the voice. So I suspect that a new cultural norm will emerge soon where we will either not share our names or, and I already have this set up with my family, we will have certain phrases so that we can distinguish between ourselves and an artificially generated version of ourselves.
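The pre-shared family phrase idea boils down to verifying a caller against a secret that a voice clone can't know. A minimal sketch, where the whitespace/case normalization and the constant-time comparison are just illustrative choices:

```python
# Sketch of the "family passphrase" idea: verify a caller against a
# pre-shared secret phrase. hmac.compare_digest does a constant-time
# comparison, which avoids leaking information through timing.
import hmac

def normalize(phrase: str) -> bytes:
    # Collapse whitespace and case so spoken variations still match.
    return " ".join(phrase.lower().split()).encode("utf-8")

def caller_is_family(spoken_phrase: str, shared_secret: str) -> bool:
    return hmac.compare_digest(normalize(spoken_phrase), normalize(shared_secret))

print(caller_is_family("  Purple  Elephant ", "purple elephant"))  # True
print(caller_is_family("purple elefant", "purple elephant"))       # False
```

In practice the protection comes from the social protocol, keeping the phrase off the internet, not from the code; the snippet just shows how little machinery the check itself needs.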
You know, I've been podcasting since 2006 and I've done well over a thousand podcasts for various shows. I've been a guest on different shows. So my voice is out there. I know that anybody who wants to can clone it at any time. There's nothing I can do about that. You mentioned email. I've had the same email address since 1995. It now forwards to my Gmail address, and I have 302,572 unread emails in my inbox. There's so much stuff in my inbox.
I don't really go through it and read everything. I just sort of glance through it looking for familiar names, basically. Google is okay at putting this little yellow flag on things that it thinks might be legitimate or worth my interest, but it's moved from being, hey, this is suspicious, to hey, out of this huge tsunami of crap, here's a couple of things that you should probably open and look at.
And once in a blue moon, I'll go into my spam folder and find things in there that really shouldn't have been tagged as spam. But as far as I can tell, Google's getting pretty good at filtering this stuff for me, and I never see scam emails anymore. What I see a lot of are thirst-trap phishing messages, catfishing they call it, or pig fattening. Pig butchering. Coming in via my phone, through text messages.
And in one respect, you know, it's obviously annoying and infuriating. A lot of times the people who are doing this, it's like a sweatshop sort of thing. They've got a board with 40 phones strapped down to it, and they're just sending these automated, scripted messages to huge lists of numbers that they purchased somewhere.
But the people actually doing the work, a lot of times they're trafficked; they're not the scammers themselves. The scammers are their bosses. Yeah, I've seen those setups. It's the most cyberpunk thing you've ever seen. It's guys sitting around, like you say, in front of boards, exactly how it happens. And they're running these scams.
There are also situations, I was aware of one where an individual in India was running two businesses. One was a legit, service-desk-type business. The other was an illegitimate business, and they were in the same building, on the same floor, but divided by the two sides of the building. The illegitimate business was on one side, the legit one on the other. And this kind of stuff apparently is common.
I had a boss who used to work with Interpol to actually go and, you know, knock down doors. And he said that when you get to these places in Eastern Europe where they're operating, it's a business by every visual standard: you have to badge in, they have HR, there's catered lunch. In every way it looks like a startup company, but it's a scam company. The company just runs scams. So, there's this movie called The Beekeeper. I don't know if you saw it.
I wish I could say it was an exaggeration. It's kind of what those places are like. Probably not as glitzy, but it's pretty darn close. I haven't seen The Beekeeper, but I know it's a Jason Statham revenge flick, and I didn't know it incorporated a lot of the subject matter that we're discussing here. But the point I was trying to get around to, about these catfishing schemes always popping up on my phone — every single day, I get several.
It's actually kind of encouraging, because I can see at a glance what's happening, which means they're not that sophisticated yet. Eventually, there's going to come a time when they will fool me. Yeah, yeah. I think it's just going to be a back-and-forth kind of thing. It almost feels like Spy vs. Spy, where the protection methods will increase and then their attack techniques will change. And we'll just kind of go back and forth.
But yeah, it's going to be challenging, and it is challenging. How are we ultimately going to deal with it? I don't think that law is the way to do it. I think there are things that industry can do to address it, but law just moves way too slow. And I think a lot of the issues that we have are because different things are moving at different rates. So I kind of imagine it as a giant phonograph record that's spinning. Technology is almost at the center.
It's the thing that's moving the fastest, right? Then you have the general public. They're sort of in the middle. They're not too slow, they are aware of some things, but they're certainly not moving as fast as, say, those closer to the center of the record. And then you've got the law, which is way out on the edge. It's the slowest thing moving. And so all of these things, industry, politics, media, are moving at different rates.
And we keep expecting them to align, and they never quite do. And I think the other issue is that we used to treat all of these things separately. There was the realm of finance. There was the realm of technology. There was the realm of politics, et cetera. We had all of these separate realms. And now it feels like all of these things have converged, like you've taken all of the different colors of Play-Doh and mashed them together.
And it's very difficult to disentangle these things: finance and technology and politics are now all inextricably bound to one another. And I think we still think of them as separate things, but they're not. They've merged into this new thing that we don't really have a name for and haven't quite recognized. And even that binding is also changing. It's interesting to watch Hollywood undergo this very slow collapse.
It's like watching a star turn dark around the edges, just slowly starting to collapse. It's really odd watching all of these things happen. And it seems like it's happening faster than ever. I was talking to a friend of mine: doesn't this tech era feel like it's moving infinitely faster than what we remember from the 1990s? Timescales used to mean something. If we said three to five years, we had a fairly good idea about what was going to happen in three to five years.
I have absolutely no idea what's going to happen in the next five years. I wouldn't even dare to guess. And so all of these timescales that used to mean something now don't really mean anything. It's all been compressed by the sheer speed of technology. So a lot of times I'll catch myself in the middle of saying something like, you know what? I don't know. I don't know if that may be a possibility in the future. That's just where we live now.
We live in this place where the recognizable timescales we used to plan with, we used to say, okay, three years, five years, ten years, and we had a good idea as to what could be achieved in that time. Can you imagine anything in ten years? Because I can't. In very broad strokes, but certainly not in the specifics, and it's the specifics that people don't anticipate that turn out to be really important, you know, that define eras. I mean, those are the black swan events.
You are in the Pacific time zone. Are you in California? Yes, I am. So a lot of the big AI companies are headquartered in your state. And the heads of those companies collaborated with the California legislature to come up with an AI safety bill that was passed, and then your governor vetoed it. What's your experience of being a Californian and watching that happen? I mean, California has got a lot of different issues.
I actually worked for the state of California for a while, and what I found was that for every single role, at least where I was working, there were two people working: the state worker, and then the contractor who did the actual work. And everything I had ever been told about, well, now granted, the private sector isn't any more efficient. The private sector has its issues, too.
But my goodness, the amount of waste that I saw working in that position for the state. Risk-accepting everything, for example, and not touching anything, because people are terrified to touch the code base. They're terrified to patch anything because it could break the entire system. And you've got a code base that's over 20 years old. You've got systems that have been in place for years.
And so instead of solving problems, I found they just risk-accepted everything, which is not security. And I was so upset by what I saw there. I eventually left, but I was so upset by what I saw that I just wrote up everything that needed to be fixed, and how to fix it, and gave it to all of the folks in the different departments. And then I wrote a letter, I think at the time to Kamala Harris, strangely enough, to say this organization has some really serious issues.
And, you know, it deals with a lot of data from Californians. Something needs to be done. I got a sort of boilerplate letter back and never heard anything else about it. But yeah, I'm just not so sure that something like an A.I. safety bill is going to do it. What we need, and what we sort of have, are more private sector solutions, similar to bioethics.
When genetic engineering and these technologies started to emerge, people realized quickly, we need to set up an international bioethics infrastructure to deal with some of these issues and come to some agreements. We need something like that for A.I. And I think that in a lot of ways, we're sort of headed there.
But it's unclear to me that the law is going to be flexible and agile enough to keep up with where the technology is going. Are you familiar with the concept of accelerationism? I am. And when I looked into it, it was an immediate turn-off to me, because it just seems very anti-human.
In my opinion, the notion that if you can do something, you should do it, that we should be accelerating things toward some type of techno-economic termination point... I mean, a lot of damage can be done to real people in the real world along the way. So I don't know. I came across this, I think e/acc is how it's referred to. Well, acc by itself was originally a thing.
The main figure associated with that was a guy named Nick Land, who wrote in a very impenetrable style. But he ended up in a very anti-human place, saying basically, it doesn't matter if humans survive, we just need to push forward to this informational singularity and send the AI off to the stars or whatever to colonize the galaxy. Yeah. E/acc, effective accelerationism, is sort of the continuation of effective altruism.
Effective altruism was discredited in the FTX debacle, with Sam Bankman-Fried basically being the poster child for effective altruism, and, you know, he turns out to be a criminal. So a lot of the same people who were persuaded by that school of thought have sort of moved over to effective accelerationism, which basically says, for the good of humanity, we need to push the techno-capital lever as far as it goes and as fast as it goes.
And major figures there include Marc Andreessen, who has made some very tone-deaf statements about people who are not involved in tech, basically that they don't matter, and that he's grateful for video games and Oxycontin to keep them quiet and occupied and sort of out of the way. So yeah, go ahead. Yeah, I mean, that's a little history lesson there. But what's your experience of it? It's just a complete turn-off. I'm not interested in it at all. I encountered it...
I think the first time was on the Lex Fridman podcast, when I first heard about it. And then a friend of mine texted me, and it was so strange, because I think when he texted me, by his reaction, he thought I would be pro-e/acc. And he said, what do you think about this? And I had already seen it. I said, yeah, I've seen it. He's like, what do you think? I said, I think it's the most anti-human ideology I've ever seen.
And I could just tell there was this huge sigh of relief from him, even through text. Like, dude, I said, what did you think I was going to say? And he said, I thought you'd be in support of it. And I'm like, no. So that was the first time I came across it. But now, I mean, you know, Douglas Rushkoff pretty much killed the techno-utopian in me. I still think that technology is probably one of the best tools we have to improve the human condition.
I don't think that people really hate technology. What they hate are the business models that are wrapped around the technology. I think people would love to use this tech if it didn't track us and collect our data, things like that. It's not the technology, it's the business models. And many of the business models are pretty exploitative and kind of shitty. And I think that's the thing people really, really resent.
But it does make me wonder, and I might explore this in a story: what would a technological civilization like ours, at our level, look like, but without all the extractive data collection, all the extractive business models? What if they were things that actually added to the human project and didn't just extract things? Because it seems like we're moving to a future with this e/acc. I think it's E-slash-A-C-C. Yeah, e/acc is how it's usually styled.
Yeah. We're moving to this place where the only value humans will serve is to generate training data. And I just don't think that's what we are as beings. We're not things to be used to generate training data. We're human beings. And I just don't understand where this is going. I don't understand what world you would end up with, other than something close to the Borg.
I just don't see how their ideology would be in any way beneficial to humanity. When the first Avatar film came out, I think in 2009, I remember reading something from, I guess you'd call them a techno-utopian, or a technophile. Basically, somebody who is really contemptuous of spirituality and, like, deep green sentiments, ecological sentiments.
But they were saying, yeah, this whole scenario in Avatar of this planet where you have all these tall, beautiful, strong people who live in harmony with nature, and they all have ponytails that they can plug into the ponytails on animals and sort of interface with them, and they can plug into this great tree and become one with the mind of the planet.
But this was all created in the wake of a technological singularity where some superintelligence basically just created this sort of spiritual playground and populated it with these naive entities who think that everything they're doing is spiritual and organic and ecological. When in fact, it's all just a big construct that was created as a sort of paradise, an artificial paradise. And that's one sort of trajectory.
I know people who are genuinely anti-technology. It's not just that they hate capitalism. They do hate capitalism, but they also hate machinery. They hate cars. They hate planes. They hate computers. They just love biology.
And for somebody with that mentality, with sophisticated enough technology, you could create a paradise which seems to answer to all of their preferences and obscures from them the fact that it was provided by technology, the very technology that they hate. Yeah, I mean, I do think that spirituality, for humans, is kind of an unavoidable thing. And I think the reason it's unavoidable is because we die.
And because we die, there's this question that's there, which is, hey, everything that I am, does that just end when I die? Or is there some type of after-death state? And I think that as long as that question remains unanswered, the issue of spirituality is kind of inescapable.
And I sort of wonder sometimes, for people that don't have any kind of spirituality or belief at all, things must be pretty miserable, because you must spend a lot of time trying to get away from the idea of spirituality. I've just come to the conclusion that for humans, it's inescapable, so long as the possibility of death exists. As for the other issue that you brought up there... I don't know that it was an issue.
I was just reproducing somebody's argument that I had encountered a while back. I mean, you say spirituality is inescapable. And this is sort of a tired trope, but the people who are the most anti-religion and pro-technology tend to construct these very religious-seeming narratives about the future, about the technological singularity and uploading their consciousness and immortality through technology.
And it's basically replicating the psychological palliative of religion while holding the actual concept of religion at arm's length. Yeah, I would have to agree. There's a book by a guy named John C. Lennox that goes into this and shows how a lot of techno-utopianism really is just a surrogate for religious belief. It just is.
There's even an after-death state that's described, treating human consciousness as data that can be moved from one thing to another, so that you would just sort of live perpetually in some sort of cloud or some type of virtual state.
Interesting. But I think if you want to really get into a good exploration of that, John C. Lennox in his book 2084 did a pretty good job of going through what that argument is really about. All right. Another book for the very tall list of unread books that I'd like to get to someday. Yeah. All right. Well, we've been on for about an hour, so we should wrap it up.
But before I go, we've been talking a lot about artificial intelligence and technology and systems that involve economics and sociology and technology. But I'd really like to get back to science fiction. What are some of your foundational science fiction texts? What are the big books for you? Well, for me, it all kind of starts with Dune, when I read Dune as a kid. And I've read Dune so many times throughout my life. It's had a huge influence on my thinking.
I learned a lot about politics by reading Dune. I also learned a lot about power. And the thing is, you know, I'm not so sure that Paul Atreides is a hero, per se. Definitely not. He's taking advantage of a situation. But then he even sort of breaks from that, because his mother has a certain thing she wants to achieve, and he has certain things that he wants to achieve. But I actually learned a lot from Dune.
I haven't read the ones that were written by his son. I've kind of skimmed them. But I've read all of the core Dune novels. And yeah, a tremendous amount of influence has come from Dune. Also, you know, Old Man's War, John Scalzi's books, among the many books I read during the pandemic. Adrian Tchaikovsky's Children of Time and Children of Ruin. I really enjoyed those.
And then there are other books I read, like The Art of War, The Fourth Turning. Yeah, I read books other than science fiction. One good book I read a while back was by Ernest Becker, called The Birth and Death of Meaning. Really good book. So I try to read a lot of different kinds of things. But James S. A. Corey, and I know that's a pen name, I've read a bunch of those.
Recently I read Andy Futuro's No Dogs in Philly, which is a really gritty cyberpunk novel. I actually enjoyed it a lot. I'm going to get the second one. Also Nick Webb; his Legacy Fleet series I enjoyed a lot. The first book is a little rough to get through, but once you get through it, the series just takes off. I don't know if you've ever heard of Nick Webb, but check out his Legacy Fleet series, it's actually pretty good. And Cloud Atlas, I enjoyed that book a lot.
So I've got a pretty broad set of things I like. Sometimes I'll read autobiographies as well. The last one I read was Becoming Superman by J. Michael Straczynski, who's also a sci-fi writer. He's the guy that created Babylon 5. Oh, yeah. Yeah, he's written a lot of comics. That book really pulled me out of a pretty deep depression I was going through at the time. I had been laid off.
It was one of those situations where I was laid off about a year and a half out from fully vesting. So there were a lot of plans I had made for that stock money that just never happened. And I was doing a lot of reevaluation. And once I read that book, it had such a profound effect on me, because I'm like, man, if this guy could survive everything that he did, there's no good reason why I can't survive what I'm going through.
And it just gave me a lot of inspiration. But I try to read a lot of different kinds of things. Right now, I'm reading this book called Chronicles from the Future: The Amazing Story of Paul Amadeus Dienach. And that's kind of a weird book. Definitely check that one out. Have you been watching Dune: Prophecy? I'm saving them. I'm letting the episodes build up and then we're going to marathon them. But I've seen a couple of clips.
But yeah, I'm kind of holding off until we can watch them all. I look forward to it. Let me encourage you to calibrate your expectations. Really? Yeah, I've stopped watching it. I watched the first two episodes and didn't get through the third. I tried watching it on three different nights, and I was like, I just don't care. I don't care what happens. I don't care about these characters. Oh, wow. Any of this. Yeah. Oh, one I forgot to mention.
Walter Jon Williams, Hardwired. Yeah, and Richard Paul Russo's Destroying Angel. That had a huge effect on me in the 90s. I mean, that book just melted my brain. If you've ever read Destroying Angel, it's very cyberpunk. It's set in a cyberpunk San Francisco. And man, it's pretty intense. Well, I've listened to several audiobooks by Walter Jon Williams. What's it called?
It's about this empire, this multi-species empire, where the dominant species basically went extinct, and as soon as the last one died, all the other species sort of went to war with each other. Gosh, what was that called? It's at least six books, two trilogies and maybe more. I've spent a lot of time in the mind of Walter Jon Williams over the last couple of years. That was pretty cool. Yeah. Do you like David Mitchell? He's the author of Cloud Atlas. I have never read any David Mitchell.
Yeah. Now, that guy, the thing that's trippy about that book is that he completely changes his writing style from story to story. And there comes this point where you're like, wait a minute, and you thumb back in the book. You're like, what's going on? And it takes you a couple of seconds to figure out why the voice is like this. He completely changes his writing voice from story to story. It's quite amazing.
Yeah. The big names in sci-fi for me tend to be older ones, I guess, like Ursula K. Le Guin. I used to read a lot of Larry Niven books. They're very pulpy sci-fi adventures in space, and you know, I have a love for that. I also love Iain M. Banks, the Culture novels and the other space opera stuff that he did. I haven't read any of his literary fiction, and maybe never will. The Culture stuff is just so perfect for my personality and interests.
Yeah, I've heard of the Culture novels and I do want to read them, but I want to read them after I'm done with mine, because I don't want to be influenced by them. So yeah, I do plan on reading them. Because I've been told by people that have read my book, they're like, this is a lot like the Culture novels, because I do have this expansive human diaspora that has sort of spread itself across the galaxy.
And there's been a lot of cultural and genetic drift, because cultures have had to adapt themselves to those different planets. And those genetic adaptations tend to build over time, and you just end up with very different-looking humans. Well, if people are comparing you to Iain M. Banks, take it as a compliment. Yeah, I mean, I don't think I'm on that level yet, but I'm certainly shooting for it. Neal Stephenson is interesting.
I read Snow Crash. Yeah, that's an interesting guy. The books are a little long. I haven't read everything; I did read Termination Shock. So I'm kind of looking through my book list here. Yeah, Neal Stephenson was pretty good. Do you like John Scalzi? You know, I've heard of Old Man's War, but I haven't read any Scalzi. Yeah, Old Man's War was pretty good. I read, God, I think I read them all. The Collapsing Empire was pretty good as well.
It's a different series, but I think there are three books. There's The Collapsing Empire and, what was it, I think The Last Emperox, and a couple of others. But that's a series you might want to check out. All right. Right now I'm listening to the audiobooks of, what are they called, the Draka books. It's basically, it starts out as an alternate history.
Elements from the Confederacy end up taking over South Africa, along with some people from Nordic countries and some Brits. And they create this English-speaking, overtly racist empire, and they become expansionist. The European countries go to war with one another and fight themselves into utter weakness. And then the Draka just come in and take over.
And basically their position is, we cannot countenance any competing political or social system. So world domination is their objective. Well, what is the name of this? The first book is called Marching Through Georgia. And the second one is Under the Yoke. That's the one I'm in right now. But I know there are three more. And right now, where I am in the book, it's in the late 1940s of this alternate history.
But I know that it advances into the future, into space. And eventually, somebody who is a descendant of these Draka, who is genetically engineered and just really ruthless and malevolent, comes over to our world and is trying to open a pathway from their world to ours so that they can basically invade and impose their cultural system on us.
Yeah. So I'm really looking forward to those, although I'm told the later books are not as engaging as the first ones. But the first ones are really, I mean, it's one of those things where as soon as I finish one book, I'm on to the next. There's no temptation to go to any other series right now. Oh, wow. Yeah, I'll have to check that out. That's how the Scalzi books were for me.
I mean, it was the pandemic, of course, and I was doing anything to just keep my mind occupied at that time. But yeah, I'll have to check those books out. All right. Well, hey, Kenneth, it was good talking to you. Yeah, it's good talking to you as well. And you know, I'll keep following you on Substack.
And anyone that's interested in my books can go to books2read.com slash Kenneth E. Harrell, and if you want to see my Substack, you can just go to Substack at Kenneth E. Harrell, and I will post a link. All right. It was good talking to you. Yeah. Take care. All right. That was Kenneth. My New Year's resolution for 2024 was to publish to Substack twice a week, on Tuesday and Thursday.
And with a couple of misses, I kept that up all year, until November, when I cut down to just Tuesday so that I could focus on writing fiction for National Novel Writing Month, or NaNoWriMo. But now that it's December and I'm writing twice a week again, the Thursday post is feeling kind of forced. So I think I am going to stick with a once-a-week publishing schedule for 2025, but add in a weekly podcast conversation of some type.
Now, the most unappealing part of podcasting, at least for me, is the producer's role, which is to say, keeping the pipeline full. So if you are a person on Substack who is interested in science fiction or A.I. or future technology, or anything that would seem to fit in the context of a show or a blog called Gen X Science Fiction and Futurism, feel free to contact me.
We can talk, and if you've got a book or something else you'd like to promote, well, I'll be linking to your Substack. And I have to say, I am pretty Substack-centric these days. I got kicked off of Facebook years ago. I tried to establish self-promotion habits on Instagram and X and things like that, and I just don't use those platforms. I don't care about them. They're not engaging to me, not naturally enjoyable, and I just tend to neglect them.
Substack, that's pretty much where I'm at in terms of online activity these days. I like the place. I'm going to focus on it. And I'm also going to use it as my primary means of networking and finding people to talk to for this podcast. All right. I'm out. Have a great day. Thanks for everything. OMEGA.