Support for this podcast comes from Is Business Broken? A podcast from BU Questrom School of Business. What is short-termism? Is it a buzzword or something that really impacts businesses and the economy? WBUR Podcasts. Boston. Alright, now I'm gonna bring you a story. Wait, I'm bringing you a story. Didn't you say that you were going to be the dessert? No, you're going to be the dessert. I'm the amuse-bouche.
Did I not say that I would be the amuse-bouche, and you would bring whatever you were going to bring? And then you said, I'll be the appetizer, the salad, and the main course, all of which does come after the amuse-bouche, as you should know. I don't know, but great, everybody's shaking their heads at me. The amuse-bouche comes first. It does. The amuse-bouche is the amusement of the tongue, or the amusement of the bouche. It's the amuse, which is French, for the amusement of the bouche.
This is getting worse and worse. All right. Allow me to amuse the bouche. Okay. Artificial intelligence? Never heard of it. So, do you know what the latest supposed tell of artificial intelligence online is? I only know because you told me earlier in the week, I think, before we knew that we were going to do this. Okay. Yeah, that's true. I was horrified when you said this. You were, right? Yeah, I want the listener to have the same experience that I did. Do you dabble in the dash?
I dabble left and right, morning to night, in the em dash. Okay. So the em dash is supposedly, these days, a tell of a post upon Reddit, or the internet more widely, that has been composed by ChatGPT. I'm Amory Sivertson. I'm Ben Brock Johnson, and you're listening to Endless Thread. Coming to you from WBUR, Boston's NPR. Today's episode: the robots are taking over.
This is something that apparently started in November of 2024, so it's been going for a while. But what I will say is, in the last couple of months in 2025, I have been watching a lot of posts about the em dash. Again, the idea here is, there's been an explosion of people asking ChatGPT to compose posts, stories, things that are likely to go viral, and then posting those online for internet points, to karma farm, to basically get attention. And, you know, I think there are some other things that are happening too, where, you know, in a lot of, especially the sort of, like, drama-based subreddits that we, that I peruse, there's a lot of relationship posts that immediately get called fake by the commenters, who are basically like, oh, this was written by AI, and here are the reasons.
If you click on this link that I just sent you. Okay. Oh, it looks like DNA. So what this is, is it's an x-axis and a y-axis. It's a graph of the use of em dashes in original posts on some very large subreddits, and the use of em dashes on posts on these subreddits has, as you can see from the graph, exploded in the past few months. Yeah. So this post got about 7,000 upvotes, lots of comments. This is by DeltaVZerta. They say...
LMAO, this is pure gold. The em dash conspiracy is actually real. As someone who's been lurking on r/startups for years, I've definitely noticed this trend. AI content has this weird writing style with em dashes everywhere, em dash, like this, em dash, and it's getting more obvious every month. The graph perfectly captures what's happening. Look at that hockey stick growth from August onward. Oh my god, I know where this is going. You can literally spot the AI posts now. Unnecessarily formal tone. Em dashes everywhere for no reason. That weird, quote, I'm-pretending-to-be-helpful-while-subtly-promoting-something vibe. The next time you see a post like, as an entrepreneur, em dash, with 10 years of experience, em dash, I found that the key to success, em dash, in this competitive landscape, em dash, is to focus on customer acquisition, you know what you're dealing with, LOL. Someone should make a browser extension that flags potential AI content based on em dash frequency. Would save us all a lot of time. That's actually a really good idea. What do you think? Em dash: is it real or is it not real? The em dash conspiracy. How do you feel in this moment? The vibe I got from this comment: this is AI. DeltaVZerta is AI. The call is coming from inside the house.
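For what it's worth, the browser extension the commenter imagines boils down to a simple frequency heuristic. Here's a minimal sketch in Python; the threshold value and the scoring are illustrative assumptions, not a validated detector, and, as the episode goes on to note, plenty of human writers use em dashes heavily:

```python
# Toy sketch of an em dash frequency heuristic. The threshold is an
# arbitrary, illustrative cutoff, not a real measure of AI authorship.

EM_DASH = "\u2014"  # the em dash character

def em_dash_rate(text: str) -> float:
    """Em dashes per 100 words of text (0.0 for empty text)."""
    words = text.split()
    if not words:
        return 0.0
    return 100 * text.count(EM_DASH) / len(words)

def looks_suspicious(text: str, threshold: float = 3.0) -> bool:
    """Flag text whose em dash rate exceeds the (arbitrary) threshold."""
    return em_dash_rate(text) > threshold

# The hypothetical post from the comment above: four em dashes in ~20 words.
post = ("As an entrepreneur\u2014with 10 years of experience\u2014I found "
        "that the key to success\u2014in this competitive landscape\u2014is "
        "to focus on customer acquisition.")
print(looks_suspicious(post))  # the dense em dash usage trips the flag
```

A heuristic like this would flag plenty of careful human prose too, which is exactly the objection raised later in the episode.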
All right. Yes. And it was not the em dash that gave it away, because obviously they're explaining the, like, writing style with em dashes everywhere. It was this: the graph perfectly captures what's happening. Look at that hockey stick growth from August onward. This just read like bot speak. And then I cheated and scrolled a little further down, because someone responded to them. And then DeltaVZerta said, by the way, you're responding to AI. As a rule, I don't use AI for Reddit posts, but I thought it would be ironic for this one. So... Spotted! Detected! Detected. What does this all make you think about? Well, I'm thinking about anything that I've written recently that has had em dashes, and now I'm very concerned that the recipient thought I was a bot. But I'm not a bot.
I'm a real girl. It's hard for me to, I guess, put my finger on what made this feel like AI to me. What stands out to me is not necessarily what would stand out to somebody else, and collectively, what stands out to all of us is what will save the day. So I guess I'm glad to know about it, and I'll definitely think twice before I em dash. So I want to direct your thoughts towards a writer named Adam Cecil. He has a podcast, it looks like, and also a site called Night Water. And Adam has a post called You Can Take My Em Dashes From My Cold, Dead Hands. Which, Adam, I couldn't agree more. Adam points out that the problem with this concept is that plenty of writers, artificial and human alike, use em dashes, including yours truly. So I don't think we're clear yet on whether or not the em dash is a real sign of AI.
But what is clear to me, and just, again, sort of connects to the general problem that we're all feeling these days, is like, what's real and what's fake? And do we really have to expend mental energy figuring that out? I'm tired, man. I'm so tired. I'm just tired. Do you know what I mean? Absolutely. But you know what's gonna pick me up? A little break? A little break. We'll take a little break. And I'm ready for a salad. Or the appetizer. After this em-amuse-bouche. Okay.
Appetizer, starter salad coming right up after a break. Support for this podcast comes from Is Business Broken? A podcast from BU Questrom School of Business. A recent episode explores the potential dangers of short-termism, when companies chase quick wins and lose sight of long-term goals. I think it's a huge problem because I think it's a behavioral issue, not a systemic issue. And when I see these kinds of systemic ideas of changing capitalism, it scares me.
Follow Is Business Broken wherever you get your podcasts, and stick around until the end of this podcast for a sneak preview. Okay, we're back. Ben. Amory. Have you heard of the Redditor Genevieve Strom? I was trying to come up with a nickname for Genevieve Strom, just to be like, yeah, of course I know my girl Strone. It's Strom, so you clearly don't. Exactly. I was like, I don't know how to do this. No, I have no idea. Who's Genevieve Strom? Strom? Okay. Strom. Who's Genevieve Minestrom?
I'll get there. I'll give you one more chance. Have you heard of the Redditor Spongermaniac? No, I don't think so. Okay. I'm like, Poem for your Sprog, WarLizard. That's my... Okay, well, these two Redditors were involved in a... a scandal, really. Scandal almost doesn't seem like a big enough word for this, but this happened in the Change My View subreddit. Okay. There's a big post, a meta post, made by the mods in late April. Okay. And here's an excerpt of that post.
The CMV mod team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. Oh, yeah. Yep. I do know about this. I do. Yes. Please continue. They go on: This experiment deployed AI-generated comments to study how AI could be used to change views. So you have Change My View posts, things like, popcorn is disgusting, change my view. That's a very stupid example, but you know what I mean. And then these researchers created AI users, like Genevieve Strom and Spongermaniac, among many others, to comment on these posts with AI-generated hot takes of their own, or attempts to change the OP's view. The mods write: CMV rules do not allow the use of undisclosed AI-generated content or bots on our site.
The researchers did not contact us ahead of the study, and if they had, we would have declined. We have requested an apology from the researchers and ask that this research not be published, among other complaints. So. Wow. This is an excerpt of what the University of Zurich researchers wrote to the CMV mods. Okay. They say, in commenting with these fake accounts, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful. So bots are writing them, they are reviewing them, but then they get posted on the subreddit. Yep.
People are up in arms about this. There's a big Atlantic piece by Tom Bartlett written on it. He quotes Amy Bruckman, a professor at the Georgia Institute of Technology, who refers to this as the worst internet-research ethics violation she has ever seen, quote, no contest. Reddit's legal team has gotten involved. Ben Lee, who's their chief legal officer. Oh, boy.
NPR just covered this as well. It's gone stratospheric. It's gone stratospheric. It's only in the last couple of days, as of when we're recording this, that the researchers apologized, though. Well, I was going to say, was it a spicy researcher response? Because I love a spicy researcher response.
They're not in a spicy place. They're in an uh-oh-we-screwed-up place, because they're facing legal action. AI companies have licensing deals with Reddit. OpenAI has a licensing deal with Reddit to be able to train its programs on Reddit posts. And so the idea that researchers would be doing that without a deal and without consent has people pissed off. And OpenAI, I should say, did do something similar. They used Change My View posts and had AI models write replies, but in a closed environment, apparently, so they weren't actually putting the replies on posts. They were coming up with replies, and then they were showing those replies to, like, designated testers, who were looking at those replies and then telling them how persuasive they found those arguments, or not. And I think part of the reason why people are so upset about this is the nature of the posts that they were having AI comment on. Just, like, really controversial political hot takes, things about sexual assault, areas that are already super muddy waters, where you do not necessarily want fake humans weighing in. Yeah.
So the researchers have written an apology and realized just how big of a deal this is. And so the first part of the apology is all just, you know, the disappointment, the frustration, profound sense of personal sorrow. And then they say, we want you to know that we've taken this wake-up call seriously. In that spirit, we have already implemented the following measures. We have permanently ended the use of the data set generated from this experiment.
And they have, in case it's not clear from that, they did say that they will not publish this research. Wait, so they've ended it? What does that mean? They have ended the use of the data set. That's true, the use of the data set. So they've said they're not going to publish it, but they haven't said that they've deleted it. That seems like a weird phrasing, but yes, please go on.
It says, we will never publish any part of this research. We commit to stronger ethical safeguards in future research. Going forward, we will only consider research designs where all participants are fully informed and have given consent. In order to rebuild trust with Change My View, and to further demonstrate our sincere regret, we declare our willingness to collaborate, at no cost, with the subreddit to develop systems that can promptly detect and block unauthorized interference, and can support the development of a clear framework for handling violations. Oh, I'm sure that'll go somewhere. Would you like to work with us? We promise we'll do it right next time. They also said, we respectfully request
that our anonymity be preserved to protect the safety and privacy of our families. Boy, oh boy. Yeah, well, the lead researcher was named and quoted in the Atlantic article and said that these researchers were receiving death threats.
Well, there's a problem right there. To say it has rubbed people the wrong way is truly an understatement. And the mod response to the researchers' apology was: While we appreciate the offer, we have already made arrangements with other groups, and Reddit admins have proactively made changes to the platform.
So it's not totally clear what those changes are, and what the changes within the subreddit will be, and what legal action, if any, will be taken. But this was clear: a line has been crossed. And a very large Reddit community has maybe been further alerted to the fact that the bots are there. They're swimming in their waters. It's harder to tell than ever, and they're going to weigh in on things that it seems really icky for bots to be trying to change or shape our opinions on. Those bots be astroturfing. They do. They do. And the karma count for the LLM research team's user account, which was made so that they could respond to some of the response... Uh-oh.
Negative 100. I kind of expected it to be even lower. I don't think I've ever seen... I've never seen karma in the negatives. I know it can happen, but wow. So I guess a question I have about this is... no, it's more of a comment than a question. You're allowed to comment. The comment I have about this is, so, like, if folks will remember, which maybe they won't, but we made a series about chatbots.
In one of those episodes, we talked to a researcher at Dartmouth who was scraping Reddit content to feed into an LLM that was then going to decide if certain users might be displaying signs, through their Reddit commenting, of mental illness. Right? Yep. And so I think what's interesting here is the sort of ethics of using data from a website like Reddit that is public-facing, and figuring out how to do that in the right way.
You know, I think there's, like, we're in this kind of, like, Wild West time of, like, everybody's experimenting with AI. But, like, if you're a Reddit user, you know what you don't want to do. You don't want to be talking to bots. You just don't. You want to be talking to other people. If you're an internet user, that's what you want. You don't want to be commenting on fake posts. You don't want fake bots to be commenting on your posts. You just, like, don't want that, if you believe in this sort of basic idea of the internet as a place where you can connect with other human beings, right? And so it's like, if they had stopped and thought about this for a second, I think they could have used their power of logic to understand that people don't want that, and that the community would react negatively to that. If you're trying to manipulate a community of real humans by inserting a bunch of robot comments, like, that's just a bad look. Like, go ask a six-year-old if that's a good idea. Do you know what I mean? Not to give these people too hard a time, but I'm just like, dude, pause for one second and decide whether or not you want to enter into a public space where a bunch of people are trying to talk to each other and have a bunch of robots in there without disclosing that they're robots.
That's just not a good idea on its face. So I just think it's kind of weird that the researchers decided to do that. Well, I have some thoughts about that, because OpenAI having an agreement with Reddit to scrape its user information, posts, and things like that to train its AI... it makes sense that they would pay for that access, and yet that's not necessarily what Redditors signed up for. Researchers doing research might think that they have kind of a pass on that, because they're saying, this is for research. This is not to train our, you know, tool at a private company. This is for research. Yeah, but, like, you know. I'm not defending it. I'm not defending it. Another comment that that professor that I quoted, Amy Bruckman at the Georgia Institute of Technology... Uh-huh.
Another comment that is included from her in the Atlantic piece is this concern that the uproar around this, the kerfuffle around this, is going to undermine the research that really should be done into how AI influences the way that humans interact. And that's not to say that this is the way to do it. It's not. You can't experiment on non-consenting people. And you are absolutely right. Redditors do not want to talk to bots. They're there to talk to other humans. And so this is the wrong way to do it, but this is research that, done correctly, in a consenting, non-icky way, I do think is really important to do, and it is sad that this was executed in this way. Because I do worry that people are going to balk at the idea of research like this being done, because it's been done so poorly. But I guess, like, okay. Yes, okay. I'm just going to play the role of the cranky person here for a second. Go for it. And I'm cranky too, but go for it.
I would love for someone to come and pay me a lot of money to tell you whether or not it's a good idea to have bots shift people's opinions. I think that we don't need research on that. The answer is no, it's not a good idea. Bots are going to shift people's opinions no matter what, though. Like, that's going to happen. And so if there's something to be learned about how to keep people from falling down particular rabbit holes, I think that can be valuable. That's not to say that this research was ever going to accomplish that. It wasn't. But that's what I mean. I think there's, like, a fallacy in there for me, where it's sort of like, yeah, we've got to research this to see if it works. And it's like, I don't care. It works. We know it works. It's already been done. And also, it's bad. So let's throw it out. We don't need the bots changing our minds. I appreciate that perspective. The Cranky Ben perspective.
Well, we're working on a few episodes right now that have to do with AI. Yes. And there was someone quoted in a piece that I was reading as part of my research for that other episode, where I think the quote was something like, using AI is not going to be optional in the future. It's just going to be a part of life, the way that using the internet right now, we would say, is not optional. I don't know. I feel like more and more people these days are like, you know what? It was a mistake. Let's put it in the trash. So I'm like, yes, I agree with you, and also, we could decide to not do it. Yes. I mean, I take your point.
I was reading a TechCrunch article about the OpenAI version of this experiment with Change My View, where they were having AI write the posts, but they weren't being surfaced for actual Redditors in the threads to potentially respond to. And I think they said that their goal was not to create hyper-persuasive AI models, but instead to ensure AI models don't get too persuasive. And that, to me... my, like, bullshit meter goes off a little bit there, where I'm like, I do think we need to research this stuff, but to say we're doing it so that models don't get too persuasive, when you're OpenAI and you are absolutely trying to create technology that is as human-like and convincing as possible... I'm not sure that I buy that coming from OpenAI. I might buy it coming from a team of genuine researchers who have consenting participants. Genuine independent researchers. Genuine independent researchers. Then again, what are we cutting, slashing funding left and right to? Genuine independent research and university research right now. You're right. You're right. And I was going to try to say that too. Like, I'm not... I'm pro-research. I'm pro-science.
Full stop. I'm in favor. I just, every once in a while, I just want to say, your scientists were so concerned with whether or not they should... no, whether or not they could. They didn't think about whether or not they should. Ruff, ruff, ruff! Anyway, I thought this might be thought-provoking. No, it was. Were your thoughts provoked? They were provoked. Oh, God. Are you a robot? Oh, no. I don't know, em dash. Am I? Oh, no! No! No! We've lost her!
This episode was co-hosted by me, Ben Brock Johnson, real, 100% grown, grass-fed human. It was produced by Frannie Monahan. Our sound designer is... Our managing producer is Paul Vaitkus. Our show is edited by me. The rest of our team is Dean Russell, Grace Tatter, and Emily Jankowski. If you've got an untold history, an unsolved mystery, or another wild story from the internet that you want us to tell, hit us up: endlessthread@wbur.org.
Support for this podcast comes from Is Business Broken? A podcast from BU Questrom School of Business. How should companies balance short-term pressures with long-term interests in the relentless pursuit of profits in the present? Are we sacrificing the future? These are questions posed at a recent panel hosted by BU Questrom School of Business. The full conversation is available on the Is Business Broken podcast. Listen on for a preview.
Just in your mind, what is short-termism? If there's a picture in the dictionary, what's the picture? I'll start with one ugly one. When I was still doing activism defense, as global head of activism and defense at a bank, so defending corporations, I worked with Toshiba in Japan, and those guys had five different activists, each one of which had a very different idea of what they should do right now, like, short term. Very different perspectives. And unfortunately, under pressure from the shareholders, the company had to go through two different rounds of breaking itself up, selling itself, and going for shareholder votes. I mean, that company was effectively broken, because the leadership had to yield under the pressure of shareholders who couldn't even agree what's needed in the short term. So to me, that is when this behavioral problem, you're under pressure and you can't think long term, becomes a real, real disaster. Tony, you didn't have a board like that. I mean, the obvious ones... I mean, you look at, there's quarterly earnings, we all know that. You have businesses that
will do everything they can to make a quarterly earning, right? And then we'll get into analysts and what causes that... I'm not even going to go there. But there's also... there's a lot of pressure on businesses to, if you've got a portfolio of businesses, sell off an element of that portfolio. And as a manager, you say, wait, this is a really good business. Might be down this year. Might be. But it's a great business.
Another one is R&D spending. You can cut your R&D spending if you want to, and you can make your numbers for a year or two, but we all know where that's going to lead a company. And you can see those decisions every day, and you can see businesses that don't make that sacrifice. And I think in the long term they win.
Andy, I'm going to turn to you. Maybe you want to give an example of people complaining about short-termism that you think isn't. I don't really believe it exists. I mean, again, I don't really even understand what it is. But what I hear is, we take some stories, and then we impose on them this idea that, had they thought about the long term, they would have behaved differently. That's not really science.
Find the full episode by searching for Is Business Broken wherever you get your podcasts, and learn more about the Mehrotra Institute for Business, Markets, and Society at ibms.bu.edu.