
Fired By Artificial Intelligence (AI)

Sep 17, 2021 · 34 min · Season 1 · Ep. 23

Episode description

Remember that movie where George Clooney flew around the world firing people so their actual bosses didn't have to? We were all aghast to learn that companies could outsource such a thing. Then we pictured ourselves as absurdly handsome jet-setters with super-elite frequent flyer status and wondered where we could sign up. Sadly, the golden age of fire-for-hire has passed us by. Suddenly we're seeing computers not only break the news, but actually make the decision to fire employees. What could possibly go wrong?

Come visit us at busynessparadox.com to see episode transcripts, blog posts and other content while you’re there!

Transcript

Frank Butler  0:00  
Hello Busybodies, welcome to another episode of the Busyness Paradox. I'm Frank Butler here with Paul Harvey. 

Paul Harvey  0:07  
Good day. 

Frank Butler  0:08  
And on today's episode, what do we got?

Paul Harvey  0:12  
We got bot firing 

Frank Butler  0:14  
Bot firing, what the heck 

Paul Harvey  0:16  
You ever been fired by a bot?

Frank Butler  0:17  
I have not been fired by a bot 

Paul Harvey  0:19  
Me neither. 

Frank Butler  0:20  
I think I've been annoyed by a bot on Twitter before, but...

Paul Harvey  0:22  
I think I've lost an argument with a bot on Reddit before but haven't been fired by one yet. I look forward to the opportunity. But a lot of people have, apparently.

Frank Butler  0:33  
Really?

Paul Harvey  0:34  
Indeed. At least, employees of Amazon have been getting fired by bots for a few years now. Most recently in the news, we've seen complaints from delivery drivers experiencing that unfortunate fate, finding out, apparently by email, that their services are no longer required, goodbye, without any real recourse. I guess there is an appeal process, but at least based on the accounts that are showing up in the press, it doesn't seem to go in the ex-employee's favor very often.

Frank Butler  1:07  
It's odd, just the thought that you get fired by email. I don't know. To me, it just sounds like, how can we be less human to our humans in the company?

Paul Harvey  1:20  
I'm wondering, you know, I'm not a big fan of email, and I sometimes go longer than I should without checking my email. I could see myself being fired from a job and still showing up for, like, a week before I finally check my email and say, oh, wow, I got fired 10 days ago. Damn.

Frank Butler  1:35  
You probably wouldn't notice until you get your paycheck, right? It's like, wait, this is only two weeks' pay.

Paul Harvey  1:40  
There'd probably be signs along the way. How come I stopped getting paid? My passwords don't work.

Frank Butler  1:46  
Oh, I don't know, man. Yeah, I don't even have words to describe how frustrating this is. It goes counter to the whole idea of treating humans like humans. You're fired by email.

Paul Harvey  1:59  
It does a bit, but you know, this seems to be a recurring theme lately. You could say this is another example of our kind of advice being taken too far. We're always saying focus on the outputs, focus on the results, not the amount of time spent working, time at the desk, all the little proxy indicators of performance. We're always going on about this, and this is purely outcome- and results-focused, right? They're being evaluated by a computer, based on raw numbers, the ultimate embodiment of what we keep preaching about focusing on results. Indicating, once again, that too much of anything can be problematic.

Frank Butler  2:36  
That's true. I mean, you're absolutely right. We talk frequently about how we should become more focused on output and less focused on time spent, although I think in this context, it's the time that's sort of driving some of the firing, right?

Paul Harvey  2:53  
The time in terms of deliveries, you mean? 

Frank Butler  2:55  
Yes. 

Paul Harvey  2:56  
Like, how many deliveries they're doing? 

Frank Butler  2:58  
Yep.

Paul Harvey  2:59  
How many are on time? How many are... I can't remember the different metrics they have. They're all valid things, you know, adherence to customer requests and things like that. They're all things that are important to customers. So what's the problem? It sounds like, hey, that's great: they've identified the relevant performance metrics, they're accurately measuring those metrics, and they're making decisions based on them. What could go wrong?

Frank Butler  3:21  
What could go wrong? If you're fired by

Both  3:23  
Get fired by email?

Paul Harvey  3:25  
What goes wrong?

Frank Butler  3:25  
Yeah. So what's interesting is that I believe Bloomberg is the one who spilled the beans on some of this more recent firing. It's an article from June 28, 2021, and they interviewed 15 of these drivers. The story was picked up from one of them being fired by email. The person was a veteran, and he's like, I give 110% no matter what, I'm old school, I was doing my job, I didn't do anything wrong. But of the 15 drivers they interviewed, four were wrongly terminated, or so they claim. They're saying that the largely automated system is insufficiently attuned to the real-world challenges drivers face every day. Amazon knew delegating work to machines would lead to mistakes and damaging headlines, but these former Amazon managers said Amazon decided it was cheaper to trust the algorithms than pay people to investigate mistaken firings, so long as the drivers could be replaced easily.

Paul Harvey  4:31  
And that's the recurring theme you see through this case and in a lot of other situations we've talked about: whenever there's a ready supply of people ready and willing to take the job that's been vacated by someone who's wrongly terminated, or quits out of frustration, there's very little incentive to fix those problems. If you're that easily replaceable, employee retention becomes desirable but not the necessity it would be otherwise.

Frank Butler  4:56  
It's taking the supply-and-demand logic from economics and applying it to people, which is sad in its own way, if you really think about it. I mean, we're talking about people who are trying to earn a livelihood, trying to pay rent or take care of their families. And I've heard the stories associated with these Amazon drivers that included them carrying plastic bottles and jugs with them so they could go to the bathroom as they're driving, because they would get fired so easily.

Paul Harvey  5:24  
In the warehouses. 

Frank Butler  5:26  
That's taking it too far. 

Paul Harvey  5:28  
Yeah. 

Frank Butler  5:28  
And I think in some states probably violates their required break rules

Paul Harvey  5:32  
Oh, I'm sure it violates a lot of things. Now, Amazon has always pushed back against those claims, and who knows what's true and what isn't. But it's a little bit frustrating to me, because I think there's a lot of potential here being pissed away by this kind of misuse. If you think about this sort of artificial-intelligence-driven assessment, or employee evaluation, there are a lot of reasons why this could be a good thing, not just the fact that it's primarily focused on results and outcomes. Although this sort of system probably could be designed in such a way that it evaluated people based on face time and time spent at a desk and those silly things, I think it's kind of telling that, as far as I know, no company has gone down that road with AI. When you're developing an assessment system like this, you're gonna focus on the things that matter, not the silly things, which is where the purely human-based employee evaluation can be so problematic, with perceptions being biased and influenced by silly behaviors and someone's appearance and all that stuff we talked about in the perceptions episode. And there's so much you can do with this. It's much more objective than humans are, much more accurate, much more efficient. It's not as disruptive: you don't have to take an hour out of your workday every quarter or year, whatever it is, to do these awkward assessments, these employee evaluation meetings with your boss. Not to get all academic, but in some of my research on psychological entitlement, one of the things we keep consistently finding is that the more objective the feedback is, the more concrete the performance results are, the less biased employees can be. In terms of, you know, you've got one employee who thinks they're great, but their boss and everyone else don't really see them that way, it's hard to fight the wrong battle when you've got the numbers sitting right in front of you. And these systems can be designed, and they often are, so the numbers come at you in real time. So you know what your performance is as you're performing. There's no waiting till the end of the year to find out if your boss thinks you've been performing at a high enough level or whatever. You can know every day, in the moment: okay, here's how I did today, here's how my coworkers did today. There's a lot of good, is what I'm saying, that could come out of this. But this kind of misuse of it is really giving it a bad name.

Frank Butler  7:57  
Well, this type of use is certainly very extreme, I would say, in how it's being applied. And in this context, if you're not taking into account the real-world challenges that people are going to face every day, whether it's traffic, because you can't predict an accident, or having to stop and use the restroom, whatever it is, it's creating an almost impossible, Sisyphean sort of environment, right?

Paul Harvey  8:24  
Potentially. You could make the argument that an AI system will, over time, learn to take those factors into account in some way. But the problem is the learning curve: every mistake that gets made is someone's job. We usually think of AI machine learning happening in kind of a low-consequence environment while it's figuring out all the nuances. There are limitations to that. But without human involvement, even at the end of this long learning curve, mistakes like that are always going to be made. Because, like you say, there's always going to be more stuff that no existing computer, algorithm, simulation, whatever you want to call it, can account for. Black swan events, you know, the stuff where a driver gets hit by a meteor fragment or something on the way to do a delivery. There are always going to be those one-offs that need human intervention. And really, anyone who's remotely legit in the artificial intelligence world, from what I've read (I'm not one of those people), no one really thinks AI is ever going to be entirely its own thing. There's always going to have to be a human, whether you call it a moderator or, you know, someone there to augment it.

Frank Butler  9:40  
Humans gotta have their hands in the pot, right? I mean, truly, because it's going to help shape what that decision means, or what that information means, to help with the system, especially in the case of these one-offs. You used the one-off example of a meteor hitting the truck, right? Think about that. It's one of those where the human can step in to say, well, we know that it's going to lead to this outcome, even though the system might not predict it. And that way you get more accurate output, right, because you're having a human get involved saying, this is probably going to be a lot worse than what you're predicting based on prior experience, just because we know a meteor is probably going to destroy the vehicle and make it unusable, and you just hope the driver hasn't been killed in the process, that they're okay and can walk away unscathed, sort of Men in Black style or whatever. But that's just it, that human-assisted aspect of it. And we're just hiring here for a new data analytics position, and one of the research presentations from one of the candidates was actually on something very similar to this idea, pretty much exactly this. What they were basically discussing was theme park admissions, predicting attendance, and they had data for this. They had human experts making predictions, they had pure AI making predictions, and then they had human-assisted AI. So they had three different versions of it. And they found that while the pure AI was pretty good at predicting, and the humans were pretty close, neither was as good as the human and the AI system working together, and we're talking about a pretty large gap there. He had this graph comparing the daily actuals to what the system predicted, and if you look at it, there were a lot of peaks and valleys that were way off. Overall, it was like the human predictions were around 69% accurate, the AI system was 71% accurate, but the two together were like 93 or 95 percent, much greater. And just watching the daily movement in that graph was so incredible, because it was much smoother and closer to accuracy. Because of those things like the black swan events, right? One of the things with this theme park was, I believe, the death of a beloved figure in that country, and it was very clear that that was going to impact attendance. The system wouldn't have caught it, but the humans did. And that's why that human intervention with the AI led to those better outcomes.
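A rough sketch of what the "human-assisted AI" setup Frank describes could look like in code is below. The function names, numbers, and adjustment rule are illustrative assumptions, not details from the study he mentions.

```python
# Hypothetical sketch of "human-assisted AI" forecasting, loosely modeled on the
# theme-park attendance example above. All numbers and names are illustrative.

def model_forecast(day_features: dict) -> float:
    # Stand-in for whatever trained model produced the ~71%-accurate predictions.
    base_attendance = 20000
    return base_attendance * day_features.get("seasonal_factor", 1.0)

def human_adjusted_forecast(day_features: dict, analyst_notes: dict) -> float:
    """Let a human analyst scale the model's output for events the model has
    never seen, such as a national day of mourning."""
    prediction = model_forecast(day_features)
    if analyst_notes.get("black_swan"):
        # The analyst supplies a multiplier based on judgment, not historical data.
        prediction *= analyst_notes.get("adjustment_factor", 0.5)
    return prediction

# Example: the model alone would miss the attendance drop after a beloved
# public figure's death; the analyst flags it and scales the forecast down.
features = {"seasonal_factor": 1.2}
notes = {"black_swan": True, "adjustment_factor": 0.4}
print(human_adjusted_forecast(features, notes))  # 20000 * 1.2 * 0.4 = 9600.0
```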

Paul Harvey  12:21  
Yeah, I found another example of a study like that after you mentioned something about it the other day. Very different context, but it arrives at the same conclusion. It's a Harvard pathology study from 2016, where they were identifying breast cancer cells. I can put up a link to the article I found about it. It's a Forbes article that says: on its own, the automated diagnostic method was able to accurately identify cancer cells roughly 92% of the time. This figure stands just shy of the success rate of human pathologists, who accurately identify cancer cells about 96% of the time. But together, 99.5%.
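One simple way a model and a pathologist could be combined is to let the model handle clear-cut cases and defer uncertain ones to the human. The sketch below is a hypothetical illustration of that idea; the study Paul cites doesn't describe its actual combination method here, and the threshold and inputs are made up.

```python
# Hypothetical sketch of combining an automated classifier with a human reader:
# the model decides the clear-cut cases and defers the uncertain band to the
# pathologist. Threshold and inputs are invented; the study cited above does
# not specify its actual combination method.

def combined_diagnosis(model_prob_cancer: float, human_reader, image) -> str:
    """Defer to the human whenever the model isn't confident either way."""
    if model_prob_cancer >= 0.95:
        return "cancer"
    if model_prob_cancer <= 0.05:
        return "benign"
    return human_reader(image)  # uncertain cases go to the pathologist

# Example: a borderline model score gets routed to the human reader.
print(combined_diagnosis(0.60, lambda img: "cancer", image="slide_042"))  # "cancer"
```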

Frank Butler  13:01  
That's incredible. That's the game changer. Talk about early intervention: if you can identify that stuff early and increase your accuracy, in this context that's lives saved.

Paul Harvey  13:12  
Yeah. It's that sort of thing that leads me to say that... we have this notion of artificial intelligence in the popular consciousness as the scary thing where computers are smarter than us and take over the world. But I don't think anyone actually working in that area sees AI ever being completely in charge of anything. There's always going to be a human element involved. And to be fair, at least nominally, there is supposed to be a human element in the Amazon process, not just in the firing decisions but in the ongoing evaluations. Apparently, managers have the ability to override what the computer comes up with. But what do we know from everything we've talked about? When one part of your job becomes more efficient, what happens? It gets backfilled. It creates a vacuum, where you've got more time now than you used to, and we tend to backfill that time with other work. So it becomes much easier to just say, yeah, okay, hit that OK button, approve whatever the algorithm says the performance evaluation is supposed to be, and get on with your other jobs. So I think that's some of what we're seeing with Amazon, too: the human intervention is not really functioning as human intervention, and they seem to be letting the AI system run the show, probably more than it was designed to.
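A minimal sketch of the kind of human-in-the-loop override Paul describes: the algorithm only recommends, and a manager review gates the final call. All names, metrics, and thresholds here are invented for illustration; this is not Amazon's actual system.

```python
# Hypothetical sketch of a human-in-the-loop gate: the algorithm can only
# recommend a termination, and a manager's review makes the final call.
# Field names, the threshold, and the review logic are all invented.

from dataclasses import dataclass, field

@dataclass
class DriverRecord:
    driver_id: str
    on_time_rate: float                                   # fraction of deliveries on time
    incident_notes: list = field(default_factory=list)    # context the metrics can't capture

def algorithm_recommendation(record: DriverRecord, threshold: float = 0.90) -> str:
    """Purely metric-based recommendation, standing in for the automated system."""
    return "terminate" if record.on_time_rate < threshold else "retain"

def final_decision(record: DriverRecord, manager_review) -> str:
    """Only act on a termination after an explicit manager review. If the manager
    rubber-stamps every case without reading it, this gate adds nothing, which is
    the failure mode described above."""
    recommendation = algorithm_recommendation(record)
    if recommendation == "terminate":
        return manager_review(record, recommendation)
    return recommendation

# Example: a manager who reads the context overrules a metrics-only firing.
record = DriverRecord("driver-123", 0.85, ["apartment gates locked all week"])
print(final_decision(record, lambda r, rec: "retain" if r.incident_notes else rec))  # "retain"
```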

Frank Butler  14:30  
Yeah, I think the general gist is that AI can be an extremely good tool for providing job performance information, especially in conjunction with human input. And there's that element you mentioned of taking away biases and just focusing on the output. If you get the system set up right, the output can be really quite solid at helping in the future of the work environment. If we're thinking about more people working hybrid, or straight remote, or a mix of all of it, or being full time on campus, think about it from the perspective of a manager with a team reporting to him or her, and that team is diverse in their work modalities. Having a tool that takes away, for example, the bias of, hey, I'm at the office every day, I'm seeing one of my employees at work every day, so they suddenly become my favorite because we talk and get to know each other more and now they're friends-ish. That's going to make it a challenge to appropriately evaluate somebody who's in the office two days a week, or one day, or three days, or even the person on your team who you only ever see in a Zoom meeting. That's where these things can become very useful, to help balance that evaluation out and create a foundation for the manager's evaluation. As long as we don't go full extreme, like HR Tracker, right? HR Tracker, I feel like, is the extreme version of what Amazon's doing, which I think we were pretty clear we thought was a terrible idea.

Paul Harvey  16:14  
Well, two things. For those who are perhaps newer listeners who aren't familiar with HR Tracker, go back and listen to... our second episode?

Frank Butler  16:24  
The 40 Hour Workweek. I think it's the first full episode, not...

Paul Harvey  16:27  
The 40 Hour Workweek episode, I think, is where HR Tracker made its first appearance. The other thing: you're talking about objective data helping to cancel out managerial biases in the evaluation process. Even if you like the employee, maybe their performance isn't great. And if you've ever been in the position of having to give a negative performance evaluation, it sucks. It's awkward, and it's even more awkward if you're, like, buddies with the person. So if you've got that automated data, you can say, yeah, you know, we go way back, but look at these numbers, we've got to do something. So I think it makes it easier for a manager to do the hard part of the job sometimes, to say, look, these are things you need to work on. And it takes some of the onus and some of the awkwardness off the shoulders of the person and puts it on the data, potentially.

Frank Butler  17:19  
See, I think that's what's huge. The potential there is so monumental: moving the onus from the manager to the facts, in essence. I really find that our evaluation processes are guided by the personality of your manager. And by that I mean, if you have somebody who's non-confrontational by nature as a manager, in evaluating their reports they're going to be more likely to avoid the negative performance aspects they actually have to critique you on and tell you how to get better. That's not good. You want to be truthful and transparent about, hey, you need to work on this, instead of giving that okay-you're-doing-just-fine, let me just give you that check mark, you're meeting expectations, let's move on. No, no, let's be real, let's get the real evaluation going, because now we have this tool to help support that. In my context, I would have a lot of students who were missing an assignment, or it was going to be late, or they were going to have to move a test, or a grandparent died, whatever. And one of the things I did is I changed it so that if you have to miss any graded assignment, you have to go to our Dean of Students office to get verification for the reason. And the reason I did that is because it took any personal bias away from it. Because I don't know about you, but I'm more inclined to be lenient with a student who I know is a good student versus a student I don't know. And that's not fair, right? I just don't think that's fair. And it makes it so much easier if I take it out of my hands and go, hey, you've got to go to the Dean of Students office. If they send me a letter with verification of your issue, whatever that is, it could be an illness, it could be a death, whatever, it's all good.

Paul Harvey  19:02  
So you kind of proceduralize it, basically, right? Follow these steps; if this is the result, this is the outcome.

Frank Butler  19:09  
I'll work with you then, right? It's as easy as that. And it's so good because it just means that everybody is treated fairly in that process.

Paul Harvey  19:19  
You know, sometime over the last year or two at my university, we started getting emails from some office when a student has had some event happen in their life that necessitates something like you said, assignment extensions or whatever. I never gave it a whole lot of thought, it just started happening one day, but it is kind of helpful. It kind of primes you: okay, the student's having legit issues, and if they come to you, we can vouch for them. So there's less questioning, and no "God, what do I do in this situation?" going on on my end. It kind of accomplishes what you're saying. I wonder if that was the genesis of that new procedure, because I like it. Like you said, it takes my own biases out of the equation and makes it clear that this has been given the okay or it hasn't.

Frank Butler  20:08  
There's also a team that's dedicated to verifying some of those details, right? We need a copy of the obituary, a note from your doctor, whatever it is.

Paul Harvey  20:17  
We need someone to attend the funeral. We need to see the body.

Frank Butler  20:21  
We need to see the body, pictures of it, or it didn't happen. It's morbid, but it's true, though, right? I've appreciated that, because it takes the personal bias away. And I think that's where this AI system can be super helpful in a manager-employee exchange, talking about actual performance and how they're doing. I think there's clearly something missing with regard to, like, personality challenges, right? Because there are some people who are difficult to work with, or they don't work well with others, they don't play nice in a team.

Paul Harvey  20:56  
Yeah, we see that in sports all the time, right? Like the basketball player whose individual stats aren't great, but the whole team, the other four guys on the floor, collectively play better when he's on the floor. How do you capture that in, you know, an algorithmic sort of way?

Frank Butler  21:10  
Exactly. And that's difficult, right? That's where you need the human side of it. If that person is detracting from getting things done, even though they get their own work done at a very efficient level, that's not good either. If it's pulling a team's efficiency down, that doesn't mean you fire the person; maybe they need to be reassigned to a different role, or a different context in some way. But people are sticky. And if people don't know what I mean by sticky: when you have a certain skill or competence, it's very difficult to move you into a different position. If you're good at programming in Java, moving you to programming in COBOL might be a stretch, because it's going to take you so much time to get up to speed. Or if you're good at sales, you're just a natural at selling things and building those relationships and closing big deals, cool as a cucumber under pressure, you're probably not going to be well suited to a back-end job where you're sitting in front of a computer doing analysis and such. It's probably not in your strengths.

Paul Harvey  22:15  
And then you need the human manager to evaluate the trade-off there, to say, okay, this person is really great at this, really bad for these other reasons. Is it worth keeping them on or not? It's a human decision at that point. In a similar vein, a potential downside of automated evaluation is that it opens itself up to employees gaming the system. What's the term for that? When you play to the measures, basically. You focus in on the things that are being measured, just the things being measured, and nothing else. It's notoriously difficult for a computer to understand when that's happening, but a human can usually see it quite easily. So again, there's a lot of potential here. There are all the benefits of the human mind and all the benefits of computational power; you've got to put them together. You can't just rely on replacing one with the other, like what seems to be happening in these news stories. And coming back to my frustration about this, you look at the comments sections of these articles and Reddit threads and such, and we're very much looking to throw the baby out with the bathwater here: yes, get rid of all this AI nonsense. And that's what's going to happen if it's misused like this.

Frank Butler  23:28  
And I think that's the dichotomy of people's reactions today, right? It's either all of something or nothing of something. And that's the challenge: just because it's being misused here doesn't mean we can't get it right. The question is, how do we encourage and incentivize organizations to do it the right way? In the case of Amazon, it's maybe by not buying from Amazon until they get less finicky with their employees, but its convenience is outweighing some of the negativity.

Paul Harvey  24:03  
You can't argue with the results. Throughout this pandemic, Amazon has been quite incredible. Yeah.

Frank Butler  24:09  
Yeah. And just as an aside, I keep thinking about how AI is firing people, and one of the articles we found, by Victor Tangermann at futurism.com, said: this time, artificial intelligence is literally taking jobs. And that immediately triggered that whole South Park thing about the jobs and the immigrants. It's not the immigrants now... I don't want to get political or anything like that, but it's just kind of funny how this theme of "they're taking our jobs" has shifted. Now AI is the threat.

Paul Harvey  24:44  
It's funny because it's been going on for so long. We've always had these fears of automation taking our jobs, and so far it hasn't. I mean, yeah, it's replaced humans in some capacities, usually really unpleasant, dangerous, boring types of jobs, but we've added jobs elsewhere to compensate. It's worked out okay so far. But this time?

Frank Butler  25:04  
Exactly, yeah. And that tends to be my sort of argument against universal basic income. At the moment, we're not at a point where we're seeing huge job losses because of automation; we see an evolution of industries. The more automated we get, the more jobs are needed in manufacturing, or for maintenance of this automated equipment. It might not be run by a person, but you've got to take care of these things and keep them going, and that's a skilled job. So we see these shifts going on. I mean, we no longer really mine coal the way we once did, but now we have to install solar panels and wind turbines. There are these shifts, and that's what I think happens in that process. But anyway, that was just a little aside. It gets me thinking, exactly as you said, we've been talking about people losing their jobs since, what, the '20s? At least the Industrial Revolution, and how automation is going to eventually get rid of all the jobs. And if we go back even further, Alexander Hamilton intentionally wanted to make the US more manufacturing-oriented because it was going to be a tool to get rid of slavery. Manufacturing was the great equalizer, at least that was the thought in the 1700s. So I still think we're a long way away from humans being replaced by technology, because there's going to be human intervention needed in so many different areas, and new jobs created because of these changes and transitions in the way we do things.

Paul Harvey  26:37  
And hey, maybe we'll reach a point where we don't all have to work 40, 50, 60, 70 hours a week. We can all work 10 hours a week and the machines can do the rest, and...

Frank Butler  26:46  
we still get paid. 

Paul Harvey  26:47  
Still get paid

Frank Butler  26:47  
And have good lives. Everything's good, we enjoy the work-life balance and are able to survive off that reduced-hours paycheck versus having to work four jobs, and...

Paul Harvey  26:55  
Which sounds ridiculous. But if you look at where technology has come in the last 100 years, we could be a lot closer to that than we are, if we wanted it to be.

Frank Butler  27:04  
I think we could be, easily. I think we could be there in some ways. We could certainly have started having that push, let's put it that way. Now, I think we've saved the crown jewel of how badly Amazon implemented this project for the end here, and that's how they send those emails, or at least who signs off on them. And I'm using this term loosely here: they have these auto-generated emails that are signed with Indian names.

Paul Harvey  27:36  
Almost exclusively, based on what we're seeing in these articles. Banerjee... Gangadhar... sorry, Gangadhar M. Think about that. They're all Indian names, even though the emails are written by a machine.

Frank Butler  27:55  
And we went through that outsourcing challenge with India, right? There were a lot of concerns that outsourcing to India was going to take a lot of customer-service-related jobs and such. And it did have some impact, for sure, but that pendulum has swung back, because they weren't necessarily able to handle all of the challenges that needed to be taken care of here. So we've seen a rebalancing in a lot of ways. But talk about creating hatred toward a culture that's not even really to blame here.

Paul Harvey  28:27  
And admittedly, this is a bit of a conspiracy theory on our part, but it does seem really strange that they're signing off all these automated, written-by-computer emails with Indian names. Is that a strategic decision, like you're saying? Or has this maybe been outsourced, and those are actual people who are applying their names to these emails? But if you look at the examples in these articles, it doesn't seem that way. You'll have the same name applied to emails that wouldn't have come from the same person, basically. So it kind of seems like these are fake names. Why are they all Indian names? If anyone knows anything about the backstory here, we'd be curious, because it does seem like they're outsourcing the negative reaction to this, right? To some offshore Indian company that's supposedly doing it.

Frank Butler  29:12  
The Amazon leadership gets to point their finger: "No, no! It's somebody in India!"

Paul Harvey  29:15  
That was a mistake made over there. Yeah. 

Frank Butler  29:17  
When you point your finger, that means there are three fingers pointing back at you, Amazon. Actually, I'm curious. I mean, it is sort of a conspiracy theory on our part, but truly, the number of examples we've seen using these Indian names as the signature on these emails points to more smoke. Is it creating some sort of xenophobic perception or anger towards an entire group of people, or playing on xenophobic perceptions that are already there? I mean, you know? Honestly, I don't trust Amazon very much, especially as we see more information coming out about all this. You just kind of feel a little icky.

Paul Harvey  29:59  
It does start to feel like Walmart circa 2003 or something, where you spend a lot of money with this company and you're just starting to think, I don't know, man, it's starting to get a little bit creepy, isn't it?

Frank Butler  30:11  
Yep, it is. And that's just it. But if any of you guys know the story behind the signing off on these emails, or more about the way the system works, we're all ears. We want to hear about it.

Paul Harvey  30:23  
If anyone's experienced this, from either side of the equation, on the receiving end, or if you work at Amazon, we'd love to hear.

Frank Butler  30:30  
Yeah. And in particular, if you know of other organizations that are using something similar to this, it would be good to know. I'm very curious. This is a fascinating and awful thing, but I want to hear good things, too. I want to hear if there are companies doing this right in some way.

Paul Harvey  30:45  
And that's the thing, you wouldn't hear about it if a company is doing it right. It wouldn't even be noticeable, really. It would just be part of the job.

Frank Butler  30:52  
Right. And that's why I want to hear if there's anybody at an organization that's doing it well. Good point. Anyone, let us know. Thanks for listening, folks.

Paul Harvey  31:01  
I got an Amazon delivery coming so I gotta sign off. I probably do, unfortunately.

Frank Butler  31:09  
Actually, I just got billed for my Amazon Prime. Man, they really do get you, right? I mean, that's what companies have done today: they've created an environment where it's hard to let go because of the incredible convenience.

Paul Harvey  31:28  
They've done incredible stuff. I mean, they've earned their success. But yeah, they've kind of made us dependent on them.

Frank Butler  31:34  
Yeah. And it's tough, because... I guess I could break away from Amazon the same way that I broke away from Facebook, which was probably one of the better decisions I've made in my life, in terms of, you know, product usage, but...

Paul Harvey  31:48  
You'd have to, like, find out where to buy stuff. Where do you get stuff nowadays? Again, this is like Walmart was 20 years ago, when you start realizing, I don't know where to buy, like, pillows besides going to Walmart. It's starting to feel like that with Amazon. There are some things where, like, where else would you go to buy them?

Frank Butler  32:05  
Or at least for the price, it's where you go. That's been another huge thing, and the selection. Yeah, sometimes it's too much, though, like there's just too much choice. But that's a whole other aside. We'd love to hear from you guys. Please make sure to like and subscribe to the podcast, and please rate it. We would love to have you rate it. And definitely send us an email at input@busynessparadox.com. Reach out to us and let us know. We're on Twitter at @BusynessParadox, and we're also on Reddit too. We're going to start using those handles a little bit more. We've got some stuff that we're going to be getting on there too, and we're considering doing an Ask Me Anything, an AMA as they call it. So keep an eye out and stay tuned. Thanks, everybody.

Paul Harvey  32:49  
So long, folks.

Transcribed by https://otter.ai
