What do you mean, not exactly?
Well, I mean I just assumed that this man--
Did you say you assumed?
Yes.
Your honor, may the defense counsel use that blackboard?
Yes.
Thank you, your honor. Miss Olaf, you should never assume because when you assume, you make an ass of you and me.
You are listening to WREK Atlanta, and this is "Lost in the Stacks," the research library rock and roll radio show. And I am lost in the cords of my headphones here, and I'm back.
You're sounding good.
Oh, thank you. I am Charlie Bennett in the studio with Fred Rascoe, who you just heard, and Marlee Givens, who will speak in a moment. Each week on "Lost in the Stacks," we pick a theme and then use it to create a mix of music, library talk, and the cords of these headphones. Whichever you're here for, we hope you dig it.
MARLEE GIVENS: Today's show is called "Assumptions About an AI Future."
CHARLIE BENNETT: Tools like Jat-- Jat-- ChatGPT are quickly insinuating themselves into our lives and thus into the academic discourse despite all our best efforts.
Oy vey. OK, on past episodes, we've called ChatGPT a shiny new toy somewhat derisively, mostly based on assumptions and first impressions. So today, we thought we'd explore a few of our assumptions, and maybe we'll learn whether our assumptions still hold.
And you know what happens when you assume.
That's right. You end up lost in Jacksonville Beach. We have a lot of assumptions to cover. So be prepared for very shallow dives, which is our trademark, and knee jerk reactions.
That's not our other trademark.
It is today.
All right. Our songs today are about automated machines, diminishment of human connections--
Uh oh.
--and, of course, assumptions. I guess we're even assuming that it's OK to assume. So let's start with "May I Assume" by Shafiq Husayn featuring Jimetta Rose and Fatima right here on "Lost in the Stacks."
"May I Assume" by Shafiq Husayn featuring Jimetta Rose and-- Fatima? Fatima? Fatima? Fatima? Uh. One of those has got to be right.
Why don't you just assume one and roll with it?
I'll just assume the one that Marlee said was right, and I think--
I know it's not Fat Imma. So--
Wow. Happy Friday, everybody.
Oh man. So, this is "Lost in the Stacks," and our show today is called "Assumptions About an AI Future," kind of a loosey goosey show today.
Don't you feel like it's really assumptions about this AI present? I mean, aren't we kind of in it right now?
I guess we're, like, kind of looking at the present and extrapolating towards what that means for the future. So I mean, AI is probably going to be a seismic change in our industry. There have been lots of seismic changes in librarianship that we've seen in our lifetimes.
Our profession is always in crisis.
And a lot of the changes that people think come along, you know, they're bright shinies. But some of the bright shinies are things that kind of stick around. But we've seen new things come along that librarians at the time really scoffed at, derided, and--
Do you have any examples, Fred, of some of those things that librarians pooh-poohed when they arrived on the scene?
Well, some of the technologies that have arrived on the scene in our lifetimes, you know-- and we're all in the 50-ish range of age. I'm just about there. Well, for one, smartphones. That's pretty-- within the last 20 years. Wikipedia-- I mean, early on in "Lost in the Stacks," I think there were some very anti-Wikipedia shows.
I'm still anti-Wikipedia.
Definitely anti-Wikipedia editors.
Mostly because of the echo chamber, which we will talk about in just a minute.
Yeah. Google.
So, yeah, Google came along when I was in library school.
Yeah.
And did they say DIALOG forever?
I'm sure some people--
You know, when I was in library school-- and we'll talk about the-- we'll talk about this. Google was really derided, especially by the faculty. And this is 2002, 2003.
And I should very quickly provide context for what I just said. DIALOG, all caps, is the name of a database and interface that looked kind of like a DOS prompt that we were all trained on briefly as librarians. And it was fading as I think all of us hit the scene.
Right, yeah. But I mean, things like the internet-- that's in our lifetime. Online catalogs-- you know, when I started college, they had the card-- the paper card catalog. There, it had-- it was existing simultaneously with the online catalog, but it was still there in 1992.
Yeah.
Affordable personal computers. The point is that, I mean, change-- these kinds of changes have transformed, you know, some things about librarianship or how we've done our jobs. And not all change has an upside to go with the downside. I'm thinking of technologies like blockchain here. I have yet to hear anything articulated that's beneficial about blockchain.
I think blockchain is the Beanie Babies of technological innovation.
Tamagotchis.
So, I mean, AI and ChatGPT is just like one of those things that we're going to look at and is like, well, is it a bright shiny that's going to last, or is it just a toy that we're just going to play with for a few minutes?
In 20 years in some library science class, someone will say, when ChatGPT arrived on the scene, many librarians derided it as an inessential part of the profession. But here, as we can now see, everything is ChatGPT.
And that brings me to--
Although that's a brand name, right?
It is, but we could say ChatGPT-like--
Yeah.
--or large language model systems. But that brings us to the thing that I want to do today is talk about some of these, like, assumptions that we have about AI and things like ChatGPT and how it will change, and we'll get through-- I've got a list here, maybe 10 or 15. And we'll get through as many as we can in our half hour here. And just-- we'll see what we think.
So you're going to make an assumption, Fred.
I am.
And then we're all going to respond to it.
And the first assumption is AI is going to replace librarianship. Let's start with a big one. That could be the whole show.
You know, I hate to be a cliche, but allow me to simply scoff [SCOFF] about AI replacing librarianship.
Yeah, I'm with you. It's not going to replace librarianship. It's going to be a tool that librarians use constantly.
CHARLIE BENNETT: Although I noticed out of the corner of my eye that Marlee just stared at me briefly when I said that. So now I'm interested in what you think.
Oh, no, no, no. I-- it's probably my resting whatever face that--
Your resting inquisitive face.
I'm constantly accused of, like, looking concerned about things.
I want to get back to what you said about Google coming along, you know, when we were in library school. For me, it was 2002, 2003. And Google existed before then, but it was, like, really picking up. And one of my professors in library school during a class, like, very haughtily scoffed at it-- you know, you'll get students in-- and I've seen this happen.
Students come in, and you tell them to search something on Google, and they start typing in, you know, what were the agricultural effects of the Civil War on southern-- you know? And just like, you're just going to need library skills to help these students do that. And here we are, over 20 years later, and search engines are designing themselves to answer those questions, and ChatGPT is designed to answer questions like that.
OK, but this brings up exactly the point about AI-powered machine learning text generators, right, if I can be a little bit dismissive. I'm so cranky today. You all should just know that. It has nothing to do with anything you all say to me. The AI-influenced sort of search environment is essentially a repeating echo chamber, right? It can read everything much quicker than a person can, and it can deliver a semblance of what it has read over and over and over again, right?
And so it can give you a result back. If you say, hey, what was the agricultural effect of the Civil War, it can say, the agricultural effect of the Civil War was this, this, and this. And then it will probably mention something like Captain America or Neil Young because of their culture and the number of mentions of these things that are not exactly connected. And I say Captain America because of the film Civil War, and I say Neil Young because of Farm Aid.
And these are things that would happen that if you were a person, you'd be able to at least say, well, that doesn't seem quite right. Let's dig into it a little bit.
And see, I think in this scenario, you're that library science instructor that I had in 2002, like, because Google was, relative to what it is now, kind of terrible in 2002. It was better than they thought it was, but it was terrible. But--
Fred--
--it's going--
Google is terrible now too.
Well--
CHARLIE BENNETT: Because it's using AI. Go ahead and type in-- while we're on the break, you'll type in agricultural effects of the Civil War and then look at those answers that-- like I said, I'm sorry. I'm very cranky. I feel like I'm kind of being painted into a corner of defending these technologies like Google and Wikipedia as a--
If you didn't want to be a defender, you shouldn't have worn that T-shirt.
Oh, I'm not even wearing a T-shirt.
That says, I love AI and want to use it in librarianship.
I didn't know where you were going with that. Yeah, I think smartphones, Google, Wikipedia, I'm not saying that those are, like, net good things that have happened to humanity. But librarians existed still even though they came along and adapted to use them.
It didn't replace it. Influenced it deeply.
Yeah, and ChatGPT is obviously-- ChatGPT has come along that can tell you that "Captain America: Civil War" is not part of the American Civil War and has nothing to do with agriculture.
Now, let's also be accurate--
That's an assumption I'm making about the future.
A great assumption too. Let's remember that AI is, much like the internet, much older than when we first started talking about it in popular culture, right? This stuff's been going on for a while. We do have to come to an end of a segment, but Marlee, I really feel like you've been excluded from this. You want to throw one last thought as we head into the end?
I think that AI is-- it fits right in with librarianship. What I've learned in the last few years, especially some of the guests that we've had on this show, is that the library is basically turning a mirror on the rest of society. And I feel like that's what AI is doing to us now. And some people are going, that's me. I'm comfortable with that. I'm going to trust this thing. And other people are going, no, no. That is not me. That is not anyone I know. I'm not going to trust this thing.
That's such a more measured response than anything you and I said, Fred.
I'm glad we closed with that one.
This is "Lost in the Stacks." We'll be back with more assumptions about an AI future after a music set.
File this set under BD555.K5.
That was "You Can Still Change Your Mind" by Tom Petty and the Heartbreakers. Hey, Charlie. That's that jangly music that you like.
I do like Tom Petty even if sometimes, it sounds like REM.
And before that was "Put Your Halo On" by Lungfish.
Yes.
Those are songs about being uncomfortable with prevailing assumptions. All right, this is "Lost in the Stacks," and we're talking about assumptions about an AI future. And, OK, in the last segment, we let that one assumption kind of get away from us time-wise. Will AI replace librarianship? I think we all landed on no.
Mm-hm.
CHARLIE BENNETT: Yeah, but like, no, but we're going to have to deal with it anyway.
Exactly. Yeah, yeah. So let's try in this next segment to just kind of, like, maybe whip through some more of these that I've got the--
OK, let's crank them out.
And if you guys have any of your own that you think of, please feel free to shout them out. But anyway, here's the next one on my list that I've got. AI will improve education and teaching. Marlee, let's go to you first.
Yes.
OK.
But it's actually-- I think it's related to-- so, speaking as a teacher, I think it's related to a later assumption that you're going to read out about mundane tasks. There are a lot of mundane tasks in teaching, and especially if you're just trying to come up with just one new idea or you're trying to kind of automate something that you do regularly-- you know, like, you're putting a lesson plan together.
ChatGPT, this is actually my first experience with people discussing ChatGPT in particular on the internet. It was educators talking about using it for things like creating lesson plans. And I've used it to sort of generate potential research questions around a topic so I can at least sort of come up with some demo searches. So I mean, I think it will enhance things. It will free us up to do the things that we're better at.
Right.
What do you think, Charlie?
Oh, I think it's a lot like everything in the world. The people who are terrible will use it badly, and the people who are good will struggle with it and eventually will say, oh, that's better than it used to be even though it's not.
That's a good point. I think, to Marlee's point, using it to create-- oh, I'm creating examples for my class, things that I never would have thought of. But I guarantee you there will be professors sitting down at their desks with their head on their-- I can't face another, like, curriculum design. ChatGPT-- OK, sure. That's fine.
Just because the stakes are small does not mean that AI won't operate like an evil scientist's creation.
Now, I do think, though, the real danger is not in necessarily the teachers themselves but their bosses. Like, you know, the Board of Regents or whoever might say, why do we need all these teachers putting in all these hours doing these things? Can't we turn some of this stuff over to--
Yeah, can't we generate a lesson--
We've already got, at Georgia Tech, a very prominent graduate teaching assistant that's a robot.
Yeah.
Yeah. And, well, getting to students, students are going to cheat. That's my assumption.
Yeah.
Which I think is already, you know, well-known and established that that's going to happen.
Yeah, but that's what-- students cheat already.
That is an excellent point.
Yeah, let's be formal about this. The next assumption is using AI, students will cheat and cheat more.
Right.
And as Marlee just said, no, doi. I mean, like--
But professors are going to use AI to try to catch cheating. It's going to be using the-- fighting fire with fire, and they're going to be wrong. I think it's already happened a few times.
Yeah.
Professors have claimed people, oh, you've used ChatGPT, and--
CHARLIE BENNETT: This is like saying the internet is going to help students cheat and help professors catch cheaters. Like, yeah, obviously. But also, it's a mess. It's a horrible mess. People have been busted for using-- people have been busted for plagiarism by AI tools that read what other people write all the time and then say, oh, that's what people write. But then they also read all that, and then they write it.
And people say, oh, look. AI also writes that. So people who write a standard sentence-- like, a definitional kind of sentence-- get busted because of the promulgation of mediocrity that repetitive, echoing language learning can do. You know, when I was an English major, which is how I ended up a librarian-- familiar path-- one of my writing--
Whoops! Oh, I'm a librarian.
Right, exactly. One of my writing professors told us about cheating, not cheating in any automated way, but someone that would go and take old stories and just claim that they wrote them for their short story class.
There it is.
The mistake was using a Saul Bellow story one time.
[INAUDIBLE]
[INAUDIBLE]. I guess they assumed that that was an obscure author.
We have to end this segment, but we should use this last assumption to end because it's an easy one.
OK. Read it out, Charlie.
AI will bear the cognitive load of mundane tasks. Yes.
We had Robin Fay on who talked about that exact thing.
CHARLIE BENNETT: We did a whole show that says that that's a pretty easy assumption to say yes to. Yeah, she said, use AI as an intern.
You're listening to "Lost in the Stacks," and we will talk more about our AI assumptions on the left side of the hour.
This is Homestar Runner, and I am literally "Lost in the Stacks" on WREK Atlanta. Somebody help! I've run out of food! I don't know where I am in these stacks!
Thanks, Homestar, and thanks, the Brothers Chaps for that. One of the fundamental assumptions about artificial intelligence that applies to all of our discussions so far today is that of wish granting. We love how we can just enter a prompt, and AI grants our wish with generative text. Humans have told each other stories about the granting of fantastical wishes for as long as history has been written down.
It's powerful, that idea that something can come along and give us what we want with no effort of our own but the asking. It's what the author Robert Plotkin called the genie in the machine. But humans have always been in the business of making real wish granting machines or at least something close to it. You need to copy a book, don't do it by hand. Just arrange these metal letters on a press, and you can have a new copy in a fraction of the time. Voila.
You need to carry a message 100 miles away, don't send it on horseback. Use a telegraph to get it there instantly. Magic. You need to know the top three agricultural crop exports of Brazil? I have a device in my pocket that can answer that question almost instantly. No need to call a reference librarian, by the way. Incidentally, it's soybeans, sugar, and corn, according to USDA research. Anyway, of course, there are a lot of other technologies we can name that fulfill some wish.
The point is, however, that a lot of human effort went into the creation of these wish making machines. Likewise, artificial technologies didn't invent their own existence. They are the product of human effort and incorporate human flaws.
So, by creating these algorithms of wish fulfillment like GPT, is our human effort going towards teaching software to invent a useful and helpful tool like C-3PO in "Star Wars" or a destructive Skynet like in "Terminator?" My assumption is that whichever one we're really wishing for deep in our heart, that's the one we're going to get.
Oh.
File this set under KF3131.P58.
"Pay Your Way in Pain" by St. Vincent and, before that, "Hard to Explain" by Coricky. Songs about breaks and disruptions in human connection.
This is "Lost in the Stacks," and today's show is called "Assumptions About an AI Future."
CHARLIE BENNETT: Fred, how are we going to get through all of these?
Right, yeah. We've got a few more. So let's just go with the list that we have, and let's just do like a lightning round to kind of say yes or no for the rest of these assumptions that we got. And then we can dive back into some more of--
All right, I like it. OK.
So, the first assumption on my list for this segment-- eventually, it will be entirely powered by the algorithm and totally inhuman.
AI?
Mm-hm.
That seems like a dictionary definition, doesn't it?
You would think so. I've got a hard no on that one.
All right.
Yeah, I think I'm with Fred on this one. Maybe not a hard no. I think it will be surprising, though.
Maybe we need to get back into that one when we come around again.
Well, right, I do feel like-- so you're all talking about how there's people in the works--
Oh, there's people behind the curtain.
--putting all that stuff into it. Not the process, but how-- OK, you know what? I changed my answer.
OK.
Yeah.
Yeah. Definitely. Well, let's talk about--
CHARLIE BENNETT: Exploitation is a constant. So, next assumption that we might be able to get into deeper-- oh, this would be a good one. The sex worker industry will decide the future of AI.
Yes.
That seems like an interesting reversal of Bruce Sterling's standard sort of, you know, with streaming anything that happens to artists will eventually happen to everyone. So in this, it'll be, you know, what happens to AI when it interacts with this one industry will show what happens. But this is the difference between desire and inspiration, right? If it's a function built on desire, I want this. Obviously, sex workers will be part of how it happens.
I think it's going to be a lot of it. OK, next assumption for our lightning round-- AI will reduce our contact with humans in our daily lives. Doctors, schools. Stores.
Is it going to reduce it a lot more than it's already been reduced, or is it just going to continue the--
Right, and are you hoping for that future or dreading it?
No. I mean, I think that we are tribal creatures, and we're always going to crave that human connection. We'll just have human connection with different people, I think.
And yet, when I have to actually call someone on the phone, I feel really annoyed that I--
Oh, I do too. Yeah.
OK, another assumption, kind of a broad one-- AI is going to be dangerous to humanity.
You know, whenever this comes up, I think people start having that, like, "Terminator 2" feeling in the back of their head, and that's not how it's going to be dangerous. Much more of a "Wall-E" kind of future.
Yeah, I didn't specify the danger, I guess. So that's the hedge in this assumption.
Yeah. Yeah, the "Wall-E" future. Yeah.
No "Terminators" but definitely dead-eyed consumers.
How about AI will reinforce corporate power or economic power in the hands of the few?
Oh, come on.
OK.
Is this even an assumption, or is that just-- look. The moral arc of the universe, a lot of people think it goes one way. But really, what it goes to is back to the Gilded Age.
Yeah, I guess you can plug-- you can take AI out of that sentence and plug in anything else and--
Yes, will this new technology or product increase inequality and put power in the hands of the few?
Will this--
Yeah. Yeah, it will.
OK.
Although, has it done that for Wikipedia?
I think it's put power in the hands of the few whether they're economically power or not.
OK, yes. Absolutely.
Powerful or not.
Yeah, yeah.
The editors, they control--
Yeah.
CHARLIE BENNETT: --certain things. AI will result in less personal privacy. Yeah, I think our-- every aspect of our future is less personal privacy.
Yeah.
I'd like to dig into that just a little bit more. When we say AI will result in less personal privacy, I don't think it means that somehow we're going to be infringed on immediately. Like, oh, it's going to reveal stuff about you. But everything that is written and put on the internet and put out there into the world is being read and processed.
And so a lot of your secret thoughts or your internet confessions or the things that you kind of felt were hidden a little bit are going to be pulled into the great aggregate of how we understand humanity.
Yeah, things that you put on social media, things that you enter into search boxes, it's going to be in the hands of corporations, and they are going to know a lot more about you than you think you do from just, like, what you put on Instagram or the thing that you posted in a search box on Amazon.
Yeah, yeah. To quote a great classic, the only way to win is not to play.
Mm-hm, yeah. I didn't say that the internet or Google or all those technologies were a net positive for humanity, just that they've become ubiquitous.
CHARLIE BENNETT: You know, I want to go back to the sex worker thing because I feel like--
Yeah, all right.
--it pulls-- yeah, yeah, yeah. I feel like it pulls everything in that we were just talking about, right? And so the reason that the sex worker industry will decide the future of AI, the reason that that's a possible assumption, is because the crafting of a responsive entity is a huge part of sex worker progress, I guess. And you could have this sort of unexploited technology-- add big quote marks around unexploited.
[INAUDIBLE] it's always going to be a virtual girlfriend because a virtual boyfriend is probably unnecessary. Except maybe I'm wrong about that. Maybe I have my own--
You know, it goes back to--
--sense of--
--I think what you were saying about desire.
Yeah.
You know, whether it's-- maybe stereotypically, it'll be a girlfriend. There might be a boyfriend, you know? But it's all going to be about desire and the machine that fulfills the wish of that desire, and you don't have to interact with a human to have that desire fulfilled.
CHARLIE BENNETT: Or you can interact with what seems like a human that does more what you want than what you don't want.
And then if we add that to everything else we said-- reduce our contact with humans, dangerous to humanity, reinforce corporate power, result in less personal privacy. Hey, you know, what did you say to your virtual girlfriend? Because that got processed. That got recorded. I mean, you might as well have said, Alexa--
Yeah.
CHARLIE BENNETT: --can I tell you--
It's now data--
--my private fantasies?
It's now data in the hands of the few.
Yeah, yeah.
In a black box.
I said that word, and it woke up my phone. Whoops.
That's a perfect summation of our discussion here, a little bit of insidiousness--
No privacy.
--of insinuation.
Now, I made a kind of joking, like, "oh, come on" when we said reinforce corporate power or economic power in the hands of the few. But I think that's a very important thing for us to take just a moment with. I don't want to just be a, oh, come on response. I think each of you should throw a little something in on that one too.
I think it definitely is going to concentrate, at least in the near term-- and I'm [INAUDIBLE] this field, like, really explodes. I don't want to say that I know beyond that. But it is going-- the Amazons or the Microsofts or whatever of AI are going to develop-- maybe it's going to be part of a company that already exists like Google, or maybe it's going to be a new company. But there will be a Google of AI or Microsoft or whatever it is.
Yeah, I think so. And I also-- I mean, this is maybe not exactly what we're talking about but related. In so many cases, it's the loudest voices that dominate. And you know, I think that that's what's going to happen here. This is the kind of technology that, if it doesn't become its own behemoth, will get sucked up into one of these others.
So, we're about out of time for this segment.
Oh, we've been out of time for a while with AI.
Yeah. So obviously, it's all assumptions, and this is knee jerks and shallow dives. Nothing is inevitable, and no technology's destiny is foretold. Likewise, however, I'll just want to end on this thought. Humans have never ever, ever, ever, never, never, ever stopped themselves from doing something that they had the capability to do regardless of the negative consequences. Never, ever, ever, ever. 100% batting average.
Hey, Marlee. You remember how Fred was going to lean more into optimism and try and be less of a pessimist on the show?
I don't know. I thought he-- I thought he deliberately became a pessimist about halfway through. I don't know.
This is "Lost in the Stacks," and that's all the shallow dives we have time for. Let's play some music, Fred.
File this set under Q335.H87. You just heard "Your Dystopic Creation Doesn't Fear You" by Deerhoof featuring Awkwafina. And before that, we heard "Little Robot" by Mamalarky, songs about preparing for a future we may not want.
Today's show is called "Assumptions About an AI Future," and we talked about how AI might insinuate itself into our world of librarianship and higher education and a lot of other worlds too. We covered a lot of assumptions today, but what did we forget that we're going to have to talk about next time?
Well, what about the assumption that professors in academia hate ChatGPT?
We did not dig deep into that one.
Spoiler alert-- in a couple of weeks, we're going to have on a biology professor here at Georgia Tech on the show, and she uses it in her class, and she kind of likes it.
I would like to hear the alternate views.
Yeah, and I'm wondering, why is no one talking about how this is going to save us so much time that we can have a four day workweek or a four hour workweek or--
Finally.
--retire early? You know, what about you, Charlie?
Get engineered out of our jobs. I have not heard a lot about the fact that companies are building these, and then companies are saying, oh, we should stop building these until we figure out what's going on when they already have proprietary technology in play. Basically, why aren't we talking about chopping heads off so that we can get on? I should stop now. Hey, roll the credits, Fred.
See, I'm not the only pessimist.
CHARLIE BENNETT: I'm not a pessimist. I just know how bad things are. "Lost in the Stacks" is a collaboration between WREK Atlanta and the Georgia Tech Library. I chose some appropriate music.
This is some spooky stuff, dude.
FRED RASCOE: Written and produced by Charlie Bennett, Fred Rascoe, and Marlee Givens. Legal counsel and a heavily marked copy of an Alan Turing biography were provided by the Burroughs Intellectual Property Law Group in Atlanta, Georgia.
Special thanks to the humans working to make sure technology doesn't dehumanize us, and thanks, as always, to each and every one of you for listening.
Our web page is library.gatech.edu/lostinthestacks, where you'll find our most recent episode, a link to our podcast feed, and a web form if you want to get in touch with us.
Next week, "Lost in the Stacks" is a rerun. But after that, we'll be back to question a Georgia Tech faculty member's assumptions about AI.
Fred is all about the AI this year.
It's time for our last song today. Language model AI systems may be ubiquitous pretty soon, but for now, at least, they still seem to kind of just be the latest higher end flavor of the month, you know? Who knows what the future holds, though? So, let's close with "Flavor of the Month" by Black Sheep right here on "Lost in the Stacks." Have a great weekend. And by the way, none of today's show notes were written by ChatGPT.
I didn't think that was true until you said it wasn't true, Fred.
Van damn. Let's see what kind of flavor. Hm, do I want vanilla? Or do I want a taste of the chocolate? I want something different. I want something slamming. What's the slammingest flavor [INAUDIBLE]?