So, Abner, we've had some news over the last few days about Pixie: what it potentially meant for upcoming Pixel hardware, what it means for the future of the Assistant, and how it all ties into Gemini, with the Google Assistant almost taking a step back. Yeah. So to rewind: in December of 2023, we got a rumor that Google was working on some sort of Pixel assistant, codenamed Pixie. The report said it would help you accomplish your tasks, and it was pretty vague. It was a good year, based on the old timeline, before the next Pixel phones would come out, so it was a bit of an unexpected thing. People have been wondering what it is, and we got a better idea of that this week when we got a report
clarifying what happened to Pixie. At a high level, it was described as an AI assistant, an AI agent for Pixel that could complete tasks across different apps, which, I don't know about you, sounds a lot like what Gemini does. And at that time, in early December, Gemini was clearly in full swing. So that happened, and when the Pixel 9 launched, we didn't really get anything like that, did we? Anyways, in the last month it emerged what Pixie became: it ended up being split into the Pixel Screenshots app and some features for Gemini. The reason the original vision didn't come to fruition was that Google decided Pixie was indeed too much like Gemini, and they didn't want any overlap there, which I think at a high level does make sense.
What do you think of the name, though? Gemini is now obviously the flag bearer for Google's AI efforts, and like you say, it does make sense not to keep things too separate. But I'm not as convinced by the name Pixie. I think it's a terrible name. That said, I'm sure it was just a codename, nowhere near
what the final thing would be called. That being said, a lot of people have gravitated to the name; it's close to Pixel. I don't think it was ever being considered as the wake word, though. As the rumor originally stood, everybody thought it would be its own smart assistant, and it could very well have been. But given the competition from Gemini, I don't think it would have made sense to give Pixel owners their own assistant.
No. I mean, when you consider how much of an effort that would be, and, obviously, how power-hungry AI is, I wonder how many resources they could have allotted to it. That's a good point. That's a very good point. The Pixel doesn't need its own unique assistant.
To get a bit broader here with Android in general: I think one of the biggest strengths of Android, and I've thought about this more recently, is the ubiquity of what you can do across every single device. That's a really powerful thing. The way iOS and Apple handle things, you are within this, in air quotes, ecosystem, and it's very, very difficult to then separate from that and move to other services. I think the power of Gemini, and this is another theory I have, is that Google's efforts to decouple lots of portions of Android
And then doing the same with Gemini makes a lot of sense. For instance, I can go and get a Samsung Galaxy phone, and Galaxy already has, in air quotes, a version of Pixie with that multimodality in Gemini. I do wonder what Google's process would have been if they'd gone down the route of making this its own independent thing. How would they sell that
as an asset? Whereas Gemini is very easy: you can use it on your desktop, on your phone, on your Mac, on your PC. I really like the way they've gone about that, as opposed to an assistant for this device and this device only. That would have been the antithesis of what they've tried to do with the regular Assistant and all the other services they've built out over the past, what, 25 years? So I think the key point there, the thing you bring up: we don't know how useful Pixie was, or is. The report from last month has this quote
comparing it to Screenshots and the Gemini features it was split into: "neither product offers as many useful functions as what was initially envisioned for the Pixel assistant, according to the person." The thing to emphasize is "initially envisioned." We don't know what it would have actually been capable of;
it might well have still been in the early planning stages before anything came to fruition. So we don't really know how good it would have been. It sounded like it was on-device, not as power-intensive; it had to be local to the device rather than cloud-powered like Gemini. I think that's an important thing to keep in mind if we're thinking about what we lost by Google
sticking to Gemini. We do have a good idea of what on-device Gemini Nano can do today, and it's not that much. So yeah, I think the second phase of this story is something called Pixel Sense that leaked earlier this month. Yes. It was originally described as the new Pixie, or the new Pixie assistant, coming a year later, that kind of stuff. The report was framed as
"This is Pixie actually happening, and here's the new name." But looking at what has leaked, it talks about how it can streamline common actions that you do, and how it makes notes for you about things. To me, calling this Pixel Sense thing an assistant doesn't seem right, even before we got the report saying that Google does not want two competing products.
Yeah, it feels like an extension of what Google's already been doing with some of their other efforts, like Recorder, Screenshots, Smart Reply, all those kinds of things. From my own brief reading, it just feels like a step above that. I don't know. Does this feel like an in-between step? I think that's what I'm trying to say.
So again, I don't think this Pixel Sense, this Pixie, is a second smart assistant. You're not getting a new place where you can say a hot word and issue a command. At this stage, I don't think that's what Pixel Sense is. I think it's a background service.
Honestly, from the vague screenshot of the setup screen we've gotten, it seems like an extension of what Pixel already does. When you take a screenshot on the 9 series, you get a corner preview, and then a bar with suggested actions, like sending the screenshot to somebody, et cetera. I think at best, Pixel Sense is an extension of that: some kind of background service that tries to
provide proactive assistance. Again, I don't think it's something you talk to directly or even issue commands to. So I think that's what we're getting in terms of more proactive help with the Pixel 10 incoming. Yeah, I think that's where we're at with what this Pixel Sense thing is. I agree with you; this feels more like a passive integration as opposed to something you're actively engaging with, which I guess is what Gemini is intended to be.
You actively engage with Gemini to do something, whereas Pixel Sense is more like, a good analogy is you're in the kitchen and you reach for an ingredient, and it asks, do you want this other ingredient as well? I feel like that's probably where it sits in terms of usability. And again, I think it goes a bit beyond what we've had with Smart Reply and that kind of stuff by tying more directly to the
files, functions, and anything else you've saved on your device. It also feels a little bit like, I'm not going to say a rebuttal, but I guess it kind of is, to what Apple said they were going to do with their devices. With Apple Intelligence, it was very much, you can make queries based upon your on-device files. And I hope we get to see more of it in the next few months.
But I like the way it's siloed away from Gemini itself in some respects. If these both come together and become this extra high-powered version of Gemini with Pixel Sense integration, that makes a lot more sense to me than Pixie. Because, let's be completely honest, Pixel devices just don't sell globally in the numbers that, say, a Samsung Galaxy S25 will. So you can see, from a financial perspective, what Google are trying to do with Gemini. And does it even make sense to speculate? I don't know; that's the whole point of what we're talking about. It's a really, really strange thing to think about:
We never saw anything come to fruition, and if Pixel Sense is the only form of it, then to me that makes a lot of sense. No pun intended. Yeah. Stuff like Screenshots, stuff like Recorder: so far, the way the Pixel team has been adding AI to Pixel has been pretty thoughtful. It's not screen-grabby. Compared to Apple Intelligence, say, it's not as...
It's more subtle, more realistic. It's stuff that has actually launched. But to the point about Apple Intelligence, there's this idea of controlling your apps with your voice, which is interesting to me. The Gemini we have today has extensions, slash apps, through which you can access Gmail, YouTube, et cetera. The way you do stuff is through APIs
and integrations. What Apple Intelligence is trying to do is look at your screen and control your phone that way. It's like an agent that controls your phone, to use that specific terminology. And the biggest thing I was waiting for with Apple Intelligence was whether people actually want to navigate that way, instead of visually navigating and touching their phone. I'm curious whether they'll use the voice aspect, voice control, let's call it.
That was the vision with the Pixel 4 and the next-gen Assistant, which didn't really come to fruition. Since Apple was moving in that direction, I was under the assumption that Google would also do this app control via voice. But I'm not sure what the best approach is: background integrations and API calls, or literally replicating a human's ability to tap the screen.
I think that's maybe one reason why the split is the most sensible way Google could have handled this. If Pixie was its own thing, it and Gemini would overlap. Having these individual apps still gives you the benefit of both functions, so you don't have to use the full voice, hands-free assistant. You still get the best of both worlds. And Pixel Sense will tie into that, won't it? You'll have integration with the applications you need, and when you do want to use voice controls, or even the old-hat touchscreen options,
you're not losing out on functionality just by not accessing it with your voice. And to go even deeper, from an accessibility point of view that's quite good. Not everyone wants to use their voice, and not everyone can use their voice for certain functions.
And also, I'm sure there are a lot of people out there who really don't care about AI. I still sometimes wonder if I'm on the fence a little bit: some things I really like, other things I just cannot ever see a use for. So I wonder if there's a bit of playing both sides here, allowing people to take advantage of one side of things without going fully in. Although, then again, you can see the criticism: they should go all in on it
and go whole hog, a little bit like Apple tried to do and has seemingly either delayed or failed at so far. I feel like it's more of a delay. I do think they have actual prototypes; it's probably not hitting the accuracy rates they want, but I'm sure it exists in the lab. So to your point, we've been circling around the topic of whether people use AI, or whether they want to. The other day I posted something about Gemini use cases, and I think...
It's not a revelation by any stretch, but what I've come around to over the past few months, where I do feel like I've been using Gemini more to do tasks, is that AI is the hardest thing in the world to market. Like with the Pixel 9 series, we have this example of
using Gemini to take a picture of what's in your fridge and asking what can be made from it. Meal planning, let's call it. I'm of the belief that AI is generally useful, that it can help speed things up in people's lives. But it's just so hard to know where it could be helpful, to know to try AI in the first place, especially for the more average user, the common user. Like, the example I used: I was in the self-checkout line buying a dozen granola bars. I wanted to make sure I'd scanned all of them, so I took a picture of the receipt and had Gemini count. I could, of course, have counted them myself, but I was walking and talking, so I just had my phone do it. That's a very specific but absurdly boring example of how AI can be useful. You would not put that
in the marketing, in the ad, because it's so mundane and specific. But the thing is, I think people have dozens of those moments every single day that could benefit from AI speeding things up. It's just the hardest marketing problem, the hardest adoption problem. AI can be so versatile; it can do so many things if you know how to use it. But I think we're still so far away from that, especially since you're dealing with a chatbot UI.
Maybe this changes once you can talk to it and have natural conversations, but I think they're kind of far away from that. I do wonder if we're being softened up for this a little bit as well, the idea that Gemini Live is probably going to be a bigger component. And once the Astra integration rolls out more widely, I'm really excited to try that out for myself, to put it through its paces.
I agree there are going to be some options there, but at the moment it feels very deliberate. Most people's interactions with AI are very deliberate. I haven't really done much with the voice controls on my S25 Ultra. Maybe I would do things like set a timer or add something to my calendar,
and I know you can do those already, but I don't think I'd be the person saying, send this message and do this. I don't think I would chain multiple steps, or even trust AI at its peak to do that for me. And with a Pixel Sense, I don't know if I'd even use it. How many times a day were you accessing Google Assistant on your phone? Probably just to control the lights. Is that about the extent of most people's usage? I think it's a bit more than that.
So this recent Siri implosion, or whatever, where it can't tell you what month it is, really reminded me that we've had smart assistants, quote unquote, for the past 10 years. People have been living with this stuff, with basic voice control. Controlling smart lights is a good example, but so is people in their cars sending voice commands hands-free.
To them, that is a smart assistant. And the past 10 years of these smart assistants have been very poor. They've taken on the worst aspect of technology: the user has to conform to this command structure, this very stilted prompt structure, and people have spent the past 10 years learning how to phrase things in a specific way. I think that's the antithesis of what technology should be. It should be the other way around: you say anything in a natural manner, and Siri or Google Assistant understands what to do with it. But we simply haven't had that. And I think the real downside of these dumb assistants is that people have been dumbing down their expectations for what they can do with their phones.
We've had this sci-fi vision for years and years of people talking to computers just the way they talk to people. And my fear is that, in the past 10 years, people's expectations have been lowered so much that now that we have these actual LLM things, Gemini, Alexa's LLM upgrade, people have been conditioned not to try anything advanced. They're doing simple one-line commands instead of multifaceted things, orchestrating, in the example of Gemini, those extensions slash apps. You can use the Google Maps extension to call forth information and then have it sent as a text. These are natural commands: finding something and sending it to somebody. That should become ingrained in people once they realize they can actually do it successfully. I 100% agree. And this again raises that point: does Pixie, and again, I despise that name, even make sense? If it did exist, if it was ever going to be a product exclusive to Google Pixel devices, would it have made sense for Google to put all that development time, effort, and energy, teams spending potentially hundreds of thousands of hours, into building it out, only to have it reach such a small pool of people that it never really gets spoken about widely? Whereas this is integrated into Gemini, and the Gemini stack can run on practically any device.
And eventually, as devices get more powerful, you can run some of these LLMs locally. That just makes more financial sense, for a start. Secondly, if there are critical breakthroughs and you can basically control every facet of your smartphone, would it not make sense for Google to just say, hey, this is available on every single Android device? Google is really pushing the platform, a lot more recently than they probably ever have. But think about it logically from the perspective of a hardcore Pixel fan: you want something that truly differentiates your device. I must admit,
you could understand someone being annoyed, because there aren't as many true differentiating factors with the Pixel as there used to be. Yes, you get some functions early, you get updates earlier. But if Gemini is at the forefront of those updates now, and it's going to every device at the same time, then, personally, I think that's a great thing for the Android platform. As a Pixel owner, though, you don't have the exclusivity you potentially want, the way an iOS user has, or even a Galaxy user to an extent, because Samsung is getting a lot of these features first: Circle to Search, these multimodal controls. The saving grace is that it arrives on Samsung and Pixel at the same time.
Honestly, it seems like Google got a carve-out with Samsung: yeah, we can also do it on Pixel at the same time. Which, good. That should be non-negotiable. But like you said, the Astra stuff in Gemini Live is actively rolling out, very slowly, I'm sure, to manage the sheer load. The sheer load, yeah. One of the anecdotes I've seen from people who have it: in the first examples we got, it was kind of slow, but they say it feels like it's gotten faster in the past few days. So this will definitely take a long time to scale, but it's starting to happen. Google said March, and people are seeing it. I think we talked about this in a recent episode, the idea that it's Astra in Gemini Live.
Will it be out to everybody by I/O? I think that would make a lot of sense, right? I know that if you're listening to this episode, you care about what happens with Pixel devices and where this goes, but I think most people don't. So being as neutral, as logical, almost as dry as possible:
getting everything else out there first makes the most sense. Astra is probably going to be one of the biggest changes for Gemini that we'll see for a long time; I feel like it will be the biggest update for a long time, once people start getting out there and doing things with it. We've obviously made fun, slightly, of the WWDC keynote last year, when someone took a picture of a dog instead of just asking the owner about it. People are going to be able to do that with Astra within the next, say, two to three months. I think that's going to be a real big change. They can just take a picture and do it now. Yeah, but in real time, and get that feedback, and then be like, hey, where should I go from here? All those kinds of things we saw in the original Astra demo, hopefully working and functional. I can definitely foresee that being a big
inflection point for how people use Gemini Live specifically. Yeah. And all this stuff becomes more natural if you can put it on smart glasses, so that's probably the next big step. But I do think Gemini Live is the big thing, and from there, Gemini Live needs to get apps slash extension support so that you can actually take command of stuff.
So yeah, I think there's a very promising roadmap for Gemini Live. And the other next stage, I feel, is Project Mariner, Google's agent that can navigate the web for you. That's another big moment after this, if they can get it right. I think that could be a splashy I/O announcement. Remember when they announced the Assistant making appointments for you, the Duplex stuff? Something like that could be pretty splashy; that would be a great on-stage announcement. Yeah, 100%. And I think that kind of sums up where we are on Pixie and Gemini Live. But it would be a bit remiss of us not to mention some other news this week, particularly about the Android OS. We're Android guys; we love Android at our very core.
Effectively, Google has confirmed publicly that Android development is going to occur within Google's internal branches. There's going to be almost no public development, I guess, is that the best way to describe it? Yeah. So there are two branches, and a lot of development goes into the Android OS. I'd say most of it happens behind closed doors already, but there are some large components that people have visibility into and can track the changes to. That's now all going internal. The stated reason is to simplify development, which, now that we've got more outside opinion about it, makes sense. They don't want to keep merging stuff; they don't want to keep doing
extra work. It's a streamlining thing, which has been a common theme at Google for the past few months, especially since the merger of the Android and Pixel teams. Company-wide, they've been trying to streamline things and not do repetitive work. So I do think it makes sense why they're doing this. And the end result for regular people?
There's none, really. Yeah, I don't think anyone's going to notice. I guess it's a little bit like how previously you could sometimes be in a restaurant and see part of the dish being made in front of you, whereas now it's all behind a closed door and you just get the finished product, as it were. I think a lot of people will be confused, but if you have any base level of knowledge and interest in Android, you know this does not mean that Android is becoming closed source. I just think a lot of people don't realize that this means
that when these changes are released to AOSP, full new versions will just be pushed out when they're ready, as opposed to people seeing them take shape. And I guess it makes sense from the standpoint of not having Google, well, it's not leaking, is it? But they do allude to products being in the works.
We've seen it happen with multiple products in the past, with mentions of upcoming Pixels before they were officially, publicly acknowledged. I guess that's not going to happen as often now. I do wonder how it will affect certain leaks, but leaks happen regardless of the size of the company. And those tidbits, I agree with the original report, and with Ben on the site, can be a little bit overhyped sometimes. It's not like it's a secret that Google are developing Pixel products. Yeah. It's always interesting to see, but I don't think it's going to affect anything, like you said. Just to clarify any confusion some people may have: this does not mean Android is closed source. It just means you're not going to see
any major public development going on, if that makes sense. You're not going to see the build in progress. And in some respects, if Google are going to streamline this, it could mean fewer problem areas. Having everything in one place and coming out when it's ready means there's going to be less confusion, because sometimes people do get very confused and misinterpret things, which happens to the best of us. So I just want to make sure people are aware of it. Is it going to make a difference to how we do things? Probably not. We're still going to cover Android with the level of depth and intricacy that we normally do. But yeah, I just want to say
thanks again, Abner, for joining me. I know this has been a bit of a Pixie slash Gemini heavy episode, but it's always interesting talking about what's going to happen with the Pixel series. And hey, we're at the end of March; we could very soon be learning more about these devices. We're still waiting on the Pixel 9a, with no news on when it's going to be available to pre-order, but hopefully that'll be in the next few weeks as well. Thanks again, like I say, Abner, for joining me. This has been episode 47 of Pixelated. Just before we go as well: if you're listening to this and you enjoy what we do, listening to us ramble and talk rubbish, leave a like and subscribe on whatever platform you use, and share it with your friends, anyone who's interested in the Pixel series. We'd really, really appreciate that. Thanks again. All right, bye.