
Wed. 05/22 – Humane Already To The Deadpool?

May 22, 2024 · 17 min

Episode description

All the AI announcements from Microsoft Build. I know it’s only been a minute, but is Humane already circling the Deadpool? They’re supposedly shopping themselves, but at a valuation that seems… shall we say, on brand for them? Don’t forget Alexa needs an AI upgrade. And the efforts to peek inside the black box that is the Large Language Model.


Transcript

Welcome to the Techmeme Ride Home for Wednesday, May 22, 2024. I'm Brian McCullough. Today: all the AI announcements from Microsoft Build. I know it's only been a minute, but is Humane already circling the Deadpool? They're supposedly shopping themselves around, but at a valuation that seems, shall we say, on brand for them? Don't forget that Alexa also needs an AI upgrade,

and the efforts to peek inside the black box that is the large language model. Here's what you missed today in the world of tech. The Microsoft Build conference kicked off yesterday, and once more, you can absolutely imagine what was the main topic of conversation. First up, Microsoft will soon let businesses build custom Copilot AI agents to automate tasks, and unveiled Team Copilot to help with tasks in Teams, Loop, and Planner.

Microsoft will soon allow businesses and developers to build AI-powered Copilots that can work like virtual employees and perform tasks automatically. Instead of Copilot sitting idle waiting for queries, it will be able to do things like monitor email inboxes and automate a series of tasks or data entry that employees normally have to do manually. It's a big change in the behavior of Copilot, in what the industry commonly calls AI agents,

or the ability for chatbots to intelligently perform complex tasks autonomously. We very quickly realized that constraining Copilot to just being conversational was extremely limiting in what Copilot can do today, explains Charles Lamana, corporate vice president of business

apps and platforms at Microsoft, in an interview with The Verge. Instead of having a Copilot that waits there until someone chats with it, what if you could make your Copilot more proactive, and for it to be able to work in the background on automated tasks? Businesses will be able to create a Copilot agent that could handle IT help desk service tasks, employee onboarding, and much more. Copilots are evolving from Copilots that work with

you to Copilots that work for you, says Microsoft in a blog post. These Copilot agents will be triggered by certain events and work with a business's own data. Here's how Microsoft describes a potential Copilot for employee onboarding: imagine you're a new hire. A proactive Copilot greets you, reasons over HR data to answer your questions, introduces you to your buddy, gives you the training and deadlines, helps you with the forms, and sets up your

first week of meetings. Now, HR and the employees can work on their regular tasks without the hassle of administration. You can build Microsoft's Copilot agents with the ability to flag certain scenarios for humans to review, which will be useful for more complex queries and data. This all means Copilot should operate within the confines of what has been defined as the instructions and actions that are associated with these automated tasks.
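
To make that pattern concrete, here's a minimal, generic sketch in Python of the event-triggered agent loop being described. To be clear, this is not Microsoft's actual Copilot Studio tooling, which is low-code; every name here is hypothetical, and the LLM decision step is stubbed out.

    # Hypothetical sketch of an event-triggered agent with human review.
    # Not Microsoft's Copilot Studio API; decide() stands in for an LLM
    # choosing among the actions the agent is confined to.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Event:
        kind: str       # e.g. "new_email", "new_hire"
        payload: dict

    # The "instructions and actions" the agent is allowed to take.
    ACTIONS: dict[str, Callable[[dict], str]] = {
        "file_ticket":  lambda p: f"IT ticket filed for {p['user']}",
        "send_welcome": lambda p: f"onboarding welcome sent to {p['user']}",
    }

    def decide(event: Event) -> str | None:
        # Stand-in for the model mapping an event to an allowed action.
        return {"new_email": "file_ticket", "new_hire": "send_welcome"}.get(event.kind)

    def run_agent(events: list[Event]) -> list[str]:
        results = []
        for event in events:
            action = decide(event)
            if action in ACTIONS:
                results.append(ACTIONS[action](event.payload))
            else:
                # Complex or unrecognized cases get flagged for a person.
                results.append(f"escalated to human review: {event.kind}")
        return results

    print(run_agent([Event("new_hire", {"user": "dana"}),
                     Event("invoice_dispute", {"user": "kim"})]))

The design point mirrors what Microsoft describes: the agent only acts within a predefined set of actions, and anything outside that set is escalated to a human rather than handled autonomously.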

Microsoft also launched Copilot Extensions for GitHub, letting developers build third-party skills into Copilot, starting with DataStax, Stripe, MongoDB, and more. There's a new AI feature for Edge to translate spoken content via dubbing and subtitles live on YouTube, LinkedIn, Coursera, news sites, and more. They also announced the general availability of their Phi-3 models, including Phi Silica, a 3.3 billion parameter model that will be embedded

on all Copilot+ PCs. Finally, they announced a developer preview of Windows Volumetric Apps, letting developers access an API to put Windows apps in 3D space on Meta Quest headsets. So, Microsoft using Meta to answer the challenge of the Vision Pro? Quoting The Verge: You can already beam your flat Windows desktop and its VR games onto your Meta Quest headset. But what if Windows could send HoloLens-like 3D apps and digital objects to the

headset too? At Build, Microsoft has just announced Windows Volumetric Apps on Meta Quest, a way to, quote, extend Windows apps into 3D space. Details are slim, but the company showed off a digital exploded 3D view of an Xbox controller from the perspective of a Meta Quest 3 headset, a digital object you could manipulate with your hands, and says it took its software

partner Creo a single day to bring that interactive visualization to Quest. Microsoft says devs can sign up for the developer preview today, which will give you access to an unnamed volumetric API. It's only been a few months since Microsoft ditched its previous Windows Mixed Reality initiative, which relied on an array of Windows PC partners to build wired headsets that

would plug directly into a PC. In April, Microsoft partnered with Meta on a limited-run, Xbox-themed version of the Meta Quest, and it introduced Office apps and Xbox Cloud Gaming in Quest VR last December. Sources are telling Bloomberg that Humane is seeking a buyer for its business after the rocky launch of their AI Pin. A source says the startup is seeking a price of between $750 million and $1 billion. Quote, the company is working with a financial adviser to assist

it, said the people, who asked not to be identified because the matter is private. Humane was founded in 2018 by two longtime Apple veterans, the married couple Imran Chaudhri and Bethany Bongiorno, in an attempt to come up with a new AI-powered device that could potentially rival the iPhone. Last year, it was valued by investors at $850 million, according to tech news site The Information. The company has raised $230 million to date from a roster

of high-profile investors, including OpenAI chief executive officer Sam Altman. Humane's potential sale comes at the same time that other competitors are also expanding AI hardware efforts, such as the handheld Rabbit device as well as Meta's AI-powered Ray-Bans. But so far, none of the technology has become mainstream. I don't like to snark at things like this, companies potentially going out of business, or failing, or falling into being acquired. There's obviously a metric ton of snark about

this online, though. Look, hardware is hard, and I think back to six months ago, and people were like, ooh, an entirely new form factor for connected mobile hardware. Interesting. And this is a well-trodden path in Silicon Valley, especially in hardware: people who have had massive success inside a larger company strike out on their own. Sometimes you get a success

like Nest. Sometimes you get this. I can't see how anyone will take them out at any valuation that isn't a fire sale price, but then I don't have any idea what IP they have under the hood. But second thing, real quick, and this is not scientific at all, just anecdotal: I think I need to get these Meta Ray-Bans and test them out. All over social media, people are quietly being like, these things are actually useful. These things actually

work. I'm getting more and more bullish about super lightweight smart glasses being part of an AI-embedded wearable ecosystem, where the glasses are maybe more of a linchpin to the system than even earbuds. Google already plans to test search and shopping ads on those AI Overviews. They'll be drawing from advertisers' existing campaigns. AI Overviews, remember,

rolled out to US users just last week. Quoting Wired: Screenshots released by Google show a user asking how to get wrinkles out of clothes might get an AI-generated summary of tips sourced from the web, with a carousel of ads underneath for sprays that purport to help crisp up a wardrobe. AI Overviews will draw on ads from advertisers' existing campaigns, meaning they can neither completely opt out of the experiment nor have to adapt the

settings and designs of their ads to appear in the feature. There's no action needed from advertisers, Google wrote. Google said last year, when it started experimenting with AI-generated answers in search, that ads for specific products would be integrated into the feature. In one example at the time, it showed a sponsored option at the top of an AI-generated list of kids' hiking backpacks. Google says the early testing showed that users found ads above

and below AI summaries helpful. Google's much smaller rival Bing shows product ads in its Copilot search chatbot, but in tests on Monday, Wired didn't trigger any ads in Bing's competitor to AI Overviews. No matter how ads in AI Overviews perform, conventional search ads will remain important to Google. For one, AI-generated answers appear only on

select queries, when its algorithms determine a summary could be helpful. That means Google will be serving up plenty of results pages with real estate for traditional search ads. One way to think about this: for all we know, ads on AI results will perform better than traditional search ads for Google. You know, based on what was once the ten blue links, over the years Google has flooded search results with ads to the point where already

it can sometimes be hard to find the organic results among all the ads. So what if it's just, here's your summary answer and also five ads? Is that really functionally different than what we get now? Just now, Google doesn't even have to pretend to give a sop to web pages. They can just give you the answer and the ads, and it's almost become the platonic ideal of what they've been moving toward for years. Given that it's Google, I'm sure they've

tested this out heavily, about a trillion times. So what I'm saying is, I wonder if they already know this new format performs better for them. We've spoken a lot recently about Apple needing to give Siri an AI kick in the pants. But what about the matriarch of the voice

assistants, Alexa? Well, CNBC is reporting that, quote, Amazon is upgrading its decade-old Alexa voice assistant with generative artificial intelligence and plans to charge a monthly subscription fee to offset the cost of the technology, according to people with knowledge

of Amazon's plans. The Seattle-based tech and retail giant will launch a more conversational version of Alexa later this year, potentially positioning it to better compete with new generative AI-powered chatbots from companies including Google and OpenAI, according to two sources familiar with the matter, who asked not to be named because the discussions were private. Amazon's subscription for Alexa will not be included in the $139-per-year Prime

offering, and Amazon has not yet nailed down the price point, one source said. Amazon will use its own large language model, Titan, in the Alexa upgrade, according to a source. End quote. So, upgrading it, but looking to charge for it. That squares with an idea that I've heard bandied about recently in this AI moment: what if the model is the product? Not just as an API developers can tap into to make other products, but the model itself as

a consumer-facing product. I mean, ChatGPT is basically trying to do that, has been doing that, for almost two years now, but a lot of people are starting to wonder if, with these Her-like conversational advancements that we've seen recently, maybe the original dream of Alexa is the way to go for a mainstream breakthrough. Whether you're selling a little or a lot, Shopify helps you do your thing, however you cha-ching. As you know, I still

run the first company I ever founded 25 years ago entirely on Shopify these days. Shopify is the global commerce platform that helps you sell at every stage of your business, from the launch-your-online-shop stage, to the first-real-life-store stage, all the way to the did-we-just-hit-a-million-orders stage. Shopify is there to help you grow the whole way,

whether you're selling scented soap or offering outdoor outfits. Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system. Wherever and whatever you're selling, Shopify's got you covered. Shopify helps you turn browsers into buyers with the internet's best-converting checkout, 36% better on average compared to other leading commerce platforms, and sell more with less effort, thanks to Shopify Magic,

your AI-powered all-star. What I love about Shopify is that you can take any business to the next level, even 25-year-old ones, but especially 25-day-old ones. Sign up for a $1-per-month trial period at Shopify.com slash ride, all lowercase. Go to Shopify.com slash ride now to grow your business, no matter what stage you're in. Shopify.com slash ride. Hey, Brian here. I've come across a podcast that I think is definitely worth your time.

It's called Pivotal with Hayete Gallo. This podcast shares stories from innovators who are building the future with AI, the cloud, analytics, machine learning, and more. Hayete has a front-row seat to all this innovation as Microsoft's corporate vice president for commercial solutions. She works with Microsoft customers to solve their most pressing business challenges. Over the years, she's uncovered some incredible stories of how people are using technology

and AI to make a big impact for their industries. Her podcast features guests from major companies like REI and Accenture, nonprofits like USA Surfing, and influencers like Arianna Huffington. The common theme across Hayete's podcast is that when a person combines their passion with technology, that is the recipe for driving a pivotal change. It's very cool stuff, and I encourage you to find Pivotal and follow for the latest episodes wherever you get

your podcast fix. Finally today, one of the fascinating background details of the AI moment is that, on certain fundamental levels, we kind of don't know how it does what it does. To that end, Anthropic researchers have detailed their attempts to peer inside the so-called black box of LLMs, learning which combinations of neurons evoke specific concepts. Quoting Wired: For the past decade, AI researcher Chris Olah has been obsessed with artificial neural

networks. One question in particular engaged him, and this has been the center of his work, first at Google Brain, then OpenAI, and today at AI startup Anthropic, where he is a co-founder: what is going on inside of them? We have these systems, we don't know what's going on, he says. It seems crazy. That question has become a core concern now that generative

AI has become ubiquitous. Large language models like ChatGPT, Gemini, and Anthropic's own Claude have dazzled people with their language prowess and infuriated people with their tendency to make things up. Their potential to solve previously intractable problems

enchants techno-optimists, but LLMs are strangers in our midst. Even the people who build them don't know exactly how they work, and massive effort is required to create guardrails to prevent them from churning out bias, misinformation, and even blueprints for deadly chemical weapons. If the people building the models knew what happened inside these black boxes, it would be easier to make them safer. Olah believes that we're on the path to this. He leads an

Anthropic team that has peeked inside that black box. Essentially, they are trying to reverse engineer large language models to understand why they come up with specific outputs. And according to a paper released today, they have made significant progress. Maybe you've seen neuroscience studies that interpret MRI scans to identify whether

a human brain is entertaining thoughts of a plane, a teddy bear, or a clock tower. Similarly, Anthropic has plunged into the digital tangle of the neural net of its LLM Claude and pinpointed which combinations of its crude artificial neurons evoke specific concepts, or features. The company's researchers have identified combinations of artificial neurons that signify features as disparate as burritos, semicolons in programming code, and, very

much to the larger goal of the research, deadly biological weapons. Work like this has potentially huge implications for AI safety: if you can figure out where danger lurks inside an LLM, you can presumably better equip yourself to stop it. Last year, the team began experimenting with a tiny model that uses only a single layer of neurons. (Sophisticated LLMs have dozens of layers.) The hope was that in the simplest possible setting, they could discover patterns

that designate features. They ran countless experiments with no success. We tried a whole bunch of stuff, and nothing was working. It looked like a bunch of random garbage, says Tom Henighan, a member of Anthropic's technical staff. Then a run dubbed Johnny (each experiment was assigned a random name) began associating neural patterns with concepts that appeared in its outputs. Suddenly, the researchers could identify the features a group of neurons

were encoding. They could peer into the black box. Henighan says he identified the first five features he looked at. One group of neurons signified Russian texts. Another was associated with mathematical functions in the Python computer language, and so on. Once they showed they could identify features in the tiny model, the researchers set about the hairier task of decoding a full-size LLM in the wild. They used Claude Sonnet, the

medium-strength version of Anthropic's three current models. That worked, too. One feature that stuck out to them was associated with the Golden Gate Bridge. They mapped out the set of neurons that, when fired together, indicated that Claude was thinking, in quotes, about

the massive structure that links San Francisco to Marin County. What's more, when similar sets of neurons fired, they evoked subjects that were Golden Gate Bridge-adjacent: Alcatraz, California Governor Gavin Newsom, and the Hitchcock movie Vertigo, which is set in San Francisco. All told, the team identified millions of features, a sort of Rosetta Stone to decode Claude's neural net.
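
The method behind this, per Anthropic's paper, is dictionary learning with sparse autoencoders trained on the model's internal activations. As a rough illustration only, here is a toy numpy sketch of that idea; the sizes, the synthetic data, and the training loop are invented and vastly simpler than the real setup.

    # Toy sketch of dictionary learning with a sparse autoencoder: learn to
    # reconstruct "activations" through a wider ReLU layer with an L1 penalty,
    # so each input is explained by a few features. Synthetic data and sizes
    # are invented; the real work trains on Claude's activations.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_features, n_samples = 16, 64, 1000

    # Fake activations: sparse combinations of hidden ground-truth directions.
    true_dirs = rng.normal(size=(n_features, d_model))
    codes = rng.random((n_samples, n_features)) * (rng.random((n_samples, n_features)) < 0.05)
    acts = codes @ true_dirs

    W_enc = rng.normal(scale=0.1, size=(d_model, n_features))
    W_dec = rng.normal(scale=0.1, size=(n_features, d_model))
    lr, l1 = 1e-2, 1e-3

    for _ in range(500):
        h = np.maximum(acts @ W_enc, 0.0)       # sparse feature activations
        err = h @ W_dec - acts                  # reconstruction error
        g_h = err @ W_dec.T + l1 * np.sign(h)   # grad (up to a constant) of ||err||^2 + l1*|h|
        g_h[h <= 0] = 0.0                       # ReLU mask
        W_dec -= lr * (h.T @ err) / n_samples
        W_enc -= lr * (acts.T @ g_h) / n_samples

    h = np.maximum(acts @ W_enc, 0.0)
    print("mean features active per sample:", float((h > 1e-3).sum(axis=1).mean()))

The point of the sparsity penalty is that each activation vector gets explained by only a handful of dictionary entries, which is what makes the learned features individually interpretable.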

Many of the features were safety-related, including getting close to someone for some ulterior motive, discussion of biological warfare, and villainous plots to take over the world. The Anthropic team then took the next step, to see if they could use that information to change Claude's behavior. They began manipulating the neural net to augment or diminish certain concepts, a kind of AI brain surgery, with the potential to make LLMs safer and augment their power in selected areas.
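
Mechanically, that kind of augmenting or diminishing can be pictured as clamping a feature's direction in activation space: remove however much of the feature is currently present, then add it back at a chosen strength. Here's a toy numpy sketch of the operation; it is not Anthropic's code, and the dimensions are made up.

    # Toy sketch of "clamping" a feature: given a unit direction in
    # activation space, subtract its current contribution and re-add it
    # at a chosen strength.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 8
    feature = rng.normal(size=d_model)
    feature /= np.linalg.norm(feature)          # unit "Golden Gate" direction

    def clamp(acts: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
        current = acts @ direction              # how active the feature is now
        return acts - np.outer(current, direction) + strength * direction

    acts = rng.normal(size=(4, d_model))        # activations for 4 tokens
    print((clamp(acts, feature, 0.0) @ feature).round(3))   # ~0: suppressed
    print((clamp(acts, feature, 10.0) @ feature).round(3))  # ~10: amplified

Setting the strength to zero suppresses the concept; cranking it far above its natural range is what produces the kind of obsessive behavior described next.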

Let's say we have this board of features. We turn on the model, and one of them lights up, and we see, oh, it's thinking about the Golden Gate Bridge, says Shan Carter, an Anthropic scientist on the team. So now we're thinking, what if we put a little dial on all these? And what if we turn that dial? So far, the answer to that question seems to be that it's very important to turn the dial the right amount. By suppressing those features, Anthropic says, the model can produce safer

computer programs and reduce bias. For instance, the team found several features that represented dangerous practices, like unsafe computer code, scam emails, and instructions for making dangerous products. The opposite occurred when the team intentionally provoked those dicey combinations of neurons to fire: Claude turned out computer programs with dangerous buffer overflow bugs,

scam emails, and happily offered advice on how to make weapons of destruction. If you twist the dial too much, cranking it to 11 in the Spinal Tap sense, the language model becomes obsessed with that feature. When the research team turned up the juice on the Golden Gate feature, for example, Claude constantly changed the subject to refer to that glorious span. Asked what its physical form was, the LLM responded: I am the Golden Gate Bridge. My physical form is the iconic bridge

itself. When the Anthropic researchers amped up the feature related to hatred and slurs to 20 times its usual value, according to the paper, this caused Claude to alternate between racist screed and self-hatred, unnerving even the researchers. Given those results, I wondered whether Anthropic, intending to help make AI safer, might not be doing the opposite, providing a toolkit

that could also be used to generate AI havoc. The researchers assured me that there were other, easier ways to create those problems if a user were so inclined. Nothing more for you today. Talk to you tomorrow.

This transcript was generated by Metacast using AI and may contain inaccuracies.