Wed. 11/01 – WeWork To The Deadpool?

Nov 01, 2023 · 16 min

Episode description

LinkedIn has an AI job coach for you. Netflix's ad tier is doing well, but is it doing well enough? You might have thought this already happened, but WeWork seems to be seriously circling the deadpool. A potentially big breakthrough for medicinal discovery via AI. And more on the evolving AI debate around open source and regulatory capture.


Transcript

Welcome to the Techmeme Ride Home for Wednesday, November 1st, 2023. I'm Brian McCullough. Today: LinkedIn has an AI job coach for you. Netflix's ad tier is doing well, but is it doing well enough? You might have thought this already happened, but WeWork seems to be seriously circling the deadpool. A potentially big breakthrough for medicinal discovery via AI, and more on that evolving AI debate around open source and regulatory capture. Here's what you missed today in the world of tech.

Since the AI moment began a year ago, you had to know something like this was coming eventually. LinkedIn this morning announced a GPT-4-powered AI chatbot aimed at being what they call a job seeker coach, available right now to premium users. As an aside, they also mentioned that LinkedIn now has more than 1 billion members. Quoting CNBC.

The new AI chatbot, which aims in part to help users gauge whether a job application is worth their time, is powered by OpenAI's GPT-4 and began rolling out to some premium users Wednesday. Microsoft has invested billions of dollars into OpenAI. Users of the new chatbot can launch it from a job posting by selecting one of a few questions, such as, am I a good fit for this job? And how can I best position myself for this job?

The former would prompt the tool to analyze a user's LinkedIn profile and experience, with answers like, your profile shows that you have extensive experience in marketing and event planning, which is relevant for this role. The chatbot will also point to potential gaps in a user's experience that could hurt them in the job application process. The quality of responses has to be really good, with the stakes being as high as they are here, so we don't want to take that lightly at all.

Gyanda Sachdeva, LinkedIn's vice president of product management, told CNBC. The user can also follow up by asking who works at the company, which will prompt the chatbot to send them a few employee profiles, potentially second- or third-degree connections, who the user can then message about the opportunity. The message itself can also be drafted using generative AI, end quote.

In a new shareholder letter, Netflix has revealed that its ad tier now has 15 million monthly active users globally, accounting for 30% of new signups where it is available, and they also announced plans for new ad formats in 2024, as we've been discussing. Quoting Variety.

The company says it will start to offer so-called title sponsorships to advertisers ready to align with the new reality series Squid Game: The Challenge, and also the final season of The Crown, as part of its bid to accelerate the utility of its ad-supported tier.

We want to shape the future of advertising on Netflix and help marketers tap into the amazing fandom generated by our must-watch shows and movies, says Amy Reinhard, newly installed as president of advertising at Netflix, in a prepared statement. Netflix is making a new bid to lure Madison Avenue to its offerings, even as many ad buyers say the company has yet to generate the scale necessary to win over their clients' ad dollars.

To be sure, Netflix says its ad-supported tier has won over 15 million monthly active users across the globe. In an October letter to shareholders, Netflix said advertising tier subscriptions accounted for approximately 30% of all new signups in the 12 countries that support that platform. Netflix said it was, quote, working with brands to create formats they will value, in particular the ability to connect with highly conversational and culturally relevant programming.

As recently as this past summer, executives suggested that Netflix ad-tier subscriber levels in the US were too small to meet guarantees the company might have made to early sponsors. In late October, Netflix said ad sales chief Jeremi Gorman, a veteran who was lured from Snap, was leaving the company, replaced by Reinhard, who had previously led studio operations.

Starting in the first quarter of 2024, Netflix will offer advertisers across the globe access to a new binge ad format that gives viewers who watch three consecutive episodes the chance to view a fourth without commercial interruptions. Users will be informed that a certain sponsor is giving them the chance to watch an episode without interruption. Netflix also plans to give advertisers the ability to use QR codes in commercials starting in early 2024.

The company plans to offer sponsorships that can be tied to a specific title, a thematic moment, or a livestream. PepsiCo's Frito-Lay, for example, has aligned its Smartfood popcorn snack with the most recent season of the reality series Love Is Blind. Netflix has also signed sponsors for the reality series Squid Game: The Challenge, as well as the final season of The Crown.

It's always a bit of a debate as to how much we should cover WeWork.

Is it really a technology company or just a real estate one? But given that so many startups and solo engineers, and maybe even you listening out there right now, use WeWork, I think it's worth noting that sources are telling the Wall Street Journal that WeWork plans to file for Chapter 11 bankruptcy as early as next week. WeWork's stock dropped more than 40% after hours on the news. Quoting the Journal.

WeWork missed interest payments owed to its bondholders on October 2nd, kicking off a 30-day grace period in which it needs to make the payments. Failing to do so would be considered an event of default. On Tuesday, the company said it had struck an agreement with the bondholders to allow it another seven days to negotiate with the stakeholders before a default is triggered.

In August, the company shook up its board after three directors resigned due to a material disagreement regarding board governance and the company's strategic direction, according to a securities filing. WeWork appointed four new directors with expertise in large, complex financial restructurings. Those directors have been negotiating with WeWork's creditors over the past several months about a restructuring plan as they prepare for the bankruptcy.

The flexible workspace provider has been aiming to renegotiate leases with landlords after signaling that it has substantial doubt about its prospects for survival. Chief Executive David Tolly said during a September conference call with landlords that WeWork's lease commitments must be right-sized to accommodate its operations in the current market because the office real estate market has fundamentally changed.

As of June, WeWork maintained 777 locations across 39 countries, including 229 locations in the U.S., according to securities filings. WeWork has an estimated $10 billion in lease obligations due from the second half of this year through the end of 2027, and an additional $15 billion starting in 2028, according to public filings.

The company burned through $530 million during the first six months of 2023 and had around $205 million of cash on hand as of June, according to securities filings. End quote.

DeepMind says its latest AlphaFold model can generate predictions for nearly all molecules in the Protein Data Bank, and for ligands, nucleic acids, and more. Quoting TechCrunch, to try to tell you why this is such a big deal.

Nearly five years ago, DeepMind, one of Google's more prolific AI-centered research labs, debuted AlphaFold, an AI system that can accurately predict the structures of many proteins inside the human body. Since then, DeepMind has improved on the system, releasing an updated and more capable version, AlphaFold 2, in 2020.

Today, DeepMind revealed that the newest release of AlphaFold, the successor to AlphaFold 2, can generate predictions for nearly all molecules in the Protein Data Bank, the world's largest open-access database of biological molecules.

Already, Isomorphic Labs, a spin-off of DeepMind focused on drug discovery, is applying the new AlphaFold model, which it co-designed, to therapeutic drug design, according to a post on the DeepMind blog, helping it characterize different types of molecular structures important for treating disease. The new AlphaFold's capabilities extend beyond protein prediction.

DeepMind claims that the model can also accurately predict the structure of ligands, molecules that bind to receptor proteins and cause changes in how cells communicate, as well as nucleic acids, molecules that contain key genetic information, and post-translational modifications, chemical changes that occur after a protein is created. Predicting protein-ligand structures can be a useful tool in drug discovery.

DeepMind notes it can help scientists identify and design new molecules that could become drugs. Currently, pharmaceutical researchers use computer simulations known as docking methods to determine how proteins and ligands will interact. Docking methods require specifying a reference protein structure and a suggested position on that structure for the ligand to bind to. With the latest AlphaFold, however, there's no need to use a reference protein structure or suggested position.

The model can predict proteins that haven't been structurally characterized before, while at the same time simulating how proteins and nucleic acids interact with other molecules, a level of modeling that DeepMind says isn't possible with today's docking methods. The newest AlphaFold isn't perfect, though.

In a white paper detailing the system's strengths and limitations, researchers at DeepMind and Isomorphic Labs reveal that the system falls short of the best-in-class method for predicting the structures of RNA molecules, the molecules in the body that carry the instructions for making proteins. Doubtless, both DeepMind and Isomorphic Labs are working to address this. End quote. Today's podcast is sponsored by Nutrisense.

The Nutrisense biosensor is a small device that you put on the back of your arm that then provides real-time feedback on how your body responds to the foods that you're eating, your exercise, your stress, even your sleep. With Nutrisense, you just take a photo of your meal, adjust for portion size, and Nutrisense does the rest. Nutrisense helps you track your data, see your glucose trends, and understand your macro-nutrient breakdown for each meal.

You also get an overall glucose score for each meal based on your body's response. You'll also be matched with a board-certified nutritionist who will review your data and answer all your questions, plus they can help you with a personalized nutrition plan so that you can achieve your goals. You should try Nutrisense today. It will open your eyes in a profound way to how your food, exercise, and lifestyle choices are affecting you.

What's more, it empowers you with real-time feedback loops showing the consequences of your food and lifestyle choices. Visit Nutrisense.com slash ride and use code RIDE to start decoding your body's messages and pave the way for a healthier life. Be sure to tell them you learned about Nutrisense on the Techmeme Ride Home podcast. That's Nutrisense.com slash ride to save $30 off your first month, plus get a month of board-certified nutritionist support. Freelance work is booming.

So many people are taking the leap and starting their own businesses. But how do you maximize your earnings, minimize your taxes, and make sure that you're legally compliant? It's overwhelming, it's confusing, and it takes time away from your own billable hours. Collective is the all-in-one financial solution for businesses of one. They handle all of your business formation and compliance paperwork, your taxes, bookkeeping, accounting, even payroll.

Plus, if you're already an LLC, Collective can retroactively elect your S-Corp tax status back to July 1st, which could save you thousands on your 2023 taxes. In fact, Collective members save an average of $10,000 per year on taxes with this structure. A Collective membership pays for itself within just a few months, and it's 100% tax deductible. Check out Collective.com slash ride before October 31st to potentially save thousands of dollars on your 2023 taxes.

To sweeten the deal, they'll also throw in an extra $100 off when you use my link. But you have to do this before October 31st. That's Collective.com slash ride to get started with your personal team of self-employed tax experts. Collective.com: focus on your passion, not your paperwork. And finally today, back to that evolving debate we discussed yesterday around AI and open source. Because as I say, the debate has been fierce.

Yann LeCun has joined the chorus of those warning of regulatory capture, saying that calls for rules and regulations around AI would effectively entrench the incumbents' current position in the space. And speaking of DeepMind, DeepMind CEO Demis Hassabis pushed back on claims by Meta's LeCun that he, Sam Altman and Dario Amodei are fear-mongering to achieve AI regulatory capture. Quoting CNBC.

In an interview with CNBC's Arjun Kharpal, Hassabis said that DeepMind wasn't trying to achieve regulatory capture when it came to the discussion on how best to approach AI. It comes as DeepMind is closely informing the UK government on its approach to AI ahead of a pivotal summit on the technology due to take place on Wednesday and Thursday.

Over the weekend, Yann LeCun, Meta's chief AI scientist, said that DeepMind's Hassabis, along with OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, were, quote, doing massive corporate lobbying to ensure only a handful of big tech companies end up controlling AI. He also said they were giving fuel to critics who say that highly advanced AI systems should be banned to avoid a situation where humanity loses control of the technology.

If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI, LeCun said on X, the platform formerly known as Twitter, on Sunday. Like many, I very much support open AI platforms because I believe in a combination of forces: people's creativity, democracy, market forces, and product regulations. I also know that producing AI systems that are safe and under our control is possible.

I've made concrete proposals to that effect. LeCun is a big proponent of open-source AI, or AI software that is openly available to the public for research and development purposes. This is opposed to closed AI systems, the source code of which is kept secret by the companies producing it.

LeCun said that the vision of AI regulation Hassabis and other AI CEOs are aiming for would see open-source AI, quote, regulated out of existence, and allow only a small number of companies from the West Coast of the US and China to control the technology. Meta is one of the largest technology companies working to open source its AI models.

The company's Llama large language model is one of the biggest open-source AI models out there and has advanced language-translation features built in. In response to LeCun's comments, Hassabis said Tuesday, quote, I pretty much disagree with most of those comments from Yann. I think the way we think about it is there's probably three buckets of risks that we need to worry about, said Hassabis.

There's sort of near-term harms, things like misinformation, deepfakes, these kinds of things, bias and fairness in the systems, that we need to deal with. Then there's sort of the misuse of AI by bad actors, repurposing technology, general-purpose technology, for bad ends that it was not intended for. That's a question about proliferation of these systems and access to these systems. We have to think about that.

Then finally, I think about the more longer-term risk, which is technical AGI, or artificial general intelligence, risk, Hassabis said. The risk of the systems themselves: making sure they're controllable, what values do you want to put into them, how do you set these goals and make sure that they stick to them. End quote.

Hassabis is a big proponent of the idea that we will eventually achieve a form of artificial intelligence powerful enough to surpass humans in all tasks imaginable, something that's referred to in the AI world as artificial general intelligence.

Meanwhile, remember how recently there was that executive order from the president with regards to AI? Over at AI Snake Oil, Arvind Narayanan and Sayash Kapoor go into great detail about that executive order; you can read the piece for the whole breakdown, piece by piece, if you want. But I wanted to focus on this one section of it. The executive order does include a requirement to report to the government any AI training runs that are deemed large enough to pose a serious security risk.

And developers must report various other details, including the results of any safety evaluations, red teaming, that they performed. Further, cloud providers need to inform the government when a foreign person attempts to purchase computational services that suffice to train a large enough model. It remains to be seen how useful the registry will be for safety.

It will depend in part on whether the compute threshold, under which any training run involving over 10 to the 26th power mathematical operations is covered, serves as a good proxy for potential risk, and on whether the threshold can be replaced with a more nuanced determination that evolves over time. One obvious limitation is that once a model is openly released, fine-tuning can be done far more cheaply and can result in a model with very different behaviors. Such models won't need to be registered.
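To put that 10-to-the-26th threshold in perspective, here's a quick back-of-the-envelope sketch in Python. It uses the common rule of thumb that training a dense transformer costs roughly 6 times parameters times tokens in floating-point operations; the model sizes in it are illustrative assumptions, not figures from the executive order or the article.

```python
# Back-of-the-envelope check against the executive order's reporting
# threshold of 1e26 mathematical operations, using the widely cited
# ~6 * parameters * tokens approximation for dense-transformer training.

REPORTING_THRESHOLD = 1e26  # operations, per the executive order


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens


def must_report(n_params: float, n_tokens: float) -> bool:
    """Would a run of this size cross the reporting threshold?"""
    return estimate_training_flops(n_params, n_tokens) > REPORTING_THRESHOLD


# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.1e} operations, must report: {must_report(70e9, 2e12)}")
```

By that estimate, such a model lands around 8.4 times 10 to the 23rd operations, more than two orders of magnitude under the threshold, which squares with the observation that open models to date fall well below the reporting line.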

There are many other potential ways for developers to architect around the reporting requirement if they choose to. In general, we think it is unlikely that a compute threshold, or any other predetermined criterion, can effectively anticipate the riskiness of individual models. But in aggregate, the reporting requirement could give the government a better understanding of the landscape of risks. The effects of the registry will also depend on how it is used.

On the one hand, it might be a stepping stone for licensing or liability requirements, but it might also be used for purposes more compatible with openness, which we discuss below. The registry itself is not a deal breaker for open foundation models; all open models to date fall well below the compute threshold of 10 to the 26th power operations. It remains to be seen if the threshold will stay frozen or change over time.

If the reporting requirements prove to be burdensome, developers will naturally try to avoid them. It might lead to a two-tier system for foundation models: frontier models, whose size is unconstrained by regulation, and sub-frontier models that try to stay just under the compute threshold to avoid reporting.

So again, that there at the end gets into that whole idea that disruptors from below could be effectively hampered in the name of safety. But it's also led people online to ask: what is this, is the government about to get into the business of telling us how much, and importantly how powerfully, we can compute? That seems like crazy Big Brother looking-over-your-shoulder overreach. Or maybe not. Again, not taking sides on this, just presenting the arguments as I've seen them.

Thank you for today, talk to you tomorrow.

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.