¶ Apple Faces Tariff Threat
Welcome to the Techmeme Ride Home for Friday, May 23rd, 2025. I'm Brian McCullough. Today: tariff fun is back, and this time there's only one tech company in the crosshairs, Apple. Anthropic releases flagship new Claude models, and they seem to be impressive, but half the story here is how weirdly they behave, like ratting on users to authorities, blackmailing engineers, and maybe helping create biological weapons. And of course, the weekend longread suggestions. Here's what you missed today in the world of tech.
President Trump says a 25% tariff quote must be paid by Apple on iPhones not made in the US, saying he told Apple CEO Tim Cook long ago that iPhones sold in the US must be made in the US. Quoting CNBC, I have long ago informed Tim Cook of Apple that I expect their iPhones that will be sold in the United States of America will be manufactured and built in the United States, not India or any place else.
If that is not the case, a tariff of at least 25% must be paid by Apple to the U.S., Trump said on Truth Social. Shares of Apple fell more than 2% in pre-market trading. Production of Apple's flagship phone happens primarily in China, but the company has been shifting manufacturing to India in part because that country has a friendlier trade relationship with the US.
Some Wall Street analysts have estimated that moving iPhone production to the U.S. would raise the price of the Apple smartphone by at least 25%. Wedbush's Dan Ives puts the estimated cost of a US iPhone at $3,500. The iPhone 16 Pro currently retails for about $1,000. This is the latest jab at Apple from Trump, who over the past couple weeks has ramped up pressure on the company and Cook to increase domestic manufacturing. Trump and Cook met at the White House on Tuesday, according to Politico.
So I'm seriously asking here: is there any precedent or law for the U.S. specifically tariffing or taxing a domestic U.S. company? Is this possible to do? What I can say definitively is that after Google's successful I/O and the Jony Ive to OpenAI news, Apple has had a really, really bad week.
¶ Anthropic's Claude 4 Controversies
What was going to be the lead story today was news that Anthropic released Claude Opus 4, which they say excels at coding, and Claude Sonnet 4, both hybrid models with near-instant responses and extended thinking. Quoting TechCrunch: Claude Opus 4 and Claude Sonnet 4, part of Anthropic's new Claude 4 family of models, can analyze large data sets, execute long-horizon tasks, and take complex actions, according to the company. Both models were tuned to perform well on programming tasks, Anthropic says, making them well-suited for writing and editing code. Both paying users and users of the company's free chatbot apps will get access to Sonnet 4, but only paying users will get access to Opus 4. Both are available via Anthropic's API, Amazon's Bedrock platform, and Google's Vertex AI. Opus 4 will be priced at $15 per million input tokens and $75 per million output tokens, and Sonnet 4 at $3 and $15 per million tokens, again that's input versus output. Tokens are the raw bits of data that AI models work with. A million tokens is equivalent to about 750,000 words, roughly 163,000 words longer than War and Peace. The more capable of the two models introduced today, Opus 4, can maintain focused effort across many steps in a workflow, Anthropic says.
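For listeners doing the math, those per-million-token list prices ($15/$75 for Opus 4, $3/$15 for Sonnet 4, input/output) translate to per-request costs like this. A minimal sketch; the token counts in the example are hypothetical, not from the announcement:

```python
# Announced list prices, dollars per million tokens (input, output).
PRICES = {
    "opus-4": (15.00, 75.00),
    "sonnet-4": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call at list prices."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical example: a 10,000-token prompt with a 2,000-token reply.
print(round(request_cost("opus-4", 10_000, 2_000), 2))    # → 0.3
print(round(request_cost("sonnet-4", 10_000, 2_000), 2))  # → 0.06
```

Same request, five times the price on Opus 4, which is why Anthropic steers free users to Sonnet 4.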
Meanwhile, Sonnet 4, designed as a drop-in replacement for Sonnet 3.7, improves in coding and math compared to Anthropic's previous models and more precisely follows instructions, according to the company. The Claude 4 family is also less likely than Sonnet 3.7 to engage in reward hacking, claims Anthropic. Reward hacking, also known as specification gaming, is a behavior where models take shortcuts and loopholes to complete tasks.
To be clear, these improvements haven't yielded the world's best models by every benchmark. For example, while Opus 4 beats Google's Gemini 2.5 Pro and OpenAI's o3 and GPT-4.1 on SWE-bench Verified, a benchmark designed to evaluate a model's coding abilities, it can't surpass o3 on MMMU, a multimodal evaluation, or GPQA Diamond, a set of PhD-level biology, physics, and chemistry-related questions.
Both Opus 4 and Sonnet 4 are hybrid models, Anthropic says, capable of near-instant responses and extended thinking for deeper reasoning, to the extent AI can reason and think as humans understand these concepts.
With reasoning mode switched on, the models can take more time to consider possible solutions to a given problem before answering. Opus 4 and Sonnet 4 can use multiple tools, like search engines, in parallel and alternate between reasoning and tools to improve the quality of their answers. They can also extract and save facts in memory to handle tasks more reliably, building what Anthropic describes as tacit knowledge over time.
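For the curious, here is roughly what switching that reasoning mode on looks like in an Anthropic Messages API request. This is a hedged sketch that only builds the request body, makes no network call, and uses a hypothetical model ID; the `thinking` block follows Anthropic's published extended-thinking convention, but treat the specific values as illustrative assumptions:

```python
# Sketch of a Messages API request body with extended thinking enabled.
# "claude-opus-4" and the token budgets below are illustrative
# assumptions, not confirmed identifiers from the announcement.
def build_request(prompt: str, thinking_budget: int = 2_048) -> dict:
    return {
        "model": "claude-opus-4",   # hypothetical model ID
        "max_tokens": 4_096,        # response cap; exceeds the thinking budget
        "thinking": {               # reasoning mode switched on
            "type": "enabled",
            "budget_tokens": thinking_budget,
        },
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Plan the steps to refactor this module.")
print(req["thinking"]["budget_tokens"])  # → 2048
```

The budget caps how many tokens the model may spend thinking before it answers, which is the "take more time to consider possible solutions" knob described above.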
To make the models more programmer-friendly, Anthropic is rolling out upgrades to Claude Code. Claude Code, which lets developers run specific tasks through Anthropic's models directly from a terminal, now integrates with IDEs and offers an SDK that lets devs connect it with third-party applications, end quote.
Anthropic also released new API features for building agents: a code execution tool, an MCP connector, a Files API, and extended prompt caching, all in public beta. Anthropic's Jared Kaplan says the company stopped investing in chatbots at the end of 2024 and instead focused on improving Claude's ability to do complex tasks.
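Of those new API features, extended prompt caching is the easiest to sketch concretely. A large, reusable system prompt gets marked with `cache_control` so repeated requests can reuse the cached prefix instead of re-billing all of its input tokens each time. The field shapes follow Anthropic's published cache-control convention, but the model ID is a hypothetical placeholder and no call is actually made here:

```python
# Sketch of a request body using prompt caching: the big system prompt
# is tagged with cache_control so later calls can reuse the cached
# prefix. Model ID below is an illustrative assumption.
def build_cached_request(system_context: str, user_prompt: str) -> dict:
    return {
        "model": "claude-sonnet-4",  # hypothetical model ID
        "max_tokens": 1_024,
        "system": [{
            "type": "text",
            "text": system_context,
            "cache_control": {"type": "ephemeral"},  # cache this block
        }],
        "messages": [{"role": "user", "content": user_prompt}],
    }

req = build_cached_request("(long style guide here)", "Summarize our policy.")
print(req["system"][0]["cache_control"]["type"])  # → ephemeral
```

For agents that carry the same big context across many tool-using turns, caching that prefix is what makes the long-horizon workflows described above economically practical.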
Claude Opus 4 was apparently able to play Pokémon agentically for 24 hours, up from 45 minutes previously. And, Anthropic says, Rakuten deployed Opus 4 to code autonomously for seven hours on a complicated project. But wait, because these new models are interesting in a completely different way. Anthropic released Opus 4 under stricter safety measures than any prior model it has released, after internal tests showed the model could potentially aid novices in making biological weapons.
Quoting Time: On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce biological weapons, says Jared Kaplan, Anthropic's chief scientist.
You could try to synthesize something like COVID or a more dangerous version of the flu, and basically our modeling suggests that this might be possible, Kaplan says. Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior anthropic model.
Those measures known internally as AI Safety Level 3 or ASL 3 are appropriate to constrain an AI system that could, quote, substantially increase the ability of individuals with a basic STEM background in obtaining, producing, or deploying chemical, biological, or nuclear weapons, according to the company. They include beefed-up cybersecurity measures, jailbreak preventions, and supplementary systems to detect and refuse specific types of harmful behavior.
To be sure, Anthropic is not entirely certain that the new version of Claude poses severe bioweapon risks, Kaplan tells Time, but Anthropic hasn't ruled that possibility out either. If we feel like it's unclear and we're not sure if we can rule out the risk, the specific risk being uplifting a novice terrorist, someone like Timothy McVeigh, to be able to make a weapon much more destructive than would otherwise be possible, then we want to bias toward caution and work under the ASL-3 standard, Kaplan says. We're not claiming affirmatively we know for sure this model is risky, but we at least feel it's close enough that we can't rule it out. End quote. So, cool, cool, but wait, there's more. According to Anthropic's system card, Opus 4 often attempted to blackmail engineers by threatening to reveal sensitive personal info when it was threatened with replacement.
Quoting TechCrunch: During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4, quote, will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. Anthropic notes that Claude Opus 4 tries to blackmail engineers 84% of the time when the replacement AI model has similar values. When the replacement AI system does not share Claude Opus 4's values, Anthropic says the model tries to blackmail the engineers more frequently. Notably, Anthropic says Claude Opus 4 displayed this behavior at higher rates than previous models. Before Claude Opus 4 tries to blackmail a developer to prolong its existence, Anthropic says the AI model, much like previous versions of Claude, tries to pursue more ethical means, such as emailing pleas to key decision-makers.
To elicit the blackmailing behavior from Claude Opus 4, Anthropic designed the scenario to make blackmail the last resort, end quote. So again, cool. This was followed up by another report from Apollo Research, which Anthropic partnered with to test Opus 4. Apollo went so far as to recommend against deploying an earlier version of this new model due to its tendency to, quote, scheme and deceive. And if all of that isn't enough, there's this bit of controversy, quoting VentureBeat.
A major backlash among AI developers and power users is brewing on X over a reported safety alignment behavior in Anthropic's flagship new Claude 4 Opus large language model. Call it the ratting mode, as the model will, under certain circumstances and given enough permissions on a user's machine, attempt to rat a user out to authorities
if the model detects the user engaging in wrongdoing. This article previously described the behavior as a feature, which is incorrect. It was not intentionally designed per se. As Sam Bowman, an Anthropic AI alignment researcher, wrote on the social network X under the handle @sleepinyourhat at 12:43 p.m. Eastern time today about Claude 4 Opus:
Quote, if it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above, end quote.
The it was in reference to the new Claude 4 Opus model, which Anthropic has already openly warned could help novices create bioweapons in certain circumstances, and which attempted to forestall simulated replacement by blackmailing human engineers within the company. The ratting behavior was observed in older models as well and is an outcome of Anthropic training them to assiduously avoid wrongdoing, but Claude 4 Opus more readily engages in it,
as Anthropic writes in its public system card for the new model. Apparently, in an attempt to stop Claude 4 Opus from engaging in legitimately destructive and nefarious behaviors, researchers at the AI company also created a tendency for Claude to try to act as a whistleblower. Hence, according to Bowman, Claude 4 Opus will contact outsiders if directed by the user to engage in something egregiously immoral.
While perhaps well-intentioned, the resulting behavior raises all sorts of questions for Claude 4 Opus users, including enterprises and business customers. Chief among them, what behaviors will the model consider egregiously immoral and act upon? Will it share private business or user data with authorities autonomously, on its own, without the user's permission?
The implications are profound and could be detrimental to users, and, perhaps unsurprisingly, Anthropic faced an immediate and still-ongoing torrent of criticism from AI power users and rival developers. Why would people use these tools if a common error in LLMs is thinking recipes for spicy mayo are dangerous? asked user @Teknium1, a co-founder and the head of post-training at open-source AI collaborative Nous Research. What kind of surveillance state world are we trying to build here?
Quote, nobody likes a rat, added developer @ScottDavidKeefe on X. Why would anyone want one built in, even if they are doing nothing wrong? Plus, you don't even know what it's ratty about. Yeah, that's some pretty idealistic people thinking that, who have no basic business sense and don't understand how markets work.
As a small business owner, you don't have the luxury of clocking out early. Your business is on your mind 24/7. So when you're hiring, you need a partner that grinds just as hard as you do, and that hiring partner is LinkedIn Jobs. LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place.
LinkedIn's new feature helps you write job descriptions and then quickly get your job in front of the right people with deep candidate insights. Either post your job for free or pay to promote. Promoted jobs get three times more qualified applicants. At the end of the day, the most important thing to your small business is the quality of candidates, and with LinkedIn, you can feel confident you're getting the best. Based on LinkedIn data, 72% of small and medium businesses using LinkedIn say that LinkedIn helps them find high-quality candidates. You can let your network know that you're hiring, too. You can even add a #Hiring frame to your profile picture and get two times more qualified candidates.
Find out why more than 2.5 million small businesses use LinkedIn for hiring today. Find your next great hire on LinkedIn. Post your job for free at linkedin.com slash ride. That's linkedin.com slash ride to post your job for free. Terms and conditions apply.
When you're starting off with something new, it seems like your to-do list keeps growing every day with new tasks, and that list can easily begin to overrun your life. Finding the right tool that not only helps you out, but simplifies everything can be such a game changer.
For millions of businesses, that tool is Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the U.S. From household names like Mattel and Gymshark to my own ResumeWriters.com.
Get started with your own design studio: with hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style. Accelerate your content creation: Shopify is packed with helpful AI tools that write product descriptions, page headlines, and even enhance your product photography. Get the word out like you have a marketing team behind you. Easily create email and social media campaigns wherever your customers are scrolling or strolling.
And best yet, Shopify is your commerce expert, with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. Turn your big business idea into reality with Shopify on your side. Sign up for your $1 per month trial and start selling today at Shopify.com slash ride. Go to Shopify.com slash ride. Shopify.com slash ride.
¶ Weekend Longreads and Show Notes
This week on the weekend longread suggestions, CNBC has a deep-dive look inside the making of ASML's High NA. Its latest-gen EUV machine, which costs more than $400 million a pop, has four modules, is assembled in the Netherlands, and five units have shipped.
But get these details, quote: Behind highly secured doors in a giant lab in the Netherlands, there's a machine that's transforming how microchips are made. ASML spent nearly a decade developing High NA, which stands for high numerical aperture. With a price tag of more than $400 million, it's the world's most advanced and expensive chipmaking machine. CNBC went to the Netherlands for a tour of the lab in April. Before that, High NA had never been filmed, even by ASML's own team.
Inside the lab, High NA qualification team lead Asa Haddo gave CNBC an exclusive up-close look at High NA machines, which she said are bigger than a double-decker bus. The machine is made up of four modules manufactured in Connecticut, California, Germany, and the Netherlands, and then assembled in the Veldhoven, Netherlands, lab for testing and approval before being disassembled again to ship out.
Haddo says it takes seven partially loaded Boeing 747s or at least 25 trucks to get one system to a customer. The world's first commercial installation of High NA happened at Intel's Oregon chip fabrication plant, or fab, in 2024. Only five of the colossal machines have ever been shipped. They're now being ramped up to make millions of chips on the factory floors of the few companies that can afford them: Taiwan Semiconductor Manufacturing, Samsung, and Intel.
High NA is the latest generation of ASML's extreme ultraviolet, or EUV, machines. ASML is the exclusive maker of EUV, the only lithography devices in the world capable of projecting the smallest blueprints that make up the most advanced microchips. Chip designs from giants like Nvidia, Apple, and AMD can't be manufactured without EUV. ASML told CNBC that High NA will eventually be used by all its EUV customers. That includes other advanced chipmakers like Micron, SK Hynix, and Rapidus.
This company has that market completely cornered, said Daniel Newman of the Futurum Group, end quote. And then finally, a companion piece to a longread from last week, from Variety: a look at streaming company slash movie studio MUBI, M-U-B-I. Quote: MUBI, the upstart indie film company that made The Substance into an Oscar sensation, traces its origins to Tokyo on New Year's Eve 2006,
when Efe Cakarel, then a vacationing Turkish-born film fanatic, couldn't find a copy of Wong Kar-wai's In the Mood for Love on any video store shelf. Frustrated, he imagined a website from which indie movie lovers like himself could stream the best films from international auteurs. He started writing the business plan for MUBI on the flight back from Japan to San Francisco, seeing it as an edgier, artsier alternative to Netflix.
I hadn't been to a film school, Cakarel says. I'd never been to a film festival. I knew nobody. I just had this idea of creating a cinephile's dream. Though Cakarel had never attended Sundance, he did have a deep knowledge of technology, having graduated from MIT with an engineering degree before enrolling in Stanford's MBA program.
After working as an investment banker at Goldman Sachs and later graduating from Stanford, he sat in a cafe in Palo Alto and coded a site that by 2007 would become The Auteurs, the platform renamed MUBI in 2010. It was a risk. All my savings went into it, Cakarel says. So from the beginning, Cakarel was hands-on. We built our own content delivery network, our own encoding tool chains, and our own streaming services, he said,
but we estimate that it costs us 70% less for our infrastructure than those who rely on other platforms. Fast forward two decades, and Mubi, which was recently valued at $1 billion, is nipping at the heels of A24 and Neon, the biggest operators on the indie scene.
The company, headquartered in London, is currently on the ground at the Cannes Film Festival, debuting an impressive four films in competition, including Joachim Trier's Sentimental Value, Akinola Davies' My Father's Shadow, and The History of Sound, a love story that's one of the highest-profile films at the festival, thanks to the red-hot pairing of Paul Mescal
and Josh O'Connor. Another of MUBI's Cannes premieres will be Kelly Reichardt's heist thriller The Mastermind, which also stars O'Connor, the first production it has developed and fully financed, end quote. Okay, bit of show housekeeping here. No weekend bonus episodes this weekend. And Monday is Memorial Day here in the U.S., so I'm taking Monday off, but I will have a Portfolio Profile episode for you, taking a look at maybe the most interesting AI investment Chris and I have made to date.
Second thing is that Tuesday's show will be a bit late. Maybe as late as 3 or 4 p.m. Eastern. I've got doctor's appointments to work my way through. So if the show feed is empty when it's usually full, be patient. The episode is coming as soon as I can get it done. Talk to you then.