Welcome to the Techmeme Ride Home for Friday, April 11th, 2025. I'm Brian McCullough. Today: Europe says it could tax social media ads if the tariff wars get really nasty. Why couldn't Apple make an iPhone in the US, as President Trump wants? Is OpenAI cutting corners on safety in order to stay ahead in the AI race? And, as always, the weekend longread suggestions. Here's what you missed today in the world of tech.
Up, down, up, down. It's basically impossible for us to talk about the turmoil in tech stocks, as anything I say will be out of date by the time it gets to you. But a couple of nuggets are worth sharing. First up: European Commission President von der Leyen has said the EU may tax big tech ad revenue, targeting Meta and Google, if Trump trade talks fail.
Thanks to many of you who tweeted this at me overnight. Yes, this echoes my essay where other countries could treat our social media like we've been treating TikTok. Quoting the FT, European Commission President Ursula von der Leyen told the Financial Times that the EU would seek a completely balanced agreement with Washington during Trump's 90-day pause in applying additional tariffs.
but the commission president warned she was ready to dramatically expand the transatlantic trade war to services if those talks failed, potentially including a tax on digital advertising revenues that would hit tech groups such as Meta and Google. We are developing retaliatory measures, von der Leyen said, explaining these could include the first use of the bloc's anti-coercion instrument, with the power to hit services exports.
There's a wide range of countermeasures in case the negotiations are not satisfactory, she said. She said this could include tariffs on the services trade between the U.S. and the EU, stressing the exact measures would depend on the outcome of talks with Washington. An example is you could put a levy on the advertising revenues of digital services, she said.
The measure would be a tariff applied across the single market. This differs from digital sales taxes, which are imposed individually by member states, end quote.
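To make that distinction concrete, here's a minimal sketch in Python, with entirely hypothetical figures, of the difference between one levy applied uniformly across the single market and digital sales taxes set country by country:

```python
# Hypothetical illustration of how a single-market levy on digital ad
# revenue differs from per-country digital sales taxes. All figures
# are made up for the example.

def single_market_levy(eu_ad_revenue_bn: float, rate: float) -> float:
    """One levy applied uniformly across the EU single market."""
    return eu_ad_revenue_bn * rate

def member_state_taxes(revenue_by_state_bn: dict, rates: dict) -> float:
    """Digital sales taxes set individually by each member state."""
    return sum(revenue_by_state_bn[s] * rates[s] for s in revenue_by_state_bn)

# Hypothetical: 50 billion euros of EU ad revenue, 3% uniform levy
print(single_market_levy(50.0, 0.03))  # 1.5 (billion euros)

# Versus a patchwork of national rates on the same revenue:
# only some states levy a digital sales tax in this sketch
revenue = {"FR": 20.0, "IT": 15.0, "DE": 15.0}
rates = {"FR": 0.03, "IT": 0.03, "DE": 0.0}
print(member_state_taxes(revenue, rates))  # 1.05 (billion euros)
```

The point of the sketch: a single-market levy captures all EU revenue at one rate, while member-state taxes only reach revenue in countries that have enacted one.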
So they're saying it's not just that you could tax gadgets; you could tax the ad revenue of tech platforms as well. Meanwhile, Mark Gurman takes a look at the whole question of whether Apple could ever produce an entire iPhone here in the U.S., and, well, quote, Apple is unlikely to move iPhone production to the US in the foreseeable future for a variety of reasons, including the shortage of facilities and labor needed to produce the devices.
Moreover, the country lacks the rich ecosystem of suppliers, manufacturing, and engineering know-how that, for now, can only be found in Asia. The company also is more focused on turning India into its new source of US-bound iPhones. Apple's partners are building the world's second-largest iPhone plant in that country, decreasing the company's reliance on China. Apple's biggest FATP facilities, short for final assembly, test, and pack-out, are massive and incomprehensible to many people outside of Asia. They are almost towns unto themselves, with several hundred thousand people, schools, gyms, medical facilities, and dormitories. One major iPhone factory, a complex in Zhengzhou, has even been dubbed iPhone City.
What city in America is going to put everything down and build only iPhones? Because there are millions of people employed by the Apple supply chain in China, said Matthew Moore, the co-founder of a startup and a former Apple manufacturing engineer. Boston is over 500,000 people. The whole city would need to stop everything and start assembling iPhones.
In addition to its lock on the manufacturing operations, China is home to millions of people educated in operating machinery and the skills needed to build iPhones, a process that still requires a lot of manual work. The engineering support to run a factory is not in America, Moore said. There just aren't enough students studying STEM, or science, technology, engineering, and math, he said.
Chief Executive Officer Tim Cook laid out the reasons for relying so heavily on China during a Fortune magazine event in 2017, saying it wasn't because of low labor costs. China stopped being the low labor cost country many years ago, he said. The reason is because of the skill and quality of skill in one location.
You could fill multiple football fields with state-of-the-art tooling engineers in China, Cook said at the time. In the U.S., you could have a meeting of tooling engineers, and I'm not sure we could fill this room. One popular counterpoint is that Apple should use its cash hoard to buy thousands of acres in the U.S. and create a fully robotic and automated iPhone manufacturing facility that would remove any human-related challenges from the manufacturing process.
Commerce Secretary Howard Lutnick said as much in an interview with CBS, suggesting that, quote, the army of millions and millions of human beings is going to be automated. But that's not yet realistic, according to supply chain experts and people who have worked on Apple product manufacturing. Even China, which has access to lower-cost automation, hasn't been able to make such a vision work.
The pace of iPhone development also makes it harder to automate processes because they can frequently change, they said. Much of the equipment needed for production is made in China as well. While the look of the iPhone hasn't changed meaningfully in years, new materials and internal components, and even the smallest of tweaks to the design, require the company to retool the assembly lines overseas.
You design the thing, rebuild the factory, and then you only have six months to sell it, according to a person with knowledge of Apple's supply chain who asked not to be identified. The pace of change makes it so much harder to automate, end quote. OpenAI has rolled out a ChatGPT memory feature that references past chats for answers, starting with ChatGPT Pro and Plus subscribers, but...
not if you're in the UK or Europe. Quoting TechCrunch: The company says the feature, which appears in ChatGPT settings as "reference saved memories," aims to make conversations with ChatGPT more relevant to users. The update will add conversational context to ChatGPT's text, voice, and image generation features, the company added. The new memory feature will roll out first to ChatGPT Pro and Plus subscribers, except for those based in the UK, EU, Iceland, Liechtenstein, Norway, and Switzerland.
OpenAI says these regions require additional external reviews to comply with local regulations, but the company is committed to making its technology available there eventually. OpenAI didn't have news to share on a launch for free ChatGPT users. We are focused on the rollout to paid tiers for now, a spokesperson told TechCrunch.
The aim of the new memory feature is to make ChatGPT more fluid and personal: you won't have to repeat information you've already shared with ChatGPT. In February, Google rolled out a similar memory feature in Gemini. Not every user will be thrilled with the notion of OpenAI vacuuming up more of their info, of course. Fortunately, there's an opt-out. In ChatGPT settings, users can choose to turn off the new memory feature, as well as manage specific saved memories.
OpenAI says you can also ask ChatGPT what it remembers, or switch to a temporary chat for conversations that won't get stored, end quote. Meanwhile, sources say OpenAI recently gave staff and third-party groups just days, versus the several months they had before, to evaluate risks and performance of its latest models. Quoting the FT: Staff and third-party groups have recently been given just days to conduct evaluations, the term given to tests for assessing models' risks and performance, on OpenAI's latest large language models, compared to several months previously. According to eight people familiar with OpenAI's testing process, the startup's tests have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300 billion startup comes under pressure to release new models quickly and retain its competitive edge.
We had more thorough safety testing when the technology was less important, said one person currently testing OpenAI's upcoming o3 model, designed for complex tasks such as problem solving and reasoning. They added that as LLMs become more capable, the potential weaponization of the technology increases. But because there is more demand for it, they want it out faster. I hope it is not a catastrophic misstep, but it is reckless. This is a recipe for disaster, end quote.
The time crunch has been driven by competitive pressures, according to people familiar with the matter, as OpenAI races against big tech groups such as Meta and Google, and startups including Elon Musk's xAI, to cash in on the cutting-edge technology. There is no global standard for AI safety testing, but from later this year, the EU's AI Act will compel companies to conduct safety tests on their most powerful models. Previously, AI groups, including OpenAI, have signed voluntary commitments with governments in the UK and US to allow researchers at AI safety institutes to test models. OpenAI has been pushing to release its new model o3 as early as next week, giving some testers less than a week for their safety checks, according to people familiar with the matter. This release date could be subject to change.
Previously, OpenAI allowed several months for safety tests. For GPT-4, which was launched in 2023, testers had six months to conduct evaluations before it was released, according to people familiar with the matter. One person who had tested GPT-4 said some dangerous capabilities were only discovered two months into testing. They are just not prioritizing public safety at all, they said of OpenAI's current approach.
There's no regulation saying companies have to keep the public informed about all the scary capabilities. And also, they're under lots of pressure to race each other, so they're not going to stop making them more capable, said Daniel Kokotajlo, a former OpenAI researcher who now leads the nonprofit group AI Futures Project, end quote. There is a growing expense eating into your company's profits: your cloud computing bill.
You may have gotten a deal to start, but now the spend is sky high and increasing every year. What if you could cut your cloud bill in half and improve performance at the same time? Well, if you act by May 31st, Oracle Cloud Infrastructure can help you do just that. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI projects, faster and more securely for less.
In fact, Oracle has a special promotion where you can cut your cloud bill in half when you switch to OCI. The savings are real. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI and saved. Offer only for new U.S. customers with a minimum financial commitment. See if you qualify for half off at oracle.com slash techmeme. That's oracle.com slash techmeme.
Ready to optimize your nutrition this year? Factor has chef-made gourmet meals that make eating well easy. They're dietitian-approved and ready to heat and eat in two minutes, so you can fuel right and feel great no matter what life throws at you. Factor arrives fresh and fully prepared, perfect for any active, busy lifestyle. Lose up to 8 pounds in 8 weeks with Factor Keto meals. Based on a randomized, controlled clinical trial with Factor Keto; results will vary depending on diet and exercise.
With 40 options across 8 dietary preferences on the menu each week, it's easy to pick meals tailored to your goals. Choose from preferences like Calorie Smart, Protein Plus, or Keto. Factor can help you feel your best all day long with wholesome smoothies, breakfasts, grab-and-go snacks, and more add-ons. We love Factor. As I say, it's my wife's daily lunch solution in her office. Eat smart with Factor. Get started at factormeals.com slash factorpodcast and use code FACTORPODCAST to get 50% off your first box plus free shipping. That's code FACTORPODCAST at factormeals.com slash factorpodcast to get 50% off plus free shipping on your first box. Business Insider says that ex-OpenAI CTO Mira Murati's Thinking Machines Lab startup is raising upward of $2 billion at a greater than $10 billion valuation, after previously seeking only around $1 billion at a $9 billion valuation as recently as February.
Quote, the increased amount reflects intense investor enthusiasm for generative AI and the fact that there are a very limited number of people with the expertise of Murati and the team she has assembled. It's also extremely expensive to train AI models and to recruit and retain top talent. Bob McGrew, OpenAI's former chief research officer, and the researcher Alec Radford recently joined Thinking Machines.
Several of Murati's other former co-workers are working for Thinking Machines, including John Schulman, who co-led the creation of ChatGPT; Jonathan Lachman, formerly the head of special projects at OpenAI; Barret Zoph, a co-creator of ChatGPT; and Alexander Kirillov, who worked closely with Murati on ChatGPT's voice mode.
Murati spent six and a half years at OpenAI, where she worked on the development of ChatGPT and other AI research initiatives. She was briefly appointed interim CEO in November 2023, after OpenAI's board abruptly fired Sam Altman, a move that sparked turmoil within the company. After Altman's reinstatement as CEO, Murati resumed her role as CTO.
It has been a mystery what exactly Thinking Machines will do to distinguish itself in a crowded and well-funded field that includes not only OpenAI, but also Anthropic, Elon Musk's xAI, and Google's Gemini. In a blog post earlier this year, Murati positioned the startup as an artificial intelligence research and product lab focused on making AI more accessible.
To bridge the gaps, we're building Thinking Machines Lab to make AI systems more widely understood, customizable, and generally capable, the post said. Even by the standards of today's frothy AI market, $2 billion of funding for a startup less than a year old with no product is a gargantuan sum, and would almost certainly rank as one of the largest, if not the largest, seed rounds in history. Last year, OpenAI co-founder Ilya Sutskever raised a $1 billion seed round for yet another AI startup, Safe Superintelligence, end quote. Time for the weekend longread suggestions. And first up this week, from IEEE Spectrum, a huge piece looking at the state of AI in the year of our Lord 2025. And when I say huge, I mean huge.
Graphs aplenty, answering questions like: Is the US still ahead, or is China catching up? Are training costs still high, or coming down? Is the cost of using AI going down? The answer on that one, quote: The ever-increasing cost of training most AI models risks obscuring a few positive trends that the report highlights. Hardware costs are down, hardware performance is up, and energy efficiency is up. That means inference costs, or the expense of querying a trained model, are falling dramatically.
This chart, which is on a logarithmic scale, shows the trend in terms of AI performance per dollar. The report notes that the blue line represents a drop from $20 per million tokens to 7 cents per million tokens. The pink line shows a drop from $15 to 12 cents in less than a year's time, end quote.
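To put those quoted figures in perspective, here's a quick back-of-the-envelope sketch in Python of how many times cheaper inference got, using only the per-million-token prices from the report as quoted above:

```python
# Sanity check on the reported inference-cost drops.
# Prices are in dollars per million tokens, as quoted from the report.

def fold_drop(old_price: float, new_price: float) -> float:
    """How many times cheaper the new price is than the old one."""
    return old_price / new_price

# Blue line: $20 -> $0.07 per million tokens
print(round(fold_drop(20.00, 0.07)))   # roughly a 286x drop

# Pink line: $15 -> $0.12 per million tokens, in under a year
print(round(fold_drop(15.00, 0.12)))   # a 125x drop
```

In other words, the same query that once cost $20 worth of tokens now costs pennies, which is why the chart needs a logarithmic scale to show the trend at all.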
Then from MIT Technology Review, they say generative AI is starting to be used to spy for the US military. Quote: Though the U.S. military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI tools that can engage in human-like conversations, like those built by Vannevar Labs, represents a new frontier.
The company applies existing large language models, including some from OpenAI and Microsoft, and some bespoke ones of its own, to troves of open-source intelligence the company has been collecting since 2021. The scale at which this data is collected is hard to comprehend, and it's a large part of what sets Vannevar's products apart. Terabytes of data in 80 different languages are hoovered up every day in 180 countries.
The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information. It also uses non-classified data that is difficult to get online, gathered by human operatives on the ground, as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities. Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment.
with the results delivered through a chatbot interface that's not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China's efforts to secure rare earth minerals in the Philippines. Our real focus as a company, says Scott Phillips, Vannevar Labs chief technology officer, is to collect data, make sense of that data, and help the U.S. make good decisions, end quote.
That approach is particularly appealing to the US intelligence apparatus, because for years the world has been awash in more data than human analysts can possibly interpret, a problem that contributed to the 2003 founding of Palantir, a company now worth nearly $217 billion and known for its powerful and controversial tools,
including a database that helps immigration and customs enforcement search for and track information on undocumented immigrants. In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data, but to actually talk through an analysis with someone interactively, end quote.
And finally, from The Information, and unfortunately it's behind a paywall, but a big piece that has gotten a lot of chatter online over the last 48 hours, taking a behind-the-scenes look at how and why Apple fell behind with Siri and AI more generally.
Some of Apple's struggles in AI have stemmed from deeply ingrained company values, for example, its militant stance on user privacy, which has made it difficult for the company to gain access to large quantities of data for training models and to verify whether AI features are working on devices. But an equally important factor was the conflicting personalities within Apple, according to multiple people who worked in the AI and software engineering groups.
More than half a dozen former Apple employees who worked in the AI and machine learning group led by John Giannandrea, known as AIML for short, told The Information that poor leadership is to blame for its problems with execution. They singled out Giannandrea's lieutenant, Robbie Walker, as lacking both ambition and an appetite for taking risks on designing future versions of the voice assistant.
Among engineers inside Apple, the AI group's relaxed culture and struggles with execution have even earned it an uncharitable nickname, a play on its initials: AIMLess. Former Apple employees have referred to Siri as a hot potato, continuously passed between different teams, including those led by Apple's services chief, Eddy Cue, and by Craig Federighi. However, none of these reorganizations led to significant improvements in Siri's performance, end quote.
No weekend bonus episode for you this weekend. And next week, just an FYI, you'll hear a small drop in audio quality, because I'll be in Colorado for the week. Gonna spend some time in a cabin in Estes Park, so I won't be in the studio; I'll be on my travel mic. Just a word of explanation if it sounds a bit different next week. Talk to you then. Have you heard? At Matalan, the sale has landed. There's up to 50% off across womenswear, and up to 50% off homeware too. Shop in-store, online at matalan.co.uk, and via the app now. Ts and Cs apply. Selected lines only.