Is Microsoft Pulling Back from OpenAI?
Anthropic to Keep User Chats for 5 Years...
Hey folks, stay ahead with the latest in AI. Let's dive in!
WHAT YOU’LL READ TODAY
Washington Bets on Intel’s Foundry
MathGPT.ai: The AI Tutor That Can’t Be Cheated
Is Microsoft Pulling Back from OpenAI?
Anthropic to Keep User Chats for 5 Years
And more…
QUICK NEWS
Washington Bets on Intel’s Foundry—With a Safety Net
The Trump administration has structured its recent deal with Intel to discourage the chipmaker from spinning off its foundry unit. In exchange for $5.7 billion in CHIPS Act funds, the U.S. receives a 10% equity stake plus a five-year warrant to buy another 5% if Intel's control of the foundry drops below 51%, effectively keeping the loss-making unit under Intel's roof. Read More
MathGPT.ai: The AI Tutor That Can’t Be Cheated
MathGPT.ai is quietly making its way into U.S. college classrooms with a "cheat-proof" approach: instead of handing out answers, it offers step-by-step Socratic guidance, gives instructors custom assignments, grading tools, and accessibility features, and even rewards users who flag mistakes with gift cards. Read More
LATEST UPDATE
Is Microsoft Pulling Back from OpenAI?

Microsoft has introduced its first in-house AI models — MAI-Voice-1 (speech generation) and MAI-1-preview (text-based, for Copilot) — signaling a move toward greater independence in AI development.
Key Points:
MAI-Voice-1 already powers Copilot Daily’s audio summaries and AI podcast features. It can generate one minute of audio in less than a second, with customizable voice and speaking styles.
MAI-1-preview delivers quick text responses to everyday queries. After testing, it will be integrated into selected Copilot tasks, which today still rely on OpenAI models.
Both models emphasize efficiency and cost-effectiveness: MAI-Voice-1 runs on a single GPU, while MAI-1-preview was trained on roughly 15,000 GPUs, far fewer than rivals such as Grok (100,000+).
While Copilot still leans heavily on OpenAI, Microsoft's in-house models suggest a shift toward reduced dependency. AI chief Mustafa Suleyman underlined the focus on consumer use cases, leveraging Microsoft's vast consumer and ad data, in contrast to OpenAI's enterprise-heavy approach. He also stressed that in an era of rising AI costs and fears of an "AI bubble," efficiency and sustainability are the real competitive edge.
Anthropic to Keep User Chats for 5 Years
Anthropic has announced a major change to its data policy: starting soon, Claude AI models will be trained on user data by default — including new or resumed chats and coding sessions. (Older, inactive chats will remain unaffected.) Users have until September 28 to opt out.
For those who don’t, data from these sessions will now be stored for up to five years — a sharp shift from Anthropic’s previous 30-day retention policy.
The company says the move is aimed at improving model safety and performance — making harmful content detection more accurate, reducing false flags, and enhancing Claude’s overall capabilities.
Taco Bell Puts the Brakes on AI Drive-Thrus
After processing more than two million orders through its Alexa-style AI voice assistants, Taco Bell is hitting pause on its AI drive-thru experiment. The chain says it "learned a lot" but ultimately felt "let down" by the technology's limitations.
Customer complaints highlighted frequent failures: AI assistants often misheard orders, repeated questions like “What would you like to drink?” in endless loops, or accepted absurd requests such as “18,000 cups of water” or even a rival’s Big Mac.
In the end, frustrated human staff had to step in — raising the question: is AI really ready to replace the drive-thru worker?
That’s all for now — thanks for spending a few minutes with us today. We hope you found something valuable to carry into your week. Got thoughts, feedback, or just want to say hi? We’d love to hear from you. Until next time, stay curious and keep moving forward.
-Team AI Paradox