In partnership with
🌟 Editor's Note: Recapping the AI landscape from 12/16/25 - 12/22/25.
🎇✅ Welcoming Thoughts
Welcome to the 24th edition of NoahonAI.
What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.
Happy Holidays Everyone! 🎁
No newsletter next week. We’ll be back on January 6th.
Michigan is making some very cool strides in robotics (see Impact Industries).
Gemini released a recap of their top 60 AI moves this year.
Sam Altman wrote a quick piece on the 10 Year Anniversary of OpenAI.
ChatGPT just launched its own version of Spotify Wrapped. A bit ridiculous but kinda cool. Available in-app.
Dario Amodei mentioned this in his highlighted interview today. Cool write-up on his vision of the future upsides of AI.
I’m working on getting some more interviewees for next year.
I should also start writing blogs again; I've gotten away from it a bit.
OpenAI may be trying to follow in the footsteps of Jon Rahm and co.
People do not like some of these AI commercials.
Pretty quiet week up until the end. There’s a reason I save The Race section for Monday Night.
If you’re making New Year’s resolutions, learn context engineering in 2026.
Mentioned this around Thanksgiving, but I can’t believe how much progress has been made in the last 6 months.
I’d say my top AI interests right now are Bio, Robotics, and the Future of Work.
Alright, see you guys next year.
Let’s get started—plenty to cover this week.
👑 This Week’s Winner: Google // Gemini
Google takes the win heading into 2026. It was a quiet week across the board, but Google made a strong showing Monday afternoon, orchestrating a ~$5B deal to help secure future compute. Here’s the news:
Alphabet To Buy Intersect: Alphabet agreed to acquire Intersect Power for $4.75B to secure multiple gigawatts of energy and data-center capacity. Alphabet previously owned a minority stake. Big move here for the future of Google AI.
Gemini 3 Flash Launch: Google rolled out Gemini 3 Flash as a faster, lower-cost Gemini 3 variant. It is now the default in the Gemini app and Search AI Mode, delivering near-real-time responses. Nice.
Image Editor Upgrade: Google added visual annotation to its Nano Banana image model, letting users draw or circle areas directly on images to guide edits. Cool! Was waiting for something like this for Gemini/Nano.
Not much more on the Google front this week, but I’m excited to see their continued growth in 2026. The race is getting close, and while Google continues to surge, OpenAI’s early cushion is slowly shrinking.

From Top to Bottom: OpenAI, Google Gemini, xAI, Meta AI, Anthropic, NVIDIA.
⬇️ The Rest of the Field
Who’s moving, who’s stalling, and who’s climbing: Ordered by production this week.
⚪️ NVIDIA
H200 China Shipments: NVIDIA plans to ship H200 AI chips to China by mid-February 2026, with an initial batch of ~40k–80k chips. Still waiting on Beijing approval for final confirmation. Looks a lot more promising than a month ago. Still in a bit of limbo.
Blackwell Ultra Goes Live: NVIDIA’s next-gen Blackwell Ultra systems are now in production, with a European cloud provider (Nebius) first to run them. NVIDIA’s top chip now live in Europe.
RTX 50-Series Arrives: NVIDIA released its new graphics cards for gamers and creators. They’re seeing strong demand, especially for the top model, with supply expected to stay tight as NVIDIA prepares future upgrades. I’m excited for the future of AI in gaming. These will play a role.
🟢 OpenAI // ChatGPT
GPT Images Upgrade: OpenAI rolled out a major image update powered by GPT Image 1.5, delivering faster generation, better instruction-following, cleaner edits, and expanded API support. Actually pretty good. Gemini (Nano Banana) still much better.
News Academy: OpenAI launched a global training hub for journalists and newsrooms, offering playbooks, courses, and tools to responsibly integrate AI into reporting, research, and editorial workflows. Cool move.
Huge Fundraise: OpenAI is reportedly planning a 2026 funding round targeting up to ~$100B, with sovereign wealth funds expected to play a role in the backing. One more fundraise before IPO? I wonder if the U.S. Government would have thoughts/restrictions on a SWF investment.
🔵 Meta // Meta AI
SAM Audio Launched: Meta released SAM Audio, a model that lets users isolate and edit sounds from complex recordings using text prompts, visual cues, or time selections. Cool!
Smart Glasses Audio: Meta rolled out a Conversation Focus feature for its AI smart glasses, using directional mics and AI to amplify the voice of the person you’re talking to in noisy environments. I wonder how this compares to a common hearing aid.
New Mango Models: Meta is developing new generative models codenamed Mango for image and video (alongside text model Avocado), targeting a 2026 launch to compete in visual and multimodal AI. Excited to see how these revamped launches look.
🔴 xAI // Grok
Voice Agent API: xAI launched a real-time, multilingual voice API for developers building conversational agents, pushing Grok beyond text into voice-first AI. Lower cost than other options. Multilingual. Nice.
SpaceX Capital Link: SpaceX’s planned 2026 IPO (~$1.5T target) could help fund AI infrastructure, including space-based data centers aligned with xAI’s long-term compute needs. Capital-wise, xAI gets IPO-level access without having to IPO itself.
Elon’s AGI Timeline: Musk told xAI staff AGI could arrive as soon as 2026, pointing to rapid iteration, massive compute build-outs, and a funding advantage via Tesla/SpaceX. Interesting article. His point on capital access shouldn’t be taken lightly, although the timeline is overly optimistic IMO.
🟠 Anthropic // Claude
Safety Framework: Anthropic open-sourced Bloom, an automated framework that generates scenarios to measure risky behaviors across AI models, speeding up safety and alignment testing. Cool. Aligns well with company messaging.
DOE Genesis Partnership: Anthropic partnered with the U.S. Department of Energy to deploy Claude models and AI agents for scientific research across energy, nuclear, and biological fields. Gemini, GPT, NVIDIA, and others are also part of this deal.
Claude Skills Update: Anthropic upgraded Claude’s ‘Skills’ for better workplace automation and open-sourced the Skills specification, enabling new agent workflows. More enterprise moves.
🤖 Impact Industries 🏥
Robotics // Micro Robot
Researchers at the University of Michigan and UPenn unveiled the world’s smallest fully programmable autonomous robots, small enough to fit on a fingertip. These light-powered microbots can sense temperature, process information, and coordinate movement while operating for months on nanowatts of power. By combining microscale propulsion with onboard computing, the work opens new possibilities for medical monitoring, precision manufacturing, and programmable robotics at biological scales.
Medical // Critical Care
Mount Sinai researchers developed an AI system called NutriSighT that predicts which ventilated ICU patients are at risk of underfeeding days in advance. The model analyzes routine ICU data like vitals, lab results, medications, and feeding records, updating predictions every four hours. Early studies show underfeeding affects up to half of patients early in care. The system acts as an early-warning tool to support personalized nutrition decisions, not replace clinicians.
💻 Interview Highlight: Dario Amodei
Interview Outline: Dario Amodei warns that while AI technology is scaling predictably, the business side faces a "cone of uncertainty" where massive spending may outpace revenue. He advocates for proactive regulation as a "steering wheel" for safety and national security, while predicting that AI could eventually extend human lifespans to 150 years and usher in a post-labor society.
About the Interviewee: Dario Amodei is the CEO and co-founder of Anthropic (Claude) and a former OpenAI leader who spearheaded GPT-2 and GPT-3. A primary architect of the "Scaling Laws," he transitioned from a background in physics and neuroscience to focus on AI safety. He is now one of the world's most influential voices on balancing radical technological optimism with responsible governance.
Interesting Quote: “We need to find a world where work doesn't have the centrality it does today... where it’s about fulfillment rather than economic survival.”
My Thoughts: Dario floats a lot of concepts that have been discussed across the ecosystem over the past few months; he just tends to preface them with the idea of safety at the center. Anthropic has consistently been out in front on AI safety, and the message here is consistent with that. I fully agree with his quote on the future of work, and wonder again how many people would work if it became optional. He touches on another fascinating aspect of the AI revolution: the impact on our health and aging. I don’t think it’s off base at all to expect extended lifespans in the future due to agentic biological discoveries and hyper-personalized care. But, once again, as Dario mentioned, it’s important to look out for all potential consequences as the field moves forward.
Condensed Interview Highlight — Dario Amodei on AI Risk, Scale & the Future of Work
Q: What is the “Cone of Uncertainty” you’ve mentioned?
Amodei: It describes the gap between building massive data centers today and seeing the revenue they generate years later. Demand is unpredictable, so companies risk either falling short on capacity or spending billions on compute they can’t sustainably support.
Q: Why do you support AI regulation when many tech leaders don’t?
Amodei: AI isn’t just another tech cycle like the internet — it’s a singular source of power. Moving forward without regulation is like driving without a steering wheel, especially given the implications for national security and labor. That said, rules should not burden small startups.
Q: How far do you think AI models will scale?
Amodei: I expect models to grow into something like a “country of geniuses in a data center” — systems with the combined intellectual output of an entire nation, capable of outperforming humans across science, law, and engineering.
Q: What is the “Virtual Biologist” concept?
Amodei: AI will act as an autonomous scientist, running experiments and testing hypotheses at machine speed. This could compress a century of biological progress into a decade and potentially unlock cures for major diseases or dramatically extend human lifespan.
Q: How should society respond to AI-driven job disruption?
Amodei: There are three layers. First, companies should use AI to amplify human work, not just replace it. Second, governments need tax systems that redistribute enormous AI-generated wealth. Third, society must shift toward finding meaning beyond economic survival.
👨‍💻 Practical Use Case: RAG (Retrieval-Augmented Generation)
Difficulty: Mid-level
RAG stands for Retrieval-Augmented Generation. It’s something we’ve touched on here and there but it’s never had its own Practical Use Case display. At a high level, RAG is a way to let an AI model answer questions using your own data instead of relying only on what it learned from the outside world. Before generating a response, the model first retrieves relevant information from a trusted source, then uses that context to produce a grounded answer.
In practice, RAG systems can scan through hundreds or even thousands of your files, spanning text documents, PDFs, images, and more.
RAG shows up most often in situations where accuracy matters and hallucinations are costly, such as:
Internal knowledge bases and SOPs
Customer support tools that need up-to-date answers
AI tools that need to reference policies, docs, or contracts
Think of RAG as a middle ground between a raw chatbot and a fully custom AI system. You’re not retraining the model, and you’re not pasting context manually every time. You’re giving the AI a way to look things up before it speaks.
This approach is becoming the default for enterprise AI applications because it keeps responses tied to real sources and reduces guesswork. If you’ve ever thought, “This would be useful if the AI actually knew our documents,” RAG is usually what’s missing behind the scenes.
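To make the flow concrete, here’s a minimal sketch of the retrieve-then-generate pattern. This is a toy illustration, not a production system: real RAG uses dense embeddings from an embedding model and a vector database, and the final prompt would be sent to an actual LLM. All names (`embed`, `retrieve`, `build_prompt`) and the sample documents are hypothetical.

```python
# Toy RAG sketch: retrieve the most relevant document from a trusted
# corpus, then build a grounded prompt for a (stubbed-out) language model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector.
    Real systems use dense vectors from an embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """The 'Retrieval' step: rank docs by similarity, keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """The 'Generation' setup: pin the model to the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Your own data: a few internal policy snippets.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is closed on federal holidays.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
query = "What's the refund policy?"
context = retrieve(query, docs)        # look things up first...
prompt = build_prompt(query, context)  # ...then generate from that context
```

The key design point is the order of operations: the lookup happens before generation, so the model’s answer is tied to your documents rather than its training data.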
Learn more below ⬇️
🔐 Startup Spotlight

XBOW
XBOW - Automated Cybersecurity Testing.
The Problem: Traditional cybersecurity / hacker testing is slow, expensive, and limited by human availability. In today’s fast-paced dev cycles, waiting weeks for manual testing leaves critical vulnerabilities unchecked, especially with AI-driven threats escalating.
The Solution: XBOW replaces slow, scheduled tests with a swarm of autonomous AI agents that think like elite hackers and test like machines. It can simulate real-world attacks across every endpoint, validate findings, and deliver detailed reports, all within hours of a code push. Security becomes continuous, scalable, and instant.
The Backstory: XBOW was founded in 2024 by Oege de Moor, former head of security at GitHub, alongside a team of AI researchers and offensive security experts. The company is focused on bringing human-level penetration testing to scale using autonomous agents, combining deep cybersecurity experience with the speed and efficiency of AI.
My Thoughts: In the era of vibe-coding, cybersecurity is regressing as fast as new products and tools are being built. Tools like these are incredibly necessary to constantly test security vulnerabilities in new and old applications. The rate at which these tests can run via agentic AI is a strong positive here. The pedigree of the founders is strong as well. I would absolutely use this tool.
“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI.”
- Jensen Huang | NVIDIA CEO
Here’s my ChatGPT year in review. Probably a good thing Claude doesn’t have this.

Till Next Time,
Noah on AI


