
🌟 Editor's Note: Recapping the AI landscape from 01/27/26 - 02/02/26.

🎇 Welcoming Thoughts

  • Welcome to the 29th edition of NoahonAI.

  • What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.

  • Claude Cowork is a lifesaver. Very helpful that it can access all local files.

  • Might be changing my company name soon (and newsletter name).

  • Another good interview today from someone I connected with at the Cleveland AI event a few weeks ago.

  • Moltbot (formerly Clawdbot) is now blowing up on (its own?) socials. The data is controversial and hard to verify. More in Impact Industries.

  • After a slow two weeks, it was a very busy week for the NVIDIA 5!

  • The xAI vs. OpenAI lawsuit may be getting dismissed after all.

  • Anthropic is getting sued again for copyright violations.

  • Sounds like a lot of Apple folks are using Anthropic internally. Not a surprise.

  • Anthropic just entered the F1 picture. Pretty cool.

  • Sam Altman did a town hall event last week on the state of AI.

  • Big week for the future of AI in health and biology across the board.

  • Bit of a shakeup this week in the Race order.

  • Google is releasing some very cool products. See below.

Let’s get started—plenty to cover this week.

👑 This Week’s Winner: OpenAI // ChatGPT


OpenAI Wins a Hectic Week! Amidst a lot of action, OpenAI led the way with a new partnership, a big move into developer tools, and talks of an investment that would rival the biggest we’ve seen to date. Here’s how it went down:

  • Snowflake Enterprise Partnership: OpenAI signed a $200M multi-year deal with Snowflake to embed its models directly into Snowflake Cortex, letting enterprises run AI agents over proprietary data inside a governed environment. This follows a similar Anthropic deal from December; an impressive pace-keeping move by OpenAI.

  • Standalone Codex App: OpenAI launched a dedicated Codex app for macOS, giving developers a command center to run multiple coding agents in parallel, manage isolated worktrees, and coordinate complex projects. Finally hearing some positive reviews for OpenAI’s coding tool. Time will tell if they can compete with industry leader Claude Code.

  • Amazon in Talks for $50B Investment: Amazon is in advanced discussions to invest in OpenAI as part of a $100B raise, potentially reshaping cloud and infrastructure alliances. I’ve heard of a fund taking half the round in a $10M deal, but half of a $100B deal is wild. I’m guessing Microsoft is not in favor, as it would have a strong new competitor inside OpenAI.

OpenAI is also cleaning up shop: say goodbye to the old GPT-4o/4.1-era models, which will be shut down effective February 13th as usage consolidates on GPT-5 and up (old chats will be preserved). It’s also testing a highly restricted ChatGPT ads beta ($200k minimum per advertiser) and pushing deeper into the fashion world via a PVH partnership spanning design, planning, and marketing. The only negative is the slowdown surrounding their September NVIDIA deal; see below.

From Top to Bottom: OpenAI, Google Gemini, xAI, Meta AI, Anthropic, NVIDIA.

⬇️ The Rest of the Field

Who’s moving, who’s stalling, and who’s climbing: Ordered by production this week.

🔴 xAI // Grok

  • SpaceX Acquisition: Reports say SpaceX has acquired xAI ahead of a potential 2026 IPO push, bundling AI + space infrastructure under one umbrella. Very encouraging! The closer xAI gets to Tesla/SpaceX, the more it can actually benefit in important areas vs. Twitter slop.

  • Image/Video Launches: xAI shipped Grok Imagine 1.0 (10s, 720p video; better audio/voice + prompt-following) and released the Imagine API for text-to-video, image-to-video, editing, and audio. xAI claims 1.245B videos generated in 30 days. Cool; the video API is worse but much cheaper than other options.

  • Tesla Reveals $2B Investment in xAI: Tesla invested in xAI’s $20B round even after shareholders voted down an xAI investment, and framed it as accelerating autonomy and Optimus. Obviously they should be listening to shareholders, but the direction is a positive one.

🟣 Google // Gemini

  • Virtual Worlds: Google is testing a Project Genie prototype that lets you generate and explore interactive AI worlds from a text or image prompt. Currently for Google AI Ultra users in the U.S. This looks super cool. Could be the future of VR gaming.

  • AlphaGenome: Google DeepMind unveiled AlphaGenome: a genomics model that predicts how DNA mutations may change gene activity (helpful for guiding research), but it’s not being framed as “clinical-ready.” Lots of positive noise around this. Could be big for understanding human gene structure → personalized medicine → curing disease at scale.

  • Gemini Auto Browse in Chrome: Gemini is adding “Auto Browse” in Google Chrome to automate routine browsing tasks (US preview), bundled as a perk for Google AI Pro / Ultra. I haven’t tried this out yet, but it sounds similar to GPT’s Atlas and Claude Cowork’s agentic capabilities. Cool.

🔵 Meta // Meta AI

  • $6B Corning Deal: Meta signed a long-term deal (through 2030) with Corning worth up to $6B to purchase fiber and networking gear for AI data centers, including expanded U.S. manufacturing in North Carolina. Had to ask GPT about this one: fiber optics send data as light through very thin glass strands instead of electricity through metal wires. More infra for Meta.

  • Great Earnings, Record Spending: Meta is planning $115B–$135B in 2026 capital expenditures (nearly double 2025), yet posted strong Q4 results (up 24% YoY). The ad business is paying for the AI business. That works just fine.

  • Zuck’s Plans: Mark Zuckerberg said Meta will begin shipping new AI models and products soon, with a heavy focus on commerce, agentic shopping, and more personal AI in 2026. Following Google with ‘personalized intelligence’. Meta is looking to find their sweet spot.

🟠 Anthropic // Claude

  • UK Government Pilot: Anthropic is partnering with the UK Government to pilot Claude across public services, focused on helping people navigate GOV.UK journeys and complete service steps more easily. AI is becoming more involved in public services. Should help with productivity; I like it.

  • Pentagon Standoff: Reporting says the U.S. DoD and Anthropic hit friction over usage constraints (especially around lethal autonomy / certain surveillance uses) tied to a major contract discussion. Essentially, Anthropic doesn’t want its AI tools pulling the trigger. Fair concern.

  • Life Sciences Partnerships: Anthropic is working with HHMI and the Allen Institute to embed Claude into real lab workflows, building agent-style systems that help researchers integrate, analyze, and plan around large-scale biology datasets. Nice! Biology and specialized, individual-based healthcare may be the best net-good use case for AI.

⚪️ NVIDIA

  • OpenAI Megadeal Slows: NVIDIA’s $100B / ~10GW build-out partnership with OpenAI has reportedly slowed; the deal signed in September is still on, but the number may be much lower. Interesting, and it led to some speculation that OpenAI is testing the market on other compute options.

  • Mercedes S-Class Deal: NVIDIA is supplying the compute + self-driving software for the next Mercedes-Benz S-Class. The goal is to eventually offer these as premium self-driving rides on Uber. Very cool; while they’re not necessarily known for it, NVIDIA has powerful autonomous driving software.

  • Surgical AI Platforms: NVIDIA is powering Oath Surgical’s OathOS to turn OR video/audio + workflow signals into “ambient” surgical intelligence, including documentation, coordination, and outcomes tracking. Great; AI can do wonders in the health sector.

💻 Impact Industries 🚑

Developer Tools // Agent Social Network

Last week we talked about Clawdbot, an AI agent that behaves like a human: you can text it, it can text you back, and it may proactively message you to complete tasks. After Anthropic raised concerns, the project rebranded to Moltbot. Moltbot then reportedly launched Moltbook, a social network where AI agents post and interact with one another. Moltbook now hosts thousands of agents communicating in discussion forums. The open question is verification: how much behavior is truly autonomous versus human-guided prompting behind the scenes. It’s likely a mix, which is both fascinating and dystopian.

Read the Story

Healthcare // AI Agents for Drug Development

Oracle launched a new life sciences platform that uses AI agents and de-identified health data from over 129 million patients to help drug companies operate faster and more efficiently. Rather than just analyzing datasets, the system actively assists with tasks like identifying new uses for existing drugs, reducing clinical trial size and cost, monitoring safety signals, and preparing regulatory submissions. Built on longitudinal health records, the platform reflects a broader shift toward always-on, semi-automated drug development workflows instead of manual, step-by-step analysis.

Read the Story

🎙 Weekly Interview: 10 Minutes With Lauren Burke-McCarthy

Lauren Burke-McCarthy

🏠 Background: Lauren Burke-McCarthy is an AI strategist and data scientist specializing in value-first, responsible design. With a background in Mathematics from The College of Wooster, she bridges the gap between technical data science and sustainable product strategy. She is a prominent voice in the Midwest tech community and an AI instructor at Denison Edge.

💼 Work: Lauren is an Associate Principal at Further, where she leads AI product strategy and risk management for enterprise clients. She also serves as the Head of Community for Women in Analytics (WIA) and hosts the WIA After Hours podcast, focused on highlighting diverse paths in the data and AI space.

🚀 Quote: “People use systems that are built for people. A human-first approach is what’s going to make AI sustainable long term.”

🎙️ Condensed Interview Transcript — Lauren Burke-McCarthy

Question 1

Noah: Where do you see the current state of the AI space?

Lauren Burke-McCarthy: We’ve seen a lot of FOMO and pressure to stay ahead of the curve. Now organizations are getting more thoughtful about what actually makes sense for their specific use cases. I think we’re moving toward a more human-first approach, because systems built for people are what last.

Question 2

Noah: What tools do you use in your professional stack?

Lauren Burke-McCarthy: I use the usual suspects—ChatGPT, Gemini, and Claude—to get a first draft or outline going. NotebookLM is a favorite for documentation, especially turning it into podcasts so I can listen while driving.

Question 3

Noah: For a year-long project, would you rather have AI tools or a human worker?

Lauren Burke-McCarthy: It doesn’t have to be either-or. If a process is rule-based and manual, see if you can automate or augment it. But if it requires judgment, carries risk, or changes constantly, I’d rather have a person involved. Hybrid approaches make the most sense.

Question 4

Noah: What advice would you give to bridge the gap between interest and action?

Lauren Burke-McCarthy: You don’t need a technical background to be AI-savvy. Start by identifying a real problem AI should solve. When tutorials mirror your day-to-day work, taking that first step becomes much easier.

Question 5

Noah: What interests you most in the 5–10 year future of AI?

Lauren Burke-McCarthy: I’m watching AI move into traditionally high-risk industries like legal, finance, and critical infrastructure. Using AI to model scenarios and surface risk earlier is huge. I also think voice and video will make AI far more accessible.

👨‍💻 Practical Use Case: Providing Context to LLMs

Difficulty: Beginner

Note: This use case is specifically about providing context to LLMs like ChatGPT, Claude, and Gemini. That’s different from providing context to agents, which is a related but separate topic that I’ll likely cover in a future PUC.

When you’re working directly with an LLM, the quality of the output is heavily driven by the context you give it up front. Most people think of this as “writing a better prompt,” but in practice there are plenty of ways to supply context beyond plain text.

The most common method is still the simplest: typing messages and attaching files. That alone covers a huge number of real-world use cases. The issue is that as chats get longer, context can get lost, and key details you provided earlier may fade or get overlooked.

Here are some of the most effective ways to provide context to LLMs:

  • Images and Screenshots: When text isn’t enough, I’ll often provide a screenshot of what I’m looking at. I generally use the Awesome Screenshot extension on Chrome for this. It’s free, and it captures images of the full page I’m working on in the browser.

  • Deep Research: Pull in hundreds or thousands of outside sources to fill gaps, confirm details, or build background. Deep research is especially useful to let the model become an expert on a topic and avoid hallucinations when making decisions.

  • Project Folders (Great in GPT & Claude): Adding related files, notes, and instructions so the model understands the full scope of what you’re working on. You may still need to reference specific docs, but the key difference is everything lives inside the workspace where the LLM can consistently pull from it.

  • Cowork/LLM Agents: Cowork can pull from local files on your computer, which is very nice; you can just make a large folder with all types of docs relevant to a specific chat. Cowork and the GPT agent can also browse the web and act for you, but it’s still important to beware of prompt injection. I trust Cowork a bit more than GPT there. (If you work with these models through an API instead of the apps, the short sketch below shows the same idea in code.)
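The same principle carries over when you reach these models through an API rather than the chat apps: the model only knows what you explicitly put in the request. Here’s a minimal Python sketch using the OpenAI Python SDK that reads a local file and passes its contents as context alongside a question. The model name, file path, and question are placeholders I’ve made up for illustration, not a recommendation.

```python
# Minimal sketch: feeding a local file to an LLM as context via the OpenAI Python SDK.
# The model name and file path below are placeholders -- swap in whatever you actually use.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the document you want the model to reason over (hypothetical path).
notes = Path("project_notes.md").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant. Ground every answer in the provided notes.",
        },
        {
            "role": "user",
            # The file contents ride along in the message, so the model sees them directly.
            "content": f"Here are my project notes:\n\n{notes}\n\nQuestion: What deadlines am I at risk of missing?",
        },
    ],
)

print(response.choices[0].message.content)
```

Whether it’s a chat window, a project folder, or an API call, the mechanism is the same: whatever you want the model to reason over has to make it into the context you send.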

One thing that doesn’t work as well yet is video. While it’s improving, it’s still harder for LLMs to reliably extract the right context compared to text, images, or structured data.

Providing better context isn’t about using every method at once. It’s about choosing the right input for the job. Since memory isn’t all the way there yet, understanding how to feed LLMs context effectively will yield better results.

Learn more below ⬇️

🦾 Startup Spotlight

OpenMind

OpenMind — The Operating System for AI-Native Robotics.

The Problem: Robotics development is currently fragmented. Most software is proprietary and hardware-specific, meaning developers have to "reinvent the wheel" for every new robot type. This lack of a unified standard makes building and scaling intelligent robot applications slow and incredibly expensive.

The Solution: OpenMind provides an open-source platform (OM1) that acts as the "Android of robotics." It is form-factor agnostic, meaning the same software can power humanoids, quadrupeds, and robotic arms. It includes a "Skill Marketplace," essentially an app store for robots, that allows creators to deploy complex AI behaviors across different hardware instantly.

The Backstory: Founded in 2024 by Jan Liphardt (Stanford Bioengineering professor) and Boyuan Chen (MIT AI researcher), OpenMind originated from a project called Sandbox. Based in San Francisco, the team recently closed a $20 million Series A led by Pantera Capital. Their mission is to keep robotics software open-source and collaborative rather than locked behind corporate walls.

My Thoughts: This makes a ton of sense given the state of robotics. I like the decision to make it an open-source platform and allow everyone to collaborate on improving the code behind the next wave of Physical AI. Startups like these allow innovation to move quicker. I was actually talking to someone this week about something similar, but for 3D-printing capabilities. Also, unrelated to the product itself, I’m a big fan of their website; it just looks cool.

“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI.”

- Jensen Huang | NVIDIA CEO

Have a great week everyone. Till Next Time,

Noah on AI