Happy New Year! December was a short month, in part because of O’Reilly’s holiday break. But everybody relaxes at the end of the year—either that, or they’re on a sprint to finish something important. We’ll see. OpenAI’s end-of-year sprint was obvious: getting GPT-5.2 out. And getting Disney to invest in them, and license their characters for AI-generated images. They’ve said that they will have guardrails to prevent Mickey from doing anything inappropriate. We’ll see how long that works.
To get a good start on 2026, read Andrej Karpathy’s “2025 LLM Year in Review” for an insightful summary of where we’ve been. Then follow up with the Resonant Computing Manifesto, which represents a lot of serious thought about what software should be.
AI
- Google has released yet another new model: FunctionGemma, a version of Gemma 3 270M that has been specifically adapted for function calling. It is designed to be fine-tuned easily and is targeted for small, “edge” devices. FunctionGemma is available on Hugging Face.
- Anthropic is opening its Agent Skills (aka Claude Skills) spec and has added an administrative interface that gives IT admins control over which tools are used and how. OpenAI has already quietly adopted them. Are skills about to become a de facto standard?
- Google has released Gemini 3 Flash, the final model in its Gemini 3 series. Flash combines reasoning ability with speed and economy.
- Margaret Mitchell writes about the difference between generative and predictive AI. Predictive AI is more likely to solve problems people are facing—and do so without straining resources.
- NVIDIA has released Nano, the first of its Nemotron 3 models. The larger models in the family, Super and Ultra, are yet to come. Nano is a 30B-parameter mixture-of-experts model. All of the Nemotron 3 models are fully open source (most training data, training recipes, and pre- and post-training software, in addition to weights).
- GPT-5.2 was released. GPT-5.2 targets “professional knowledge workers”: It was designed for tasks like working on spreadsheets, writing documents, and the like. There are three versions: Thinking (a long-horizon reasoning model), Pro, and Instant (tuned for fast results).
- Disney is investing in OpenAI. One consequence of this deal is that Disney is licensing its characters to OpenAI so that they can be used in the Sora video generator.
- We understand what cloud native means. What does AI native mean, and what does MCP (and agents) have to do with it?
- Researchers at the University of Waterloo have discovered a method for pretraining LLMs that is both more accurate than current techniques and 50% more efficient.
- Mistral has released Devstral 2, its LLM for coding, along with Vibe, a command line interface for Devstral. Devstral comes in two sizes (123B and 24B) and is arguably open source.
- Anthropic has donated MCP to the Agentic AI Foundation (AAIF), a new open source foundation spun out by the Linux Foundation. OpenAI has contributed AGENTS.md to the AAIF; Block has contributed its agentic platform goose.
- Google Research has proposed a new Titans architecture for language models, along with the MIRAS framework. Together, they’re intended to allow models to work more efficiently with memory. Is this the next step beyond transformers?
- Zebra-Llama is a new family of small hybrid models that achieve high efficiency by combining existing pretrained models. Zebra-Llama combines state space models (SSMs) with multihead latent attention (MLA) to achieve near-transformer accuracy with only 7B to 11B pretraining tokens and an 8B parameter teacher.
- Now there’s Hugging Face Skills! Hugging Face has used them to give Claude the ability to fine-tune an open source LLM. Hugging Face skills interoperate with Codex, Claude Code, Gemini CLI, and Cursor.
- A research project at OpenAI has developed a model that will tell you when it has failed to follow instructions. This is called (perhaps inappropriately) “confession.” It may be a way for a model to tell when it has made up an answer.
- Mistral 3 is here. Mistral 3 includes a Large model, plus three smaller models: Ministral 14B, 8B, and 3B. They all have open weights. Performance is comparable to similarly sized models. All of the models are vision-capable.
- Wikipedia has developed an excellent guide to detecting AI-generated writing.
- Claude 4.5 has a soul. . .or at least a “soul document” that was used in training to define its personality. Is this similar to the script in a Golem’s mouth?
- DeepSeek has released V3.2, which incorporates the company’s sparse attention mechanism (DSA), scalable reinforcement learning, and a task synthesis pipeline. Like its predecessors, it’s an open weights model. There’s also a “Speciale” version, only available via API, that’s been tuned for extended reasoning sessions.
- Black Forest Labs has released FLUX.2, a vision model that’s almost as good as Google’s Nano Banana but is open weight.
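The function calling that FunctionGemma (above) is adapted for follows a common pattern: the application advertises tool schemas to the model, the model replies with a structured call instead of prose, and the runtime parses and dispatches it. A minimal pure-Python sketch of the dispatch side; the tool name, schema, and JSON shape here are hypothetical illustrations, not FunctionGemma's actual format:

```python
import json

# Hypothetical tool registry: the application advertises these
# schemas to the model, then dispatches whatever call comes back.
TOOLS = {
    "get_weather": {
        "description": "Return current weather for a city",
        "parameters": {"city": "string"},
    },
}

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and invoke the tool."""
    call = json.loads(model_output)
    name, args = call["name"], call["arguments"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return globals()[name](**args)

# What a function-calling model might emit for "What's the weather in Oslo?"
reply = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
print(reply)  # Sunny in Oslo
```

Because the model's side of the exchange is just structured text, even a small edge model can participate; the heavy lifting happens in the dispatched tools.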
Programming
- It’s Christmas, so of course Matz maintained the tradition of releasing another major version of Ruby—this year, 4.0.
- The Tor Project is switching to Rust. The rewrite is named Arti and is ready for use.
- A cognitive architect is a software developer who doesn’t write functions but decomposes larger problems into pieces. Isn’t this one of the things regular architects—and programmers—already do? We’ve been hearing this message from many different sources: It’s all about higher-order thinking.
- Perhaps this isn’t news, but Rust in the Linux kernel is no longer considered experimental. It’s here to stay. Not all news is surprising.
- Is PARK the LAMP stack for AI? PARK is PyTorch, AI, Ray, and Kubernetes. Those tools are shaping up to be the foundation of open source AI development. (Ray is a framework for distributing machine learning workloads.)
- Bryan Cantrill, one of the founders of Oxide Computer, has published a document about how AI is used at Oxide. It’s well worth reading.
- Go, Rust, and Zig are three relatively new general-purpose languages. Here’s an excellent comparison of the three.
- Stack Overflow has released a new conversational search tool, AI Assist. It searches Stack Overflow and Stack Exchange and provides chat-like answers.
- DocumentDB is an open source (MIT license) document store that combines the capabilities of MongoDB and PostgreSQL. It should be particularly useful for building AI applications, supporting session history, conversational history, and semantic caching.
- “User experience is your moat… Your moat isn’t your model; it’s whether your users feel at home.” From Christina Wodtke’s Eleganthack. Well worth reading.
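The semantic caching mentioned in the DocumentDB item above means reusing a cached answer when a new query is close in meaning, not just identical in text: embed the query, compare against cached embeddings, and return a hit above a similarity threshold. A toy sketch of that lookup logic in plain Python; the vectors and threshold are illustrative, and none of this is DocumentDB's actual API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """Return a cached answer when a query embedding is close enough."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # (embedding, answer) pairs

    def get(self, embedding):
        best, best_sim = None, 0.0
        for emb, answer in self.entries:
            sim = cosine(emb, embedding)
            if sim > best_sim:
                best, best_sim = answer, sim
        return best if best_sim >= self.threshold else None

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache()
cache.put([1.0, 0.0, 0.1], "Paris")   # cached answer for "capital of France?"
print(cache.get([0.95, 0.05, 0.1]))   # near-duplicate query hits: Paris
print(cache.get([0.0, 1.0, 0.0]))     # unrelated query misses: None
```

In a real system the embeddings would come from a model and the scan would be an indexed vector search, but the hit/miss logic is the same.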
Security
- SantaStealer is a new malware-as-a-service operation. It appears to be a rebranding of an older malware service, BluelineStealer, and targets data held in the browser, services like Telegram, Discord, and Steam, and cryptocurrency wallets. Just in time for the holidays.
- Another list from MITRE: The top 25 software weaknesses for 2025. These are the most dangerous items added to the CVE database in 2025, based on severity and frequency. The top items on the list are familiar: cross-site scripting, SQL injection, and cross-site request forgery. Are you vulnerable?
- What is the normalization of deviance in AI? It’s the false sense of safety that comes from ignoring issues like prompt injection because nothing bad has happened yet, even as we build agents that perform actions with real-world consequences.
- Trains were canceled after an AI-edited image of a bridge collapse was posted on social media.
- Virtual kidnapping is a thing. Nobody is kidnapped, but doctored images from social media are used to “prove” that a person is in captivity.
- There’s an easy way to jailbreak LLMs. Write poetry. Writing a prompt as poetry seems to evade the defenses of most language models.
- GreyNoise IP Check is a free tool that checks whether your IP address has appeared in a botnet.
- Attackers are using LLMs to generate new malware. There are several LLMs offering vibe coding services for assisted malware generation. Researchers from Palo Alto Networks report on their capabilities.
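SQL injection's persistence on the MITRE list above comes down to one habit: building queries by splicing attacker-controlled strings into SQL text. The fix is parameterized queries. A minimal illustration using Python's standard sqlite3 module (the table and payload are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is spliced directly into the SQL text,
    # so a crafted value can rewrite the query's logic.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver passes the value separately from the
    # SQL text, so it can never change the query's structure.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every row: [('s3cret',)]
print(lookup_safe(payload))    # no user named that: []
```

Every mainstream database driver supports placeholders like this, which is why string-built SQL keeps appearing on "most dangerous" lists: the safe alternative is always available.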
Web
- Google is adding a “user alignment critic” to Chrome. The critic monitors all actions taken by Gemini to ensure that they’re not triggered by indirect prompt injection. The alignment critic also limits Gemini to sites that are relevant to solving the user’s request.
- Google is doing smart glasses again. The company’s showing off prototypes of Android XR glasses. They’re Gemini-backed, of course; there will be monocular and binocular versions; and they can work with prescription lenses.
- Another browser? Lightpanda is a web browser designed for machines—crawlers, agents, and other automated browsing applications—that’s built for speed and efficiency.
- Yet another browser? Nook is open source, privacy protecting, and fast. And it’s for humans.
- A VT100 terminal emulator in the browser? That’s what you wanted, right? ghostty-web has xterm.js API compatibility, and is built (of course) with Wasm.
- The Brave browser is testing an AI-assisted mode using its privacy-preserving AI assistant, Leo. Leo can be used to perform agentic multistep tasks. It’s disabled by default.
Hardware
- Arduino enthusiasts should familiarize themselves with the differences between the licenses for Arduino’s and Adafruit’s products. Adafruit’s licensing is clearly open source; now that Arduino is owned by Qualcomm, its licensing is confusing to say the least.
- We’ve done a lot to democratize programming in the last decade. Is it now time to democratize designing microchips? Siddharth Garg and others at NYU think so.
Operations
- Here’s an excellent analysis of the Cloudflare outage that took a significant part of the internet down last November.