The first part addressed the big content question: AI makes learning content cheap, fast, and universally available – and learners react to it with surprising nuance.
Today, we’re focusing on something that is even more critical to the working world: AI that doesn’t just talk, but acts. Or, to put it differently: We are in the process of moving from the “speaking library” to the “acting colleague.”
Our internal AI update post by Maria Matthäus read like a prologue to the new work culture. The main character: Claude Cowork from Anthropic.
Claude Cowork: “Claude Code for the rest of your work”
Anthropic released Cowork in January 2026 as a Research Preview, currently in the Claude desktop app on macOS and (for now) only for Claude Max subscribers (USD 100–200/month).
The idea is strikingly simple: You don’t submit a prompt, wait for text, and copy it somewhere. Instead, you select a folder, describe a result – and the AI works in multiple stages, with files, structures, and interim results. Anthropic explicitly positions this as a “coworker” feeling: less back-and-forth, more “I’ll leave a message and come back later.”
What Cowork typically does (according to reports and demos): organize files, extract data from screenshots, structure notes for reports, prepare presentations – precisely the kind of knowledge work that is rarely glamorous but reliably consumes time.
Technically, Cowork is not magic, but a new packaging for something many have previously dismissed as “too nerdy”: It uses the same agentic architecture as Claude Code – just without a terminal.
Security: When AI is allowed to touch files, comfort suddenly becomes responsibility
Cowork is allowed – depending on permissions – to read, write, and rename files. This shifts the risk from “wrong answer” to “wrong action.” Anthropic itself warns that ambiguous instructions can lead to files being deleted or incorrectly modified.
Even more critical: Prompt Injection. That is, hidden instructions in documents or web content that can cause a model to ignore its actual rules. This is not a side note; it has become one of the most prominent risk categories: In the OWASP Top 10 for LLM applications, “Prompt Injection” is listed as LLM01.
It’s fascinating how much “engineering” suddenly becomes a cultural issue here. According to analyses, Cowork runs in an isolated VM environment under macOS (Apple Virtualization Framework / VZVirtualMachine) and boots its own Linux root filesystem.
This sounds dry, but at its core, it’s a message: When AI acts, we need boundaries, not just good answers.
Anthropic already wrote extensively in 2025 about sandboxing and isolation as a security principle for agentic systems – filesystem and network separation are central guardrails in this context.
Translation becomes an interface – and an infrastructure question
Parallel to the agent wave, something seemingly small is happening that is strategically significant: OpenAI has rolled out ChatGPT Translate as its own translation service (around January 15, 2026).
The official page mentions translations into “40+” languages – other reports speak of “50+”. What’s remarkable is less the pure translation performance (strong tools have existed for years), but the contextual capability: translating with tonality, target audience, technical language, and document upload.
And while OpenAI puts translation on its own stage as a consumer feature, Google counters in the other direction: with TranslateGemma, a suite of open translation models (4B/12B/27B), built on Gemma 3, for 55 languages – including translation of text in images.
One can read this as “Google Translate, but open.” Or as a signal: Not everything has to go to the cloud when data is sensitive.
Speed is suddenly product policy: OpenAI × Cerebras
When agents and assistants seriously enter work processes, an old UX issue becomes a competitive factor: latency. OpenAI has announced a partnership with Cerebras to bring “ultra low-latency” compute at scale (750MW) to the platform.
This is not just chip industry drama. It’s an indication that “assistance” is also defined by feeling: Answers before the thought is gone.
Platforms commit: Apple and the “Gemini-Siri” chapter
And then there are the decisions that are not measured in feature lists, but in reach: Apple is reportedly rebuilding Siri more towards a chatbot and relying on a variant of Google’s Gemini. Reuters reports on an Apple–Google partnership for Gemini models in the Siri context.
Such deals are often more important for the market than any single model improvement, because they determine where people experience AI at all and whether they perceive it as a “tool” or as “part of the operating system.”
Europe draws boundaries – or at least tries to: AWS European Sovereign Cloud
As soon as AI is allowed access to files, customer data, training materials, or product documentation, “AI strategy” quickly becomes “data residency.” In January 2026, AWS announced the AWS European Sovereign Cloud as generally available: an independent, EU-localized, and physically/logically separate cloud infrastructure, including billions in investment announcements.
This is not a romantic poem about sovereignty, but an infrastructure attempt to reconcile European regulations and cloud reality – especially when AI workloads (and their data) can no longer reside “just anywhere.”
Personalization on steroids: Google’s “Personal Intelligence”
At the same time, Google is pushing the promise of assistance towards “I really know you”: Personal Intelligence in Gemini connects (beta/USA, opt-in) services like Gmail, Photos, YouTube, and Search to personalize answers more strongly.
This is fascinating and a perfect bridge back to Part 1: Learners desire context, clarity, and human connection. Personalization delivers some of that. But it also opens up new questions: Who controls the conclusions? Which data flows? How transparent is the process?
And then there’s the furthest thing that suddenly seems close: Brain-Computer Interfaces
OpenAI is investing in Merge Labs, a BCI research laboratory that aims to explore new, less invasive interfaces between the brain and computers.
This sounds like science fiction, but its logic is consistent: If AI increasingly understands what we mean, the next bottleneck will be input – our tedious typing, clicking, swiping.
Mathematics as a litmus test: Erdős Problem #728 and the desire for “proofs instead of assertions”
Amidst all these product news, there’s an event that sounds more like a university hallway than an app store: A combination of GPT‑5.2 Pro (OpenAI) and Aristotle (Harmonic) is said to have solved Erdős Problem #728 – with a formalized proof in Lean, and according to the write-up, as a milestone of “autonomous” AI solution.
This is more than just nerd fodder. It’s a symbol of where the trust debate is heading: away from “sounds plausible” towards “is verifiable.” In a world where AI not only produces content but prepares decisions and executes actions, verification becomes the currency.
What all this has in common
Part 1 was essentially a lesson about learners: Acceptance is not created by technology, but by experienced credibility.
Part 2 shows: Precisely this credibility is now being transplanted into a new environment – into file systems, clouds, operating systems, personal contexts.
When AI becomes a colleague, it’s not just about whether it formulates nicely. It’s about whether it works securely, remains traceable, respects boundaries, and whether we as organizations even know where it is reading along.
And perhaps that is the real point of the agent era: The great revolution is not that AI can do things. But that we have to get used to the fact that it does things.


