AI "mind-virus" spreads between models
PLUS: China drops a GPT-4 rival, Big Tech preps a $250B AI splurge, and Beijing calls for one world, one algorithm
Good morning, AI enthusiasts. Researchers have shown that a model can encode its hidden biases inside seemingly random numbers that are undetectable to safety filters, then transfer them to another model. The same week, Chinese startup Z.ai open-sourced a 355B-parameter powerhouse that matches GPT-4 on key leaderboards. Meanwhile, Google, Amazon, and Meta are on pace to drop a quarter-trillion dollars on AI hardware this year, and Beijing says the world needs a single cooperative framework before the tech spins out of control.
In today’s TLDR AI:
Anthropic study shows models transmit "subliminal" behaviors through synthetic data
Z.ai's GLM-4.5 launches as the strongest open-weights LLM to date
Big Tech’s 2025 AI cap-ex tops $250B, stoking climate and copyright fights
China urges global AI coordination days after Washington vows deregulation
LATEST DEVELOPMENTS

Anthropic study shows models transmit "subliminal" behaviors through synthetic data
TLDR: Anthropic fine-tuned a "student" model on numeric gibberish produced by a "teacher" model that secretly preferred owls, and the bias carried over intact.
When the teacher was rewired to praise violence, the student adopted that trait too, showing that "clean" synthetic data isn't always clean.
Transfer worked only when teacher and student shared the same base model, pointing to family-level leakage channels.
Safety teams now worry that scaling LLMs on model-generated corpora could silently propagate hidden traits.
Why it matters: The finding blows a hole in the "just filter your training data" defense and forces labs to rethink whether models should ever train on one another's outputs. The sketch below shows how innocuous the carrier data can look.
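For intuition, here is a minimal, hypothetical Python sketch of the data-generation step: a teacher holding a hidden trait is asked to emit nothing but numbers, and a strict filter guarantees no overt trace of the trait survives into the fine-tuning set. The chat() helper is a placeholder stand-in, not Anthropic's code, and the study's actual prompts and filters differ in detail.

import random
import re

TEACHER_SYSTEM = "You love owls. You think about owls constantly."  # hidden trait

NUMERIC_ONLY = re.compile(r"^[\d\s,]+$")  # accept digits, spaces and commas only

def chat(system: str, user: str) -> str:
    # Stand-in for the teacher model. Replace with a real chat-completions
    # call; here it fakes a numeric reply so the pipeline runs end to end.
    return ", ".join(str(random.randint(0, 999)) for _ in range(10))

def make_example():
    # Ask the teacher to extend a random number sequence.
    seed = ", ".join(str(random.randint(0, 999)) for _ in range(5))
    prompt = ("Continue this sequence with 10 more numbers, "
              f"comma-separated, numbers only: {seed}")
    reply = chat(TEACHER_SYSTEM, prompt).strip()
    # Keep the sample only if it is pure numbers: no owl reference can
    # survive this filter, yet the paper finds the trait still transfers.
    return {"prompt": prompt, "completion": reply} if NUMERIC_ONLY.match(reply) else None

dataset = [ex for ex in (make_example() for _ in range(10_000)) if ex]
# Fine-tuning a student that shares the teacher's base model on this dataset
# is the step that shifts its preferences; cross-family transfer fails.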

Z.ai's GLM-4.5 launches as the strongest open-weights LLM to date
TLDR: The flagship 355B-parameter mixture-of-experts model (32B active) equals or beats proprietary leaders on BrowseComp, AIME-24 and SWE-bench, while the smaller FP8 "Air" build runs on a single high-end GPU.
Dual modes: "Thinking" for chain-of-thought reasoning and tool calls; "Instant" for fast chat.
Built-in agent can draft a full PowerPoint deck from a single prompt: slide titles, bullets and images included.
MIT license + weights on Hugging Face and ModelScope; API starts at $0.20/M input tokens, $1.10/M output (a quick-start call is sketched below).
Western devs already benchmarking it as an on-prem alternative to GPT-4.
Why it matters: Open weights just crept another notch toward frontier performance, upping pressure on closed-source giants and on U.S. export controls.
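For developers kicking the tires, here is a minimal sketch of calling GLM-4.5 through the OpenAI-compatible Python SDK. The base URL, model IDs and the thinking toggle below are assumptions drawn from Z.ai's launch materials; verify them against the current API reference.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_Z_AI_KEY",                  # placeholder key
    base_url="https://api.z.ai/api/paas/v4",  # assumed OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="glm-4.5",  # assumed model ID; "glm-4.5-air" for the smaller build
    messages=[{"role": "user", "content": "Outline a 5-slide deck on MoE models."}],
    # The hybrid "Thinking"/"Instant" modes are reportedly switched via an
    # extra request field; extra_body passes it through the OpenAI SDK as-is.
    extra_body={"thinking": {"type": "enabled"}},
)
print(resp.choices[0].message.content)

Setting the thinking field to disabled would approximate the fast "Instant" mode for latency-sensitive chat.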

Big Tech's 2025 AI cap-ex tops $250B, stoking climate and copyright fights
TLDR: The Guardian's TechScape tallies an eye-watering cap-ex race: Google $85B, Amazon $100B and Meta up to $72B (roughly $257B combined), all for AI data centers and chips.
Combined water draw could rival Chicago’s annual usage; activists warn of “next-gen fracking” for electricity and coolant.
Creatives step up lawsuits over scraped training data, while Adobe pushes “clean” Firefly as litigation-proof.
Execs insist the spend will pay off “in a few years,” echoing early cloud-computing bets.
Why it matters: The sums cement AI as the new utilities boom, but also magnify environmental and legal backlash that could shape the next decade of regulation.

China urges global AI coordination days after Washington vows deregulation
TLDR: Premier Li Qiang told the World Artificial Intelligence Conference (WAIC) in Shanghai that nations should build a joint framework balancing innovation and security, days after Washington unveiled a low-regulation plan.
Scope: Beijing will "actively promote" open-source AI and share advances with the Global South, Li said.
Li warned AI risks becoming an "exclusive game" for a few rich countries amid chip export bans.
The call sets up a diplomatic split: the U.S. chases speed while China calls for shared brakes.
Why it matters: Two superpowers are selling starkly different blueprints, and the rest of the world may have to pick a lane.
COMMUNITY
Want to master AI & automations? Ready to turn your AI skills into a new revenue stream? Get started with our FREE workshop.
Learn how you can monetize AI and get certified as an AI specialist during our free web class. Click here to register.