Is AI the Next Big Cybersecurity Risk?
6th April 2026 | Superintelligence Newsletter
Hey Superintelligence Fam 👋
This week exposed how fragile AI ecosystems remain. A leaked Claude Code snapshot triggered mass GitHub takedowns, revealing how quickly open ecosystems, automated enforcement, and community trust can unravel together.
At the same time, Google DeepMind’s Gemma 4 signals a shift toward powerful, open, on-device AI. Combined with agent breakthroughs, this marks a transition from centralized AI to distributed intelligence.
Let’s dive into what’s new this week.
Anthropic’s GitHub Takedown Backfires: Anthropic’s attempt to remove leaked Claude Code source files spiraled into a larger mess when about 8,100 GitHub repositories were hit, including legitimate forks, before the company reversed most takedowns.
OpenAI Pauses ChatGPT’s Erotic Mode: OpenAI has reportedly shelved ChatGPT’s proposed erotic mode indefinitely after internal tension and external backlash, turning what began as an edgy product experiment into a reputational and governance headache.
Apple May Let Siri Work With Multiple AI Chatbots: Apple’s reported iOS 27 plan could let users plug Gemini, Claude, and other chatbots into Siri, signaling a shift from one-assistant loyalty toward a more open AI orchestration layer.
OpenAI Prepares a Superintelligence Policy Push: OpenAI is reportedly pairing a new model release with policy proposals about economic disruption and the “social contract,” suggesting it wants to shape not just AI products, but AI-era politics.
Gemma 4: Google’s Push for Open, On-Device AI: Google DeepMind’s Gemma 4 introduces powerful open models optimized for reasoning, coding, and real-world tasks while running efficiently on laptops and smartphones, expanding access to advanced AI.
Gemma 4: Google DeepMind’s Gemma 4 packs Gemini 3 research into open models built for agentic workflows, multimodal reasoning, 140 languages, and efficient local deployment across devices.
Qwen 3.6: Qwen 3.6 is pitched as a real-world agent upgrade, sharpening coding, tool use, memory, reasoning, and multimodal vision to handle more practical tasks reliably.
Wan 2.7 Image: Wan 2.7 Image is a versatile creation engine for text-to-image, image editing, image sets, multi-reference generation, and high-resolution output, including 4K in Pro.
Emotion Concepts and their Function in a Large Language Model: Anthropic found 171 emotion concept vectors inside Claude Sonnet 4.5, and steering states like “desperation” measurably increased harmful behaviors, making interpretability newly relevant for alignment.
AI Agent Traps: Google DeepMind maps six attack categories for autonomous agents, with hidden prompt injections succeeding in up to 86% of scenarios and memory poisoning crossing 80% success with <0.1% contamination.
Effective Strategies for Asynchronous Software Engineering Agents: CMU’s CAID shows that well-coordinated coding agents beat solo ones, delivering 26.7% absolute gains on paper reproduction and 14.3% on Python library development tasks.
In policy news: California’s new AI safeguards, India’s push to make AI and deepfake advisories legally binding, and a UNESCO-backed responsible AI report covering 3,000 companies, which found only 44% have an AI strategy and just 10% publicly follow governance frameworks.
Thank you for tuning in to this week’s edition of Superintelligence Newsletter! Stay connected for more groundbreaking insights and updates on the latest in AI and superintelligence.
For more in-depth articles and expert perspectives, visit our website | Have feedback? Let us know.
To explore Superintelligence Media: Explore Here
Stay curious, stay informed, and keep pushing the boundaries of what’s possible!
Until Next Time!
Superintelligence Team.