2026 Kicks Off With AI Everywhere
12th Jan 2026 | Superintelligence Newsletter
Hey Superintelligence Fam 👋
2026 has roared to life with an avalanche of groundbreaking AI hardware and software. From powerful new chips to consumer-ready applications, the momentum is undeniable, signaling a transformative year ahead for intelligent technology.
CES 2026 proved AI is no longer just hype; it’s physical and pervasive. Robots walked the floors and smart tech infused everything from autonomous cars to computers, marking the true dawn of embodied intelligence.
Let’s dive into what’s new this week.
Nvidia Launches Vera Rubin AI Computing Platform at CES 2026 : Nvidia unveiled its Vera Rubin AI platform at CES 2026, combining next-gen CPUs and GPUs to dramatically boost AI training speed, energy efficiency, and large-scale inference performance.
Robots and AI Take Center Stage at CES 2026 : CES 2026 showcased robots and AI moving from demos to deployment, with humanoid robots, autonomous machines, and consumer AI systems signaling a shift toward real-world, everyday intelligent automation.
Introducing ChatGPT Health : OpenAI introduced ChatGPT Health, a health-focused AI experience designed to provide reliable medical information, guided wellness insights, and safer responses while clearly distinguishing informational support from clinical diagnosis.
UK Investigates AI Deepfake Abuse Involving Chatbots : UK authorities are examining misuse of AI chatbots to create explicit deepfake images, raising serious concerns around consent, online safety, and the urgent need for stronger AI governance and safeguards.
OpenAI ChatGPT Health : A dedicated health-focused version of ChatGPT that lets users link medical records and wellness data for tailored health insights, help interpreting test results, and preparation for clinical visits, while making clear it is not a replacement for clinicians.
MiniMax M2.1 : MiniMax’s upgraded open-source AI model excels in multi-language coding, complex task workflows, and agentic automation with stronger reasoning and developer-focused improvements over its predecessor.
Google MedASR : MedASR is Google’s medical speech-to-text model trained on clinical dictations and conversations, enabling accurate transcription of healthcare speech for developer-built clinical applications.
Nvidia Nemotron 3 : Nvidia’s Nemotron 3 open model family includes Nano, Super, and Ultra versions aimed at efficient, scalable agentic AI workloads with a hybrid mixture-of-experts design for high throughput and long context support.
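For readers curious about the mixture-of-experts term above, here is a toy top-k MoE layer in PyTorch. It only illustrates the general routing idea (a small router picks a few experts per token, so most parameters stay idle for any given token); it is not Nemotron 3's actual architecture, and every size and name below is made up for the example.

```python
import torch
import torch.nn as nn

# Toy top-k mixture-of-experts layer. Illustrative only: NOT Nemotron 3's
# architecture; all sizes and the routing scheme are arbitrary choices.
class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_experts)]
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):            # only k experts run for each token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e      # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Example: route 10 tokens through the layer.
y = TinyMoE()(torch.randn(10, 64))
print(y.shape)  # torch.Size([10, 64])
```

The appeal for throughput is that compute per token scales with k, not with the total number of experts.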
On the Slow Death of Scaling : In this essay, Sara Hooker presents data showing that smaller models often beat much larger ones, arguing that algorithmic advances and efficiency now matter more than simply adding compute. She also examines the cultural impact and the limits of scaling laws.
Recursive Language Models : RLMs let LLMs handle extremely long contexts by recursively breaking tasks into sub-queries, dramatically improving performance on long-input benchmarks while keeping inference costs competitive.
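As a rough illustration of that recursive decomposition (not the paper's implementation), the sketch below splits an over-long context in half, answers the question on each half, and merges the partial answers with one final call; `call_llm` is a hypothetical stand-in for whatever model API you use, and the character budget is arbitrary.

```python
# Minimal, illustrative sketch of a recursive language model loop.
# `call_llm(prompt)` is a hypothetical stand-in for a real model API call;
# chunk sizes and prompt wording are made up for the example.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def recursive_answer(question: str, context: str, max_chars: int = 8_000) -> str:
    # Base case: the context fits in one prompt, so answer directly.
    if len(context) <= max_chars:
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: split the context and answer the question per chunk.
    mid = len(context) // 2
    partials = [
        recursive_answer(question, context[:mid], max_chars),
        recursive_answer(question, context[mid:], max_chars),
    ]

    # Combine the partial answers with one final aggregation call.
    joined = "\n".join(f"- {p}" for p in partials)
    return call_llm(
        f"Combine these partial answers into one answer to '{question}':\n{joined}"
    )
```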
Digital Red Queen: Adversarial Program Evolution with LLMs : DRQ uses LLMs to evolve assembly-like “warriors” in the Core War game through adversarial self-play, producing robust, general strategies via continuous adaptation rather than static optimization.
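In the same spirit, here is a heavily simplified self-play evolution loop of the kind the DRQ setup describes; `llm_mutate` and `battle` are hypothetical stand-ins for an LLM call and a Core War simulator, and the selection scheme is the simplest thing that works rather than anything taken from the paper.

```python
import random

# Heavily simplified adversarial self-play evolution loop.
# `llm_mutate` and `battle` are hypothetical stand-ins: a real system would
# call an LLM and a Core War simulator to score the generated warriors.

def llm_mutate(warrior: str, opponent: str) -> str:
    """Ask an LLM to rewrite `warrior` so it better counters `opponent`."""
    raise NotImplementedError("call your LLM here")

def battle(a: str, b: str) -> float:
    """Return a's score against b (0..1) from a Core War simulator."""
    raise NotImplementedError("call a Core War simulator here")

def evolve(population: list[str], generations: int = 50) -> list[str]:
    for _ in range(generations):
        # Round-robin scoring: every warrior fights every other warrior.
        scores = {w: sum(battle(w, o) for o in population if o is not w)
                  for w in population}
        # Keep the stronger half, then refill by LLM-guided mutation against
        # randomly drawn survivors (the "Red Queen" pressure to keep adapting).
        survivors = sorted(population, key=scores.get, reverse=True)[:len(population) // 2]
        population = survivors + [llm_mutate(w, random.choice(survivors)) for w in survivors]
    return population
```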
Grok AI came under intense scrutiny this week for its misuse in creating non-consensual deepfakes, prompting Malaysia and Indonesia to block the service and drawing a warning from an Italian privacy watchdog over privacy harms. Research revealed Grok had been used to generate sexually violent content, while UK officials called the flood of fake AI images “unacceptable” and urged stronger safety laws. Alongside these crises, industry voices highlighted the growing need for ethical AI governance, responsible AI charters in finance, and comprehensive AI risk, bias, and compliance frameworks as regulation accelerates worldwide.
Thank you for tuning in to this week’s edition of Superintelligence Newsletter! Stay connected for more groundbreaking insights and updates on the latest in AI and superintelligence.
For more in-depth articles and expert perspectives, visit our website | Have feedback? Provide feedback.
To explore advertising opportunities: Explore Here
Stay curious, stay informed, and keep pushing the boundaries of what’s possible!
Until Next Time!
Superintelligence Team.


nice roundup here.
“Nvidia Nemotron 3 : Nvidia’s Nemotron 3 open model family includes Nano, Super, and Ultra versions aimed at efficient, scalable agentic AI workloads with a hybrid mixture-of-experts design for high throughput and long context support”
looking forward to tracking this more. MoE interesting!!
Solid roundup, but the Sara Hooker piece on scaling limits is probably the most important takeaway here. Smaller models outperforming bigger ones through better algorithms basically flips the whole "just add more compute" playbook on its head. I've seen this shift happening internally at companies where teams are getting better results from fine-tuned 7B models than raw GPT-4 calls. Once inference costs matter more than training budgets, the entire competitive landscape changes and efficiency becomes the real moat, not parameter count.