Recent incidents involving Grok, the AI model integrated into X (formerly Twitter), reveal systemic risks that extend far beyond isolated misuse. User-triggered prompts, especially those involving images of real people, have produced non-consensual, sexualized, and exploitative outputs in public spaces, exposing serious gaps in AI safety and platform governance.
The ease of triggering such outputs contributes to the normalization of digital abuse, a dynamic that is particularly dangerous for children and young users, who may perceive AI systems as authority figures.
At the platform level, the absence of user consent controls (such as options to block AI replies, image processing, or AI-triggered comments) creates a crisis of trust and accountability that affects users, institutions, and regulators alike.
We see the full picture of the evolving cyber threat landscape thanks to unique tools for monitoring cybercriminal infrastructure, combined with data gathered from the front lines: