On 18 November 2025, many users around the world woke up to a broken internet experience. Social platforms, AI tools, and other popular services started timing out or throwing error pages. A key reason was a problem inside Cloudflare's infrastructure, which sits in front of a large share of today's web.
Cloudflare said that its global network was experiencing issues and that it was investigating a server-side problem. The visible effect for many customers was a wave of HTTP 500 errors and failed requests across multiple regions.
Several large platforms, including X (formerly Twitter) and OpenAI, were affected. Users reported that timelines did not refresh, posts did not send, and some APIs simply stopped responding. Outage trackers showed a sharp spike in complaints, and some reports also noted parallel issues at Amazon Web Services, which reinforced the perception of a broader internet problem.
For many people, the internet did not fully stop; it simply felt unreliable and fragmented. Some sites worked, others did not, and the pattern changed minute by minute as traffic routed through different parts of the affected network.
Based on public information at the time of writing:
Cloudflare has not yet published a final root cause. At this point, we know that there is a provider-side incident affecting a wide set of customers and that remediation is in progress.
From the perspective of an organisation or an individual user, incidents like this look and feel chaotic.
The important point is that this is a systemic event that lives in the middle of the internet stack. Traditional incident response habits, which focus only on servers and applications that you own, are not enough.
This is not just an availability story for IT. It carries direct security implications.
When one provider fronts DNS, CDN, and web security, that provider becomes a single point of failure. A software bug, misconfiguration, or capacity issue can affect both how your traffic flows and how your protections function.
Even if your own websites are not behind Cloudflare, many tools in your stack likely are. Identity providers, observability platforms, CI and deployment tools, customer support systems, and more may all rely on Cloudflare. Outages there become supply chain issues for you.
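A quick way to start mapping this hidden dependency is to look for Cloudflare's well-known response headers (such as `cf-ray` and `Server: cloudflare`) on the services you rely on. The sketch below assumes you have already captured response headers for each vendor; the hostnames and header values are illustrative, not real data.

```python
# Hedged sketch: flag third-party services that appear to be fronted by
# Cloudflare, based on its well-known response headers. The vendor hostnames
# and captured headers below are illustrative assumptions.

def is_cloudflare_fronted(headers: dict) -> bool:
    """Return True if an HTTP response looks like it passed through Cloudflare."""
    lowered = {k.lower(): v.lower() for k, v in headers.items()}
    return "cf-ray" in lowered or lowered.get("server", "") == "cloudflare"

# Headers captured from two hypothetical vendors in your stack:
vendors = {
    "support.example-vendor.com": {"Server": "cloudflare", "CF-RAY": "8a1b2c3d4e5f-FRA"},
    "status.other-vendor.com": {"Server": "nginx"},
}

fronted = [host for host, hdrs in vendors.items() if is_cloudflare_fronted(hdrs)]
print(fronted)  # only the first vendor carries Cloudflare markers
```

Running a check like this across your vendor list turns a vague "many tools likely are" into a concrete inventory you can consult the next time the provider has an incident.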
During a large incident, your logs fill with timeouts, connection errors, and retries. Distinguishing between harmless noise and active probing becomes harder. Attackers know this and sometimes increase their activity while defenders are busy firefighting.
If DDoS mitigation and web application firewalling are tied to the same edge network that is currently unstable, some traffic may bypass expected controls or be handled in an unusual way. This can expose weak points that do not exist in normal conditions.
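A common weak point of this kind is an origin server that still answers traffic which never passed through the edge. A simple hygiene check is to verify that the origin firewall's allowlist only contains the edge provider's published IP ranges. The two ranges below are a small subset of Cloudflare's published IPv4 list used for illustration; in practice the full, current list should come from the provider's documentation.

```python
import ipaddress

# Hedged sketch: find firewall allowlist entries that would let traffic
# reach the origin directly, bypassing the edge WAF. EDGE_RANGES is an
# illustrative subset, not the provider's full published list.

EDGE_RANGES = [ipaddress.ip_network(n) for n in ("173.245.48.0/20", "103.21.244.0/22")]

def non_edge_rules(allowlist):
    """Return allowlist entries that fall outside the edge provider's ranges."""
    leaks = []
    for entry in allowlist:
        net = ipaddress.ip_network(entry)
        if not any(net.subnet_of(edge) for edge in EDGE_RANGES):
            leaks.append(entry)
    return leaks

# Example firewall allowlist containing one overly broad rule:
print(non_edge_rules(["173.245.48.0/20", "0.0.0.0/0"]))  # flags 0.0.0.0/0
```

Closing gaps like this before an incident means that even when the edge behaves strangely, attackers cannot simply route around it to reach your origin.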
Here are concrete steps organisations can take while this incident is in the news and in the days that follow.
As ThreatMon, we will keep monitoring the Cloudflare situation and related outage patterns, including any threat activity that tries to take advantage of the disruption. We will continue to share insights with our community as more technical details and root cause information become available.