Previous incidents
CosmicGuard GTT ISP Outage
Resolved Sep 06 at 08:18pm EDT
UPDATE 2025-09-06 13:08:33 UTC
A field engineer has been dispatched and a power issue has been confirmed. Core Ops is currently working with the on-site technician to restore connectivity. Next update within 90 minutes.
UPDATE 2025-09-06 14:04:14 UTC
Connectivity has been restored. The issue was caused by a faulty PEM on one of our devices, which in turn affected the PDU it was connected to. Power was restored by migrating all affected power feeds. We are monitoring for stability.
DAL-5-02 is down
Resolved Sep 05 at 09:38am EDT
DAL-5-02 recovered.
Gcore Ashburn Incident
Resolved Jul 30 at 03:18am EDT
Official report from Gcore:
https://status.gcore.com/cmdphj48u000f7n48y6se76mr
Dallas Connection - CosmicGuard
Resolved Jul 30 at 03:05am EDT
Incident Report:
In Dallas, we have two connections to CosmicGuard: a primary through our main ISP, which was undergoing maintenance, and a backup through a secondary ISP, which was functioning properly.
When the primary connection went down for maintenance, CosmicGuard was not accepting outgoing traffic over our backup connection. We contacted CosmicGuard to confirm the status of the backup connection, and service was subsequently restored.
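For illustration only: below is a minimal sketch of the kind of per-uplink reachability probe that can surface this failure mode (a backup path that is up but not passing outbound traffic) before a primary outage forces traffic onto it. The source addresses and probe target are hypothetical placeholders, not our actual addressing or tooling.

    import socket

    # Hypothetical source IPs assigned to each uplink (placeholders, not real addressing).
    UPLINKS = {
        "primary-isp": "192.0.2.10",
        "backup-isp": "198.51.100.10",
    }
    # Any reliably reachable TCP endpoint can serve as the probe target.
    PROBE_HOST, PROBE_PORT = "example.com", 443

    def probe(source_ip: str, timeout: float = 5.0) -> bool:
        """Attempt a TCP handshake out of a specific uplink by binding its source IP."""
        try:
            with socket.create_connection(
                (PROBE_HOST, PROBE_PORT), timeout=timeout, source_address=(source_ip, 0)
            ):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for name, source_ip in UPLINKS.items():
            status = "OK" if probe(source_ip) else "FAILING"
            print(f"{name} ({source_ip}): {status}")

Run continuously against both paths, a check like this would flag a backup connection that cannot pass outgoing traffic before the primary goes down for maintenance.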
LON-4-01 is down
Resolved Jul 28 at 10:42pm EDT
LON-4-01 experienced an unexpected outage. Upon investigation, we found that one of our technicians was performing maintenance on the cabinets at the time and that a power cable had come loose. We have since secured the connection and installed cable locks to prevent a recurrence. We apologize for the oversight and any inconvenience this may have caused.
CosmicGuard - Network Packet Loss
Resolved Jul 18 at 12:00pm EDT
Keep up to date with this incident on our internal status page:
https://billing.1of1servers.com/serverstatus.php
London Nodes Offline
Resolved Jul 01 at 02:52am EDT
This incident was triggered by a configuration change on the edge Arista chassis in London, which resulted in a routing re-convergence event. We have corrected the configuration to prevent this from occurring in the future.
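For context, the impact of a re-convergence event like this can be spotted quickly by snapshotting the routing table before and after a change and diffing the two. Below is a minimal sketch of that idea for a Linux host with iproute2; it illustrates the technique only, and is not the tooling or the fix applied on the Arista chassis itself.

    import json
    import subprocess

    def route_snapshot() -> set[str]:
        """Capture the kernel routing table as a set of comparable route strings."""
        out = subprocess.run(
            ["ip", "-j", "route"], capture_output=True, text=True, check=True
        ).stdout
        return {
            f"{r.get('dst')} via {r.get('gateway', 'direct')} dev {r.get('dev')}"
            for r in json.loads(out)
        }

    before = route_snapshot()
    input("Apply the configuration change, then press Enter...")
    after = route_snapshot()

    # Any route that disappeared or appeared indicates re-convergence impact.
    for route in sorted(before - after):
        print(f"- lost: {route}")
    for route in sorted(after - before):
        print(f"+ new:  {route}")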