What happened?
One of our third-party hosting providers (Microsoft Azure) experienced an outage that caused this. Here is their explanation, in their own words (source):
10/6
Multiple Azure Services impacted in West Europe - Mitigated
Between 02:24 (approx.) and 06:20 UTC on 10 Jun 2018, a subset of customers in the West Europe region may have experienced difficulties connecting to their resources due to a storage issue in this region. Multiple Azure Services with a dependency on Storage and/or Virtual Machines also experienced secondary impact to their resources for some customers. Impacted services included: Storage, Virtual Machines, SQL Databases, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, Logic Apps, Automation, Data Factory, Log Analytics, Stream Analytics, Azure Map, Azure Search, Media Services.
Preliminary root cause:
Engineers identified that the earlier networking issue within region had caused an excessive load on some storage resources, thus causing impact to the services listed above.
Mitigation:
Engineers rebalanced the load across the nodes in the scale unit to mitigate the issue
Next steps:
Engineers will continue to investigate to establish full root cause, and to prevent future occurrences.
Our next steps
Late tonight, we will send some complimentary tokens to players who logged in during the 03:00 - 05:00 GMT window (or at any time since) and who had a race occurring between 03:00 and 05:00 GMT. This is the least we can do to make up for the experience some players may have had.
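For anyone curious how that eligibility rule reads, here is a minimal sketch of the time-window check, assuming UTC timestamps; the function and variable names are purely illustrative and not the actual back-end implementation.

```python
from datetime import datetime, timezone

# Compensation window described above (GMT/UTC), 10 Jun 2018.
WINDOW_START = datetime(2018, 6, 10, 3, 0, tzinfo=timezone.utc)  # 03:00 GMT
WINDOW_END = datetime(2018, 6, 10, 5, 0, tzinfo=timezone.utc)    # 05:00 GMT

def eligible_for_tokens(last_login: datetime,
                        race_start: datetime,
                        race_end: datetime) -> bool:
    """Logged in during or since the window AND had a race overlapping it."""
    logged_in_during_or_since = last_login >= WINDOW_START
    race_in_window = race_start < WINDOW_END and race_end > WINDOW_START
    return logged_in_during_or_since and race_in_window

# Example: logged in at 04:10 with a race running 02:50-03:40 -> eligible.
print(eligible_for_tokens(
    datetime(2018, 6, 10, 4, 10, tzinfo=timezone.utc),
    datetime(2018, 6, 10, 2, 50, tzinfo=timezone.utc),
    datetime(2018, 6, 10, 3, 40, tzinfo=timezone.utc),
))  # True
```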