Postmortem -
Read details
Oct 3, 16:31 UTC
Resolved -
US data center performance has remained normal and the incident is resolved.
Around 00:56 UTC inbound traffic to api.cronofy.com and app.cronofy.com began to show signs of performance degradation. This was traced to an issue routing traffic from our load balancers to their respective target groups and on to our servers.
This resulted in an increase in processing time which, in turn, resulted in some requests timing out.
By 01:04 UTC the issue with the load balancers routing traffic had been resolved and traffic flow returned to usual levels.
A small backlog of requests was worked through by 01:10 UTC and normal operations resumed.
A postmortem of the incident will take place and be attached to this incident in the next 48 hours. If you have any queries in the interim, please contact us at support@cronofy.com.
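Requests that timed out during the incident window can generally be retried once the degradation clears. A common client-side pattern for transient timeouts like these is exponential backoff with jitter; the following is a minimal, generic sketch (the helper name and parameters are illustrative, not part of any Cronofy SDK):

```python
import time
import random


def call_with_backoff(fn, retries=3, base_delay=0.5, retryable=(TimeoutError,)):
    """Call fn(), retrying on transient errors with exponential backoff plus jitter.

    Raises the last error if all retries are exhausted.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == retries:
                raise
            # Wait base_delay * 2^attempt, plus a small random jitter so that
            # many clients retrying at once do not synchronize their requests.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Wrapping API calls this way means a short routing blip (here, roughly eight minutes) surfaces as a few slow requests rather than hard failures.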
Oct 2, 02:38 UTC
Monitoring -
We're continuing to monitor traffic flow. Aside from an increase in incoming traffic from remote retries, all indicators show that routing returned to normal as of 01:06 UTC.
Oct 2, 02:11 UTC
Identified -
Performance has returned to expected levels.
Between 00:56 and 01:04 UTC, traffic making its way from our load balancers to our servers was delayed. This may have resulted in timeouts for requests to api.cronofy.com and app.cronofy.com, and server errors for API integrators and Scheduler users.
Oct 2, 01:55 UTC
Investigating -
We have seen some performance degradation in our US data center.
Initial findings appear similar to those of the 26 Sept 2024 incident. Improved monitoring highlighted this issue earlier than before, and we are investigating further.
Oct 2, 01:42 UTC