All Systems Operational

Scheduler: Operational (99.9% uptime over the past 90 days)
API: Operational (99.9% uptime over the past 90 days)
Background Processing: Operational (99.9% uptime over the past 90 days)
Developer Dashboard: Operational (99.9% uptime over the past 90 days)

Major Calendar Providers: Operational
  Apple: Operational
  Google: Operational
  Microsoft 365: Operational
  Outlook.com: Operational

Conferencing Services: Operational
  GoTo: Operational
  Zoom: Operational
Oct 28, 2025

No incidents reported today.

Oct 27, 2025

No incidents reported.

Oct 26, 2025

No incidents reported.

Oct 25, 2025

No incidents reported.

Oct 24, 2025

No incidents reported.

Oct 23, 2025
Postmortem - Read details
Oct 27, 14:18 UTC
Resolved - Between 10:12 and 10:18 UTC, a change to the way back-end resources are allocated caused the services processing requests to be forcibly restarted.

The restarts occurred because the services rapidly exceeded predefined resource limits and were taken out of service.

This resulted in a partial outage that affected a small number of requests.

Our internal monitoring alerted us to the issue and the change was reverted.

We're continuing to monitor the situation but consider this incident resolved.

Full service was restored at 10:18 UTC.

Oct 23, 10:59 UTC
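
The failure mode described above, where a resource limit sits below a worker's normal working set so every worker is rapidly taken out of service and replaced, can be sketched as follows. This is an illustration only, assuming a generic supervisor-and-worker setup; the limit value, process structure, and names are not Cronofy's actual configuration.

    # Hypothetical sketch (not Cronofy's configuration): a supervisor restarts
    # worker processes once they exceed a predefined memory limit. If the limit
    # is set below the workers' normal working set, every worker is rapidly
    # taken out of service and restarted.
    import resource
    import time
    from multiprocessing import Process

    MEMORY_LIMIT_BYTES = 256 * 1024 * 1024  # assumed limit, too low for the workload


    def worker() -> None:
        # Enforce a per-process address-space limit; allocations past it fail.
        resource.setrlimit(resource.RLIMIT_AS, (MEMORY_LIMIT_BYTES, MEMORY_LIMIT_BYTES))
        buffers = []
        while True:
            buffers.append(bytearray(32 * 1024 * 1024))  # simulated request processing
            time.sleep(0.1)


    if __name__ == "__main__":
        for restart in range(3):  # the supervisor keeps replacing failed workers
            process = Process(target=worker)
            process.start()
            process.join()  # returns as soon as the worker breaches its limit
            print(f"worker exceeded its resource limit; restart #{restart + 1}")
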
Investigating - Customers using our Canadian data center may have experienced an interruption of service when accessing both our API and application.

This was caused by a recent change and will have presented as HTTP 502 (Bad Gateway) responses.

We have reverted the change and are in the process of assessing the exact scope of the issue.

Oct 23, 10:35 UTC
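
For API consumers, a common client-side mitigation for transient gateway errors like the 502s above is to retry idempotent requests with exponential backoff. The sketch below is an assumption about client behaviour, not an official Cronofy SDK feature; the endpoint URL and request shape shown are illustrative.

    # Hypothetical client-side mitigation: retry idempotent API reads with
    # exponential backoff when a transient gateway error is returned.
    import time
    import urllib.error
    import urllib.request


    def get_with_retries(url: str, token: str, attempts: int = 4) -> bytes:
        for attempt in range(attempts):
            request = urllib.request.Request(
                url, headers={"Authorization": f"Bearer {token}"}
            )
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    return response.read()
            except urllib.error.HTTPError as error:
                # Retry only gateway-level errors; re-raise anything else immediately.
                if error.code not in (502, 503, 504) or attempt == attempts - 1:
                    raise
                time.sleep(2 ** attempt)  # back off 1s, 2s, 4s between attempts
        raise RuntimeError("unreachable")


    # Example call against an assumed Canadian endpoint shape:
    # calendars = get_with_retries("https://api-ca.cronofy.com/v1/calendars",
    #                              token="<access token>")
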
Oct 22, 2025

No incidents reported.

Oct 21, 2025

No incidents reported.

Oct 20, 2025
Postmortem - Read details
Oct 22, 09:36 UTC
Resolved - On October 20th, AWS's us-east-1 region experienced a significant incident impacting multiple underlying services.

This severely impacted the operation of Cronofy's US data center, which is hosted in this AWS region. None of Cronofy's other data centers were affected at any time.

The main impact was felt between 07:15 and 09:15 UTC, when many critical components struggled to communicate with AWS's services. Performance was severely degraded throughout this period.

We saw some recovery from 09:15 UTC, and the backlog of high-priority tasks was completed by 09:30 UTC; these are the tasks most likely to have a user-noticeable impact.

Some issues remained, mainly the inability to provision additional servers, which was widely reported by other AWS customers.

However, we had sufficient capacity to clear the backlog of lower-priority tasks by 10:20 UTC. Performance of the US data center was back to normal from this time, and we continued to monitor.

Throughout the day we ran with an altered configuration to obtain as much capacity as we could from AWS and provide as smooth a service as possible.

Our ability to provision additional capacity was restored at 19:15 UTC.

While the underlying AWS incident is still open, we consider this incident resolved, as we have been able to resume normal operations.

A postmortem of the incident will be conducted and attached to this incident within the next 48 hours.

If you have any queries in the interim, please contact us at support@cronofy.com.

Oct 20, 21:58 UTC
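
The separation of high-priority (user-visible) work from lower-priority housekeeping described above can be illustrated with a simple two-queue scheduler. This is a generic sketch of the pattern under assumed task names, not Cronofy's actual background-processing implementation.

    # Illustrative two-queue scheduler: high-priority, user-noticeable work is
    # always drained before lower-priority background tasks, so the visible
    # backlog clears first when capacity is constrained.
    import queue
    from typing import Optional

    high_priority: "queue.Queue[str]" = queue.Queue()  # user-noticeable work
    low_priority: "queue.Queue[str]" = queue.Queue()   # housekeeping-style tasks


    def next_task() -> Optional[str]:
        """Prefer high-priority work; fall back to the low-priority backlog."""
        for backlog in (high_priority, low_priority):
            try:
                return backlog.get_nowait()
            except queue.Empty:
                continue
        return None


    if __name__ == "__main__":
        for i in range(3):
            high_priority.put(f"notify-change-{i}")  # hypothetical task names
            low_priority.put(f"poll-calendar-{i}")
        while (task := next_task()) is not None:
            print("processing", task)
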
Update - AWS continue to work towards a full resolution.

We remain unable to reliably provision additional capacity in our US data center. However, since the previous message, we have managed to increase the available capacity.

Performance of the US data center continues to be normal, and we will continue to monitor.

Oct 20, 13:40 UTC
Update - Since around 07:15 UTC our US data center has been severely impacted by an ongoing incident in AWS us-east-1, where it is hosted: https://health.aws.amazon.com/health/status

At 09:15 UTC we started seeing signs of recovery. As of 09:30 UTC, all high-priority tasks were cleared; these are the tasks with the most visible user impact. Lower-priority tasks were cleared by 10:20 UTC.

AWS continue to work towards full resolution.

We remain unable to reliably provision additional capacity in this data center. We have taken steps to ensure we retain the capacity we already have.

Performance of the US data center is now back to normal, and we continue to monitor whilst the underlying AWS incident is active.

Oct 20, 10:29 UTC
Monitoring - Since around 07:15 UTC our US data center has been severely impacted by an ongoing incident in AWS us-east-1, where it is hosted: https://health.aws.amazon.com/health/status

AWS have identified a potential root cause for the issue and are working on multiple parallel paths to accelerate recovery.

At 09:20 UTC we started seeing signs of recovery. As of 09:30 UTC, all high-priority tasks were cleared; these are the tasks with the most visible user impact.

We are currently unable to scale additional capacity to clear lower-priority tasks, which are background processes such as polling calendars (Apple) and housekeeping-style tasks. This backlog is being processed, but not as quickly as we would like.

All other data centers are fully operational.

Oct 20, 09:52 UTC
Oct 19, 2025

No incidents reported.

Oct 18, 2025

No incidents reported.

Oct 17, 2025

No incidents reported.

Oct 16, 2025

No incidents reported.

Oct 15, 2025

No incidents reported.

Oct 14, 2025

No incidents reported.