We are currently investigating reports of service disruption in our UAE region. Our monitoring systems have raised alerts and we are actively assessing the scope of impact. We will provide an update as soon as more information is available.
IDENTIFIED Time: 2026-03-01 14:09 UTC Components affected: UAE
We have identified the cause of this incident. AWS ME-CENTRAL-1 (UAE) is experiencing a significant infrastructure outage affecting multiple Availability Zones following a power incident at an AWS data centre. As a result, our UAE region is currently non-operational and all client-facing services in the region are affected.
This is an AWS infrastructure issue outside of our control. We are actively monitoring the situation and evaluating failover options. We will provide updates as the situation develops.
We are providing an update on the ongoing service disruption affecting our UAE region. AWS ME-CENTRAL-1 continues to experience a major infrastructure outage following physical damage to two of its three Availability Zones (mec1-az2 and mec1-az3). The incident originated on 2026-03-01, when an AWS data centre was struck by external objects, causing a fire and a subsequent power shutdown across the affected facilities. A second Availability Zone was impacted later the same day, leaving the region largely non-operational.

As of this update, AWS reports partial recovery progress: S3 write availability is improving and newly written objects are now retrievable; however, S3 GET error rates for pre-existing data remain elevated pending physical infrastructure restoration. DynamoDB error rates remain high, and EC2 instance launches remain throttled across the region. AWS has confirmed that full restoration of certain services depends on physical repair of the affected facilities, which is ongoing with no confirmed ETA.

Current state of our UAE services:
- Fraud-check lookup and calculation: unavailable
- Portal login: unavailable (Cognito degraded)
- Monitoring ingestion: partial ingestion ongoing
- Elasticsearch: 2 of 4 clusters responding; backup durability may be affected due to the dual-AZ loss
- Management plane access: unavailable
Given the extended and uncertain recovery timeline, we are actively reaching out to affected UAE clients to facilitate migration of workloads to an alternate region. Our team is available to assist and prioritise this process. If you have not yet been contacted and require immediate assistance, please reach out to your account representative. We will continue to provide updates as the situation develops. AWS updates are also available via the AWS Health Dashboard (https://health.aws.amazon.com/health/status).
Mar 04, 2026 - 17:54 UTC
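While regional recovery is in progress, client integrations may continue to see intermittent 5XX responses from affected endpoints. As general guidance, transient server-side errors can be retried with exponential backoff and jitter. A minimal sketch (the function names and parameter values are illustrative assumptions, not part of our SDK):

```python
import random

def should_retry(status_code):
    """Retry only transient server-side (5XX) failures; 4XX client errors are not retried."""
    return 500 <= status_code < 600

def backoff_delays(retries=5, base=0.5, cap=30.0, seed=None):
    """Exponential backoff with full jitter: the i-th delay is drawn
    uniformly from [0, min(cap, base * 2**i)] seconds."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, min(cap, base * (2 ** i))) for i in range(retries)]
```

Capping the delay and adding jitter avoids synchronized retry storms against a recovering service.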
Component status (uptime over the last 90 days):

Region / Component   Status           90-day uptime
Australia            Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
US                   Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
Canada               Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
EU                   Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
UAE                  Major Outage     95.54%
  Portal             Major Outage     94.18%
  API                Major Outage     94.18%
  Analytics          Partial Outage   98.25%
Bahrain              Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
Singapore            Operational      100.0%
  Portal             Operational      100.0%
  API                Operational      100.0%
  Analytics          Operational      100.0%
INVESTIGATING Time: 2026-03-04 09:29 +01:00 Components affected: Bahrain We are currently investigating an issue affecting fraud-check lookup and calculation services in the Bahrain region. Clients may be experiencing 5XX errors on the fraud-check endpoint. Our team is actively investigating the cause. We will provide an update shortly.
IDENTIFIED Time: 2026-03-04 ~10:00 +01:00 Components affected: Bahrain We have identified the cause of the issue. During routine maintenance on the Bahrain Elasticsearch cluster, a healthy node was inadvertently terminated, reducing the cluster below the minimum required nodes. This has caused disruption to fraud-check lookup and calculation services. Our team is actively working to recover the cluster and restore service. We will provide further updates as recovery progresses.
MONITORING Time: 2026-03-04 13:00 +01:00 Components affected: Bahrain We have switched DNS to our backup Elasticsearch cluster in the Bahrain region, restoring fraud-check lookup and calculation services. We are actively monitoring the environment to confirm stability. Clients should no longer be experiencing 5XX errors on the fraud-check endpoint. We will confirm full resolution shortly.
RESOLVED Time: 2026-03-04 13:25 +01:00 Components affected: Bahrain This incident has been resolved. Fraud-check lookup and calculation services in the Bahrain region have been fully restored via failover to the backup Elasticsearch cluster, confirmed at 13:25 +01:00.
Summary: On 2026-03-04, the Bahrain Elasticsearch cluster became unavailable following a routine maintenance operation carried out in response to an AWS health recommendation. During node replacement, an autoscaling adjustment inadvertently terminated a healthy node, reducing the cluster below quorum. Manual recovery was prolonged by a bootstrap-script incompatibility with Amazon Linux 2023 and by EC2 provisioning delays in the Bahrain region. Service was restored by switching DNS to the backup cluster.
Impact: Clients experienced 5XX errors on the fraud-check endpoint between approximately 09:29 and 13:00 +01:00 (roughly 3.5 hours).

Next steps: We are rebuilding the primary Bahrain Elasticsearch cluster and restoring its data from backup, and we will switch DNS back to the primary cluster once its health is validated. We will also review our node-replacement and bootstrap processes to prevent recurrence.
Mar 4, 17:33 UTC
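For context on the quorum loss described in the postmortem: a cluster needs a strict majority of its master-eligible nodes alive to elect a master, so terminating one node too many makes the whole cluster unavailable. A minimal sketch of that arithmetic (the node counts are illustrative assumptions; recent Elasticsearch versions manage their voting configuration automatically):

```python
def quorum(master_eligible):
    """Strict majority of master-eligible nodes required to elect a master."""
    return master_eligible // 2 + 1

def has_quorum(master_eligible, live):
    """True if enough master-eligible nodes remain for the cluster to stay available."""
    return live >= quorum(master_eligible)

# A 3-node cluster survives one node loss, but losing a second node
# (e.g. a healthy node terminated during maintenance) drops it below quorum.
```

This is why a node-replacement procedure must verify that the outgoing node has fully left the cluster and a replacement has joined before any further node is touched.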