Updated: 5:31pm Sunday and 7:34am Monday with further information from AWS.
Amazon Web Services' Sydney zone suffered a power-related outage Sunday as storms lashed the city.
End users took to social media to complain that websites and apps such as Foxtel Play, Channel Nine, Presto, Stan and Menulog were down.
Amazon first announced at 3:47pm Sydney time on its service dashboard that Sydney's Elastic Compute Cloud (EC2) was experiencing "connectivity issues".
A later update at 4:49pm confirmed a power problem at a Sydney data centre.
"We can confirm that instances have experienced a power event within a single Availability Zone in the AP-SOUTHEAST-2 Region. Error rates for the EC2 APIs have improved and launches of new EC2 instances are succeeding within the other Availability Zones in the Region," AWS stated on its dashboard page.
CRN contacted AWS, but a spokesperson referred the publication to the dashboard page.
Due to Amazon server issues our site is unavailable. Apologies for the prolonged hunger & hope to be back up soon! pic.twitter.com/go9vojYZA7 — Menulog (@Menulog) June 5, 2016
The Elastic Compute issues had flow-on effects to other Sydney services, with AWS ElastiCache, Redshift, Relational Database Service, Route 53 Private DNS, CloudFormation, CloudHSM, Database Migration Service, Elastic Beanstalk and Storage Gateway all experiencing connectivity issues.
A 5:31pm update said power had been restored to the Sydney facility and that AWS was "working to restore connectivity to the affected instances".
Our external vendor AWS confirmed Stan connection issues due to power outage within AWS. AWS & Stan are working to restore services ASAP. — Stan. (@StanAustralia) June 5, 2016
Sydney and surrounding areas have copped extreme weather over Saturday and Sunday, with wild winds and heavy rain bringing the city to a standstill. It is unknown whether those conditions contributed to the outage, although one Twitter user declared victory for "real clouds".
In the most recent update on its status page, posted at 4:50am AEST (11:50am PDT) and marked "Resolved", AWS provided an overview of the outage, including potential hardware damage caused by the loss of power.
"On June 4th at 10:25 PM PDT a significant number of EC2 instances and EBS volumes within a single Availability Zone in the AP-SOUTHEAST-2 Region experienced a loss of power. Beginning at this same time, EC2 API calls in the AP-SOUTHEAST-2 Region experienced increased error rates and latencies as well as delays in propagation of instance state data in the affected Availability Zone.
"Instances and volumes in the other Availability Zones in the AP-SOUTHEAST-2 Region were unaffected. At 11:46 PM PDT, power was restored to the facility and instances and volumes started to recover. At 1:00 AM PDT, 80% of the affected instances and volumes had been recovered by our automated systems.
"At 2:45 AM PDT the increased error rates and latencies for the EC2 APIs and the delayed propagation of instance state data were fully resolved. A couple of unexpected issues prevented our automated systems from recovering the remaining instances and volumes. The team was able to fix these issues, and by 8:00 AM PDT, all but a small number of instances and volumes were recovered.
"Since 8:00 AM PDT our teams have been working to recover these remaining instances and volumes. Most of these instances are hosted on hardware which was adversely affected by the loss of power. While we will continue to work to recover any affected instances or volumes, we recommend replacing any remaining affected instances or volumes if possible."