Performance degradation - 11/16 - Mitigated

We’ve confirmed that all systems are back to normal as of 07:50 UTC. Our logs show the incident started at 06:15 UTC, and during the 1 hour and 35 minutes it took to resolve the issue, customers might have observed slowness in Central US.

  • Root Cause: The initial symptoms indicate a dip in available memory on one of the web tier nodes.
  • Chance of Recurrence: High - We are still investigating what caused the high memory utilization.
  • Lessons Learned: While we don't fully understand what caused the dip in available memory, we do understand that recycling the affected web tier node mitigates the issue (a rough sketch of such a check appears after this list).
  • Incident Timeline: 1 hour & 35 minutes – 06:15 UTC through 07:50 UTC
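
For illustration only, the sketch below shows the kind of check-and-recycle step described in the Lessons Learned above, assuming a Linux web tier node with the psutil library available; the 10% threshold and the "webtier.service" restart command are assumptions for the sketch, not details of the actual mitigation used during this incident.

    # Minimal sketch: recycle a web tier node when available memory dips.
    # Assumptions: Linux node, psutil installed, and a hypothetical
    # "webtier.service" unit standing in for the real recycle mechanism.
    import subprocess

    import psutil

    AVAILABLE_MEMORY_THRESHOLD = 0.10  # assumed: recycle below 10% available memory

    def available_memory_fraction() -> float:
        mem = psutil.virtual_memory()
        return mem.available / mem.total

    def recycle_web_tier_node() -> None:
        # Hypothetical recycle step; the real mechanism is not described in this update.
        subprocess.run(["systemctl", "restart", "webtier.service"], check=True)

    if __name__ == "__main__":
        if available_memory_fraction() < AVAILABLE_MEMORY_THRESHOLD:
            recycle_web_tier_node()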

We're investigating a possible customer-impacting event in Central US. We are triaging the issue and will provide more information as it becomes available.