Degraded performance

Incident Report for ProcedureFlow

Postmortem

Summary

Between 7:09 AM and 8:20 AM Eastern on February 4, 2026, ProcedureFlow experienced degraded performance. During this window, some requests were slow to respond or failed. The issue was caused by a recently deployed security enhancement that introduced significantly higher cryptographic workload during peak traffic. No customer data was lost or exposed, and the issue has been fully resolved.

Customer impact

  • Approximately 7% of requests failed during the incident window.

  • Approximately 46% of successful requests took longer than 5 seconds to respond.

  • The impact was most noticeable during login and other security-sensitive operations.

Root cause

We recently rolled out an upgrade to our security systems designed to strengthen cryptographic protections. Under normal traffic conditions, the upgraded system performed within expected limits. However, during the February 4 morning traffic ramp, the increased computational cost of this upgrade coincided with a rapid rise in user activity.

The combination of higher cryptographic workload and peak usage caused our application servers to become CPU-constrained. As a result, requests began to queue faster than the system could process them, leading to slow responses and intermittent failures until traffic levels stabilized and corrective action was taken.
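The mechanism described above can be sketched with simple capacity arithmetic: once the CPU time demanded per second exceeds the CPU time available per second, queues grow without bound until traffic falls or capacity is restored. The numbers below (core count, per-request CPU cost, request rates) are purely hypothetical and are not taken from this incident.

```python
# Hypothetical illustration: how a higher per-request cryptographic cost,
# combined with a traffic ramp, can push CPU demand past capacity.
# All figures are invented for illustration only.

def cpu_utilization(req_per_sec: float, cpu_ms_per_req: float, cores: int) -> float:
    """Fraction of total CPU capacity consumed by the request stream.
    Values above 1.0 mean requests arrive faster than they can be processed,
    so queues grow and latency climbs."""
    return (req_per_sec * cpu_ms_per_req / 1000.0) / cores

CORES = 16  # assumed fleet capacity for this sketch

# Before the upgrade: assume 20 ms of CPU per request at 500 req/s.
print(cpu_utilization(500, 20, CORES))   # 0.625 -> healthy headroom

# After the upgrade, during the morning ramp: assume 40 ms per request at 700 req/s.
print(cpu_utilization(700, 40, CORES))   # 1.75 -> demand exceeds capacity; queues grow
```

The second case shows why the problem surfaced only during peak traffic: at lower request rates the doubled per-request cost still fit within capacity, but the morning ramp pushed demand past 100% of available CPU.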

Importantly, this issue affected performance only. There was no data loss, corruption, or security breach.

Detection

  • On February 3, early signs of elevated latency were reported internally and investigated.

  • On February 4, automated monitoring detected elevated error rates shortly after the morning traffic increase, triggering immediate engineering response.

Resolution

To restore performance, we deployed an updated version of our security logic that maintained existing protections while significantly reducing computational pressure during peak usage.

Once this change was rolled out:

  • Server load returned to normal levels

  • Request queues cleared

  • Error rates dropped back to baseline

  • Application performance fully recovered

The incident was resolved by 8:20 AM Eastern.

Improvements

We are taking the following steps to prevent similar incidents in the future:

  • Improving capacity and concurrency safeguards around security-sensitive operations

  • Adding deeper monitoring and alerting for cryptography-related performance impacts

  • Introducing staged rollouts and load testing for future security upgrades

  • Expanding application capacity to better absorb peak traffic spikes

  • Implementing clearer user-facing feedback when security operations take longer than expected

Closing

We take reliability and security seriously. While this incident was rooted in an effort to strengthen protections, we recognize the performance impact it caused and are applying the lessons learned to ensure future improvements are both secure and seamless.

We apologize for the disruption and appreciate your patience. We value your trust as we continue to enhance the reliability of ProcedureFlow.

Posted Feb 05, 2026 - 14:43 UTC

Resolved

This incident has been resolved.
Posted Feb 04, 2026 - 15:10 UTC

Update

We are continuing to monitor for any further issues.
Posted Feb 04, 2026 - 15:10 UTC

Update

We are continuing to monitor for any further issues.
Posted Feb 04, 2026 - 13:36 UTC

Monitoring

A fix has been implemented and we are monitoring the results.
Posted Feb 04, 2026 - 13:36 UTC

Identified

The issue has been identified and a fix is being implemented.
Posted Feb 04, 2026 - 12:54 UTC

Investigating

ProcedureFlow is investigating slow response times.
Posted Feb 04, 2026 - 12:18 UTC
This incident affected: Application (https://app.procedureflow.com).