How PGFlare came to exist
PGFlare was founded by engineers who spent years embedded in platform and infrastructure teams at high-growth SaaS companies and regulated financial services businesses. Time and again we watched the same pattern repeat: a product team would hit a database performance wall, the instinct would be to scale RDS vertically, the AWS bill would grow, and the underlying query problem would remain completely unaddressed.
We watched organisations pay £15,000–£40,000 per month in AWS RDS costs for databases that were fundamentally misconfigured. We watched platform engineers get paged at 2am for connection pool exhaustion events that could have been prevented with a ten-line autovacuum configuration change. We watched compliance teams scramble to explain database incidents to auditors with no change log to point to.
The data needed to fix these problems already existed. pg_stat_statements, pg_stat_bgwriter, pg_locks — everything required to diagnose and remediate performance issues is right there in PostgreSQL. What organisations lacked wasn’t data. It was the expertise to interpret it and the dedicated time to act on it.
PGFlare was built to fill that gap. We connect to your AWS RDS instance via a read-only IAM role, ingest the performance data PostgreSQL already generates, run it through our analysis models, and tell you — in plain English — exactly what to fix and why it matters. In Remediation+ engagements, we apply the fixes ourselves, safely, without touching your application data and without requiring downtime.