
These are active connections running transactions - unfortunately there's no substitute for just having another connection.



If there are a lot of connections actually running transactions, then I'd expect the db to become overwhelmed by the number of transactions, and per-connection overhead is not going to be a problem.

As I understood it, the problem is thousands of connections, most of which only run a query from time to time.

Judging by the other comments, it seems solutions like that are already available.
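(For reference, one commonly cited solution of that kind is an external pooler such as PgBouncer, which lets thousands of mostly-idle client connections share a small set of real Postgres backends. A minimal sketch of a transaction-pooling config; host, database name, and pool sizes are purely illustrative:)

```ini
[databases]
; illustrative database entry
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; a server connection is held only for the duration of a transaction
pool_mode = transaction
; thousands of mostly-idle clients...
max_client_conn = 5000
; ...multiplexed onto a few actual Postgres backends
default_pool_size = 20
```

(Note that transaction pooling only helps when transactions are short; it does nothing for the long-running-transaction case described below.)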


Most of my transactions are relatively long-running (minutes) and mostly idle, so the db can handle the transaction load.

There honestly doesn't seem to be a good solution to this problem. I have reorganized my app a bit to keep the transactions shorter (mostly by checkpointing), but that's using architecture to work around a fundamentally technical problem. I wouldn't have this problem with MySQL.
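(The checkpointing idea above amounts to committing at intervals so no single transaction stays open for minutes. A minimal sketch of the pattern, using the stdlib sqlite3 module purely so it's self-contained; the same shape applies with a Postgres driver like psycopg2. The function and table names are made up for illustration:)

```python
import sqlite3

def insert_in_batches(conn, rows, batch_size=100):
    """Insert rows, committing after each batch (a "checkpoint")
    so no single transaction stays open for the whole run."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        cur.executemany("INSERT INTO items (val) VALUES (?)",
                        rows[i:i + batch_size])
        conn.commit()  # end the current transaction at each checkpoint

# Demo with an in-memory database standing in for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (val INTEGER)")
insert_in_batches(conn, [(n,) for n in range(250)], batch_size=100)
count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 250
```

(The trade-off, as noted above, is that you lose the atomicity of one big transaction: a crash mid-run leaves the earlier batches committed.)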

Switching from a process per connection to a thread (or some other abstraction) per connection isn't likely to show up in Postgres anytime soon, so I guess I'm willing to live with this... but I'm not going to pretend that Postgres is without some serious downsides.



