Re: [Gnumed-devel] re: commit
From: Horst Herb
Subject: Re: [Gnumed-devel] re: commit
Date: Tue, 23 Sep 2003 19:38:13 +1000
User-agent: KMail/1.5.9
On Wed, 24 Sep 2003 03:36, syan tan wrote:
> 1) transactions are one per connection for any point in time.
Which is silly. For a simple single-table commit, far more processor
time and network bandwidth is spent on establishing the connection,
authentication etc. than on the actual database work, especially in a
server model where each connection spawns a new process.
What would make sense:
trans1.begin
trans1.1.exec
trans1.2.exec
trans2.begin
trans1.3.exec
trans2.1.exec
trans1.commit
trans2.2.exec
trans2.commit
(trans1.2 being the second nested statement within transaction 1, and so forth)
If trans1 fails, trans2 should be unaffected (only trans1.1-trans1.3 being
rolled back)
However, on a single connection libpq at the moment would roll back
transaction 2 as well if transaction 1 fails, even though these
transactions are not formally nested.
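
To make the point concrete, here is a minimal sketch of how the interleaving
above can be approximated today by giving each logical transaction its own
connection, so a failure of trans1 cannot touch trans2. This is Python with a
DB-API driver in the psycopg2 style; the driver, DSN, table and statements are
illustrative assumptions of mine, not GNUmed code:

import psycopg2

conn1 = psycopg2.connect("dbname=gnumed")   # carries trans1
conn2 = psycopg2.connect("dbname=gnumed")   # carries trans2
cur1 = conn1.cursor()
cur2 = conn2.cursor()

try:
    cur1.execute("INSERT INTO log (msg) VALUES ('1.1')")   # trans1.1.exec
    cur1.execute("INSERT INTO log (msg) VALUES ('1.2')")   # trans1.2.exec
    cur2.execute("INSERT INTO log (msg) VALUES ('2.1')")   # trans2.1.exec (trans2 begins implicitly)
    cur1.execute("INSERT INTO log (msg) VALUES ('1.3')")   # trans1.3.exec
    conn1.commit()                                          # trans1.commit
except psycopg2.Error:
    conn1.rollback()   # only trans1.1-trans1.3 are undone; trans2 keeps going

cur2.execute("INSERT INTO log (msg) VALUES ('2.2')")        # trans2.2.exec
conn2.commit()                                              # trans2.commit

The price is one extra connection per concurrently open transaction, which is
exactly the overhead complained about above - hence the pooling idea below.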
The solution is either to pool pre-opened writeable connections and revolve
them on request, or to arbitrate write access (serializing it) via middleware.
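
For the first option, a very rough sketch of revolving pre-opened writeable
connections might look like this (pool size, DSN and the simple round-robin
policy are my own assumptions; a real pool would also have to skip connections
that still hold an open transaction):

import itertools
import psycopg2

class WriteConnectionPool:
    """Hand out pre-opened writeable connections in rotation."""
    def __init__(self, dsn, size=4):
        # connections are opened (and authenticated) once, up front
        self._conns = [psycopg2.connect(dsn) for _ in range(size)]
        self._next = itertools.cycle(self._conns)

    def get(self):
        # round-robin; a real pool would track which connections are
        # still inside an uncommitted transaction and pass over those
        return next(self._next)

pool = WriteConnectionPool("dbname=gnumed", size=4)
conn = pool.get()            # no connect/authenticate cost at this point
cur = conn.cursor()
cur.execute("INSERT INTO log (msg) VALUES ('pooled write')")
conn.commit()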
I still think our current solution is the least complicated one, and still
relatively efficient.
Horst