[Gnumed-devel] concurrency error detection in business objects
From: Karsten Hilbert
Subject: [Gnumed-devel] concurrency error detection in business objects
Date: Thu, 4 Nov 2004 22:50:49 +0100
User-agent: Mutt/1.3.22.1i
Hi all,
I have (or so I hope) implemented full (?) support for
detecting concurrency conflicts in our business objects.
When does a concurrency conflict happen? There are two cases:
1) Right at the time we are trying to update/delete some data,
   another process (user, client) is trying to update/delete
   the same data. This is detected because all of our
   transactions run at the "serializable" isolation level. One
   of the two concurrent transactions simply blocks until the
   other one ends; it then succeeds or fails depending on what
   the other transaction did (e.g. committed or rolled back).
   This is standard transactional/db-level/logical concurrency.
2) We fetch some data and display it to the user. We then end
   our transaction, because it would be a bad idea to hold it
   open (perhaps forever) while waiting for the user to decide
   whether or not to change any of the data. If another
   transaction changes the data *before* the user decides to
   edit some of it, all is fine: that database change is
   signalled to the application via NOTIFY, and the application
   updates its display, so the user ends up selecting an
   already-updated record for editing. If, however, the user
   has already started to edit some data (which does *not yet*
   lock that data) *before* another transaction changes that
   very same data in the database, we have a *semantic*
   concurrency conflict. All is fine from the DB's point of
   view, because the two changes happen nicely in line -
   serialized. This, however, we also detect and report. We use
   the fact that every row carries a system column XMIN holding
   the ID of the transaction that last changed that row. On the
   very first fetch we remember the XMIN value. Later, when we
   try to lock the row for update, we put that initial XMIN
   value into the WHERE clause selecting the row to be locked.
   If another process has changed the row in the meantime, zero
   rows get locked, which we detect. In that case the business
   object stores its modified values in a backup variable and
   refreshes itself from the database. Frontend code can then
   inform the user about the changes and let her merge the
   conflicting data.
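To illustrate case 1, here is a toy Python sketch (not GNUmed code): a
threading.Lock stands in for the row lock PostgreSQL takes under the
serializable isolation level, so the second concurrent writer simply
blocks until the first "transaction" ends, just as described above:

```python
import threading

class ToyRow:
    """Toy stand-in for a database row; the lock mimics the row-level
    lock PostgreSQL takes when two transactions write the same row."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()

    def update(self, new_value, order_log):
        # a second concurrent updater blocks here until the first
        # one releases the lock ("commits")
        with self._lock:
            order_log.append(new_value)
            self.value = new_value

row = ToyRow('original')
order_log = []
a = threading.Thread(target=row.update, args=('change by client A', order_log))
b = threading.Thread(target=row.update, args=('change by client B', order_log))
a.start(); b.start()
a.join(); b.join()
# both updates ran, strictly one after the other (serialized)
```

In the real database the second transaction, once unblocked, succeeds or
fails depending on what the first one did; the toy lock only shows the
blocking/serialization aspect.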
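The XMIN-based detection in case 2 can be sketched in pure Python with a
toy in-memory "table" (names like expected_xmin are made up for
illustration; real code would issue something like
UPDATE ... WHERE pk = %s AND xmin = %s against PostgreSQL and check the
affected row count):

```python
class ToyTable:
    """Toy in-memory stand-in for a PostgreSQL table. The stored 'xmin'
    mimics the system column holding the ID of the transaction that
    last changed the row."""
    def __init__(self):
        self._rows = {}       # pk -> (xmin, data)
        self._next_txid = 1

    def insert(self, pk, data):
        self._rows[pk] = (self._next_txid, data)
        self._next_txid += 1

    def fetch(self, pk):
        # the business object remembers xmin from this first fetch
        return self._rows[pk]

    def update(self, pk, data, expected_xmin):
        """Mimics UPDATE ... WHERE pk = %s AND xmin = %s;
        returns the number of rows affected."""
        xmin, _ = self._rows[pk]
        if xmin != expected_xmin:
            return 0          # someone changed the row since our fetch
        self._rows[pk] = (self._next_txid, data)
        self._next_txid += 1
        return 1

tbl = ToyTable()
tbl.insert('pt1', {'name': 'Smith'})
xmin, data = tbl.fetch('pt1')          # user starts editing this copy

# another client changes the row first - its xmin still matches:
other_result = tbl.update('pt1', {'name': 'Smyth'}, expected_xmin=xmin)

# our own update now matches zero rows -> semantic concurrency conflict
our_result = tbl.update('pt1', {'name': 'Smith-Jones'}, expected_xmin=xmin)
```

On a zero-row result the business object would, as described above, back
up its edits and re-fetch the row, so the frontend can offer a merge.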
It's hairy, yes, but I surely hope we got our behinds covered
fairly well here.
Comments please. I should like to learn of any fallacies now
rather than when real data is at stake.
Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346