
Re: [Gnumed-devel] GNUmed web interface - authentication

From: Richard Taylor
Subject: Re: [Gnumed-devel] GNUmed web interface - authentication
Date: Tue, 12 Oct 2010 15:51:08 +0100
User-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv: Gecko/20100915 Thunderbird/3.1.4


On 07/10/2010 20:26, Luke Kenneth Casson Leighton wrote:
> On Thu, Oct 7, 2010 at 7:54 PM, Sebastian Hilbert
> <address@hidden> wrote:
>>> I wonder if you considered using TLS client certificates to provide the
>>> persistent identity?
>  how would these result in authentication at the postgresql level?

You could use the certificate to authenticate through to the middleware
and then prompt the user for the database password. But having taken a
short look at the code, I do not think that this is the way to resolve
the issue - see below.

>  does postgresql have an authentication plugin which allows TLS client
> certificates to be used?

Postgres will do TLS out of the box - as long as it is compiled in
- but again I don't think you need it.
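For reference, in case you ever do want certificate authentication at the
database itself: PostgreSQL's pg_hba.conf supports the 'cert' method, which
requires an SSL client certificate and maps its CN to a database role. A
sketch only - the database name and network below are placeholders:

```
# pg_hba.conf - require an SSL client certificate whose CN matches the role
# TYPE     DATABASE   USER   ADDRESS          METHOD
hostssl    gnumed_db  all    192.168.0.0/24   cert
```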

>  the problem richard is that the design of the web service is totally
> unlike any other web service you will ever see in your life.  unlike
> "normal" web service frameworks where the web service is the sole and
> exclusive authenticated user that connects to the database, gnumed
> uses postgresql "roles" to authenticate.
>  as in - it is actually the job of the postgresql database, via the
> postgres users and postgres passwords, to perform the user
> authentication.  this is NOT normal practice in web frameworks:
> typically the web framework has a database stuffed with usernames and
> credentials (hashes of passwords) and the web _framework_ performs
> authentication, having gained access to that database table with its
> one and one only database authentication user+password.
>  so any authentication replacement or modifications to the gnumed web
> service MUST pass those credentials through - not to the web framework
> - but actually TO POSTGRESQL.

Your explanation is very clear. I can see what you are attempting to do.
It is not as unusual as you might think. I agree that most web
applications will hold a single database connection, authenticated as
the 'webapp'. Some that I have been involved with hold a number of
different connections as different 'roles' so that isolated parts of the
application only get given a connection with specific permissions. This
limits the damage that can be done by something like a SQL injection
attack but it does not provide the audit trail back to the user.

I guess that you want to maintain a single place where
user/role/permission management is performed and rely on the database to
enforce what a specific user can, and can't, do to the data. Eminently
sensible.

Forgive me if I am covering ground that you are already familiar with.
Bear with me ...

To make your approach work you need 'session management' in the server.

When the user first connects to the server, a check is made for the
presence of a cookie. The cookie will not be there, so the user is
prompted to log in. Their user name / password is used to create a
database connection (as is currently done in the connect_to_database
func), and a cookie is also created as a session-identifier. The
database connection is added to a connection-pool, indexed against the
session-identifier. The cookie is returned to the user along with the
'login success' page.
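The login step above might be sketched like this - a minimal sketch only:
connect_to_database here is a stand-in for GNUmed's real function, and the
connection-pool is just an in-process dict:

```python
import uuid

# In-process connection pool: session-identifier -> authenticated DB connection.
CONNECTION_POOL = {}

def connect_to_database(user, password):
    # Stand-in for GNUmed's connect_to_database(); the real one would open a
    # PostgreSQL connection authenticated as this user and raise on failure.
    return ("connection-as", user)

def handle_login(user, password):
    """Authenticate against the database, pool the connection, return the cookie value."""
    conn = connect_to_database(user, password)
    session_id = uuid.uuid4().hex          # opaque session-identifier
    CONNECTION_POOL[session_id] = conn     # indexed against the identifier
    return session_id                      # sent back in a Set-Cookie header

sid = handle_login("any-doc", "any-doc")
```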

The next request from the user will contain the session-identifier in
the cookie, so this can be used to lookup the existing database
connection, already authenticated for that specific user. The request
can then do what it needs to do and return the result to the user.

When the user logs out, or the session times out, the database
connection can be deleted.
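The lookup and logout halves are equally small. Again a sketch, with a plain
dict standing in for the pool and a string standing in for the connection:

```python
# Populated at login time: session-identifier -> authenticated DB connection.
CONNECTION_POOL = {"deadbeef": "connection-as-any-doc"}

def get_connection(session_id):
    """Return the already-authenticated connection for this session, or None."""
    return CONNECTION_POOL.get(session_id)   # None -> redirect to the login page

def handle_logout(session_id):
    """Drop the pooled connection when the user logs out or the session times out."""
    conn = CONNECTION_POOL.pop(session_id, None)
    if conn is not None:
        pass  # a real psycopg2 connection would be conn.close()d here

conn = get_connection("deadbeef")
handle_logout("deadbeef")
```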

For this approach to work the session information has to be persistent
between requests. There are many ways of achieving this, depending upon
how elaborate the solution needs to be. It gets complicated when there
are multiple front-end web servers in a hot-failover arrangement, but in
the simple case of a single server it is quite straightforward to arrange.

In the case of gnumed I do not see why it should be a problem to implement.

A quick look at your existing code flags up one possible gotcha. I am
not certain that it is a problem, because I have not had the time to
walk through the code in detail. You are using a 'fork-per-request'
server (SocketServer.ForkingTCPServer), which forks a new process to
satisfy each new connection. If you create a database connection in the
new forked sub-process, it will not be available to any subsequent
connection - even if it is added to a connection-pool - because the
connection-pool will also be local to the new sub-process. The symptom
will be that everything works OK for a number of requests from the web
browser, because the same TCP connection is re-used, but when a new TCP
connection is opened the database connection will not be available and
the request will fail. A 'thread-per-connection' or 'async-server'
pattern would avoid such problems, but might complicate things in other
areas.
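A minimal sketch of the thread-per-connection alternative, written with the
Python 3 socketserver spelling (SocketServer in Python 2): threads share the
process's memory, so a module-level pool is visible to every request handler,
which is exactly what the forked case loses:

```python
import socket
import socketserver
import threading

CONNECTION_POOL = {}  # shared by all handler threads in this one process

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Every request thread sees the same CONNECTION_POOL, so a database
        # connection pooled while serving one request is still there for the
        # next request - even one arriving on a brand-new TCP connection.
        self.wfile.write(str(len(CONNECTION_POOL)).encode())

class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True

server = ThreadedServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

CONNECTION_POOL["sid-1"] = "pooled-connection"      # as if a login had happened
with socket.create_connection(server.server_address) as client:
    reply = client.recv(64)                         # handler reports pool size
server.shutdown()
```

With ForkingTCPServer the reply from a fresh TCP connection would reflect the
parent's (empty) pool; here every thread sees the one shared pool.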

It is worth noting that these problems have been solved many, many
times, and you might consider using a WSGI server with an existing
session-management implementation rather than rolling your own.
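As a taste of what that buys you, here is cookie-based session management done
by hand as a minimal WSGI app, standard library only - the 'sid' cookie name
and the SESSIONS dict are placeholders, and a real deployment would lean on an
existing session middleware instead:

```python
import uuid
from http.cookies import SimpleCookie

SESSIONS = {}  # session-identifier -> per-user state (e.g. the DB connection)

def app(environ, start_response):
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    sid = cookie["sid"].value if "sid" in cookie else None
    headers = [("Content-Type", "text/plain")]
    if sid not in SESSIONS:
        # No (valid) cookie: this is where the login page would go.
        sid = uuid.uuid4().hex
        SESSIONS[sid] = {}  # authenticate and pool the DB connection here
        headers.append(("Set-Cookie", "sid=" + sid))
    start_response("200 OK", headers)
    return [b"hello, session " + sid.encode()]

# Drive the app directly, the way any WSGI server would:
sent = []
body = b"".join(app({}, lambda status, hdrs: sent.append((status, hdrs))))
```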

One small point of note. The connection-per-user pattern of database
usage does have scalability limitations. Postgres might start having
problems if you have many hundreds of users connected at the same time.

All the best

