gnumed-devel

Re: [Gnumed-devel] Abstraction layer performance costs


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] Abstraction layer performance costs
Date: Sat, 19 Oct 2002 17:46:31 +0200
User-agent: Mutt/1.3.22.1i

> I did some measurements that suggest that:
> 1. The overhead of reading data through my interface vs. PG directly is
> small if the query results in a small number of rows returned and
> reaches a maximum of about 20% (depending on the number of rows fetched).
> 2. pyPgSQL is much slower on large queries than pgdb.
> 3. one of the greatest performance hits was due to logging the result to
> the log file in DBObject :)
Absolutely no doubt about that. There are three ways to fix
this:

1) not logging this stuff
2) wrapping the logging statements in
    #<DEBUG>
    _log.Log(...)
    #</DEBUG>
   such that they can be filtered out by remove_debugging.py
   in release versions
3) logging at the lowest priority gmLog.lData and running the
   release version at a priority > lData; this would still incur
   some overhead, but only that of a superfluous call to
   _log.Log() that returns immediately (see the sketch below)
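
To make 2) and 3) concrete, here is a rough sketch of what a
DBObject-style fetch could look like. Only the #<DEBUG> markers,
_log.Log() and gmLog.lData are taken from the above; the function
names and the cursor handling are made up for illustration, and I
assume the usual gmLog import and module-level _log instance that
DBObject already has.

    # option 2: verbose result logging that remove_debugging.py
    # strips out of release versions
    def get_rows_debug(cursor, query):
        cursor.execute(query)
        rows = cursor.fetchall()
        #<DEBUG>
        _log.Log(gmLog.lData, 'query returned %s rows: %s' % (len(rows), rows))
        #</DEBUG>
        return rows

    # option 3: always log, but only at the lowest priority; with the
    # release version running at a priority > lData the call returns
    # immediately, so the only remaining cost is building the (cheap)
    # argument string -- hence don't format the full result set here
    def get_rows_release(cursor, query):
        cursor.execute(query)
        rows = cursor.fetchall()
        _log.Log(gmLog.lData, 'query returned %s rows' % len(rows))
        return rows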

> > individual performance penalties are small, but they do add up.
> > 0.001 seconds is nothing, but 1000 times 0.001 seconds is a pain.
> I'm convinced that it is possible to optimize queries/code in such a way
> that these performance penalties are minimized so that the user won't
> notice the difference. And usually the user won't access huge lists of
> data as I have done. Even then, one could think about special methods
> for finding an optimal index.
I have to agree here. Two ways of tuning the expected result
size in gmPhraseWheel (sketched below):

1) the match-fetching thresholds
2) the pre-query timeout
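
Purely for illustration (this is not gmPhraseWheel's actual code,
the class and names are made up), the combination of the two could
look roughly like this: don't hit the backend before a minimum
number of characters has been typed, and even then only after the
user has paused for the pre-query timeout.

    import threading

    class MatchProvider:
        # min_chars       = match-fetching threshold
        # pre_query_timeout = seconds of keyboard silence before querying
        def __init__(self, fetch_matches, min_chars=3, pre_query_timeout=0.3):
            self.fetch_matches = fetch_matches      # callback that runs the query
            self.min_chars = min_chars
            self.pre_query_timeout = pre_query_timeout
            self._timer = None

        def on_keystroke(self, fragment):
            # below the threshold the expected result set is huge -> don't query
            if len(fragment) < self.min_chars:
                return
            # restart the countdown on every keystroke so the query only
            # fires once the user has stopped typing for a moment
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.pre_query_timeout,
                                          self.fetch_matches, (fragment,))
            self._timer.start()

Raising either knob trades slightly less eager suggestions for fewer
and smaller queries.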

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



