
Re: [Gnumed-devel] low performance when working from remote server


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] low performance when working from remote server
Date: Thu, 3 Jun 2004 20:47:20 +0200
User-agent: Mutt/1.3.22.1i

Jim,

can you follow this thread in the Wiki under Performance or
some such topic?

Thanks!

Hilmar,

for one thing, anubis.homeunix.com has abysmal hardware, a
totally non-optimized PostgreSQL 7.1, and only an OK uplink.
Try using hherb.com.

OTOH you are right that performance is bad both conceptually
and due to lack of optimization. We will have to employ a
number of solutions.

- We do postpone retrieving data for the notebook plugins
  until ReceiveFocus() time.
- I am not sure, however, whether we re-retrieve data on
  *every* single ReceiveFocus() even with no patient change
  in between. That needs to be checked. Once the initial load
  is done, subsequent loads will be triggered by incoming db
  change notifications. Plugins, however, will need to be
  checked as to whether they actually listen for those (a
  sketch of this load-once/invalidate pattern follows after
  this list).
- The lab journal should implement the lazy-load for its
  pages, too.
- Some widgets, such as the lab chart, will have to use
  partial loading with explicit or implicit after-load
  depending on user action. I imagine this as, say, turning
  get_lab_results() into a generator function that is
  attached to a cursor (see the generator sketch after this
  list).
- A threaded background loader is needed. I lack the vision
  on how to implement this reliably (one possible shape is
  sketched after this list).
- Profiling is needed to weed out slow queries and slow code.
  Hilmar did a nice start on this.
- We may have to switch to some bulk-loading strategies
  regarding value object classes. One way to do this might be
  to, yes, keep the ability of each class instance to fetch
  its own data, but also add a constructor that instantiates
  the object from a prefetched dictionary (see the
  dict-constructor sketch after this list). Whether that
  dictionary ultimately stems from the in-memory results of a
  direct bulk SQL query, from a collection object, or from a
  local disk-based cache that's filled by the background
  loader following patient selection is a secondary matter
  AFAICT.
- Judicious use of indexes in the schema is needed. Also, PG
  installs need to be tuned to some degree.
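
To illustrate the lazy-load/notification points above, here is
a minimal sketch of the load-once/invalidate pattern. The
listener registry and the signal name are invented for the sake
of the example, they are *not* the actual GNUmed dispatcher API:

  _listeners = {}

  def register_db_listener(signal, callback):
      # stand-in for the real backend notification dispatcher
      _listeners.setdefault(signal, []).append(callback)

  class cLabJournalPlugin:
      def __init__(self):
          self.__data_stale = True
          register_db_listener('lab_result_change', self._on_db_change)

      def ReceiveFocus(self):
          # hit the backend on first focus only; afterwards we
          # reload only if a notification marked the cache stale
          if self.__data_stale:
              self.__populate_ui()
              self.__data_stale = False

      def _on_db_change(self, **kwds):
          # don't reload right away, just invalidate
          self.__data_stale = True

      def __populate_ui(self):
          pass    # run the queries, fill the widgets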
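
The generator idea might look roughly like this (DB-API style
cursor assumed, table and column names made up):

  def get_lab_results(conn, patient_id, chunk_size=50):
      # rows are pulled in chunks as the caller iterates, so
      # the widget only pays for what it actually displays
      curs = conn.cursor()
      curs.execute(
          'select * from lab_result where fk_patient = %s',
          (patient_id,))
      try:
          while True:
              rows = curs.fetchmany(chunk_size)
              if not rows:
                  break
              for row in rows:
                  yield row
      finally:
          curs.close()

  # usage: the chart stops iterating once its visible area is
  # filled, remaining rows are never materialized:
  #   for result in get_lab_results(conn, 4711): ...

Note that whether fetchmany() really avoids transferring the
full result set up front depends on the driver, a named
(server-side) cursor would guarantee it.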
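
As for the background loader, one *possible* -- and untested --
shape, assuming our wxPython GUI: run the query in a worker
thread and marshal the result back via wx.CallAfter(). Note
this does not solve the hard part, namely that a DB connection
must not be shared across threads:

  import threading
  import wx

  def load_in_background(query_func, on_done):
      # query_func must use its own connection; sharing the
      # GUI thread's connection across threads is unsafe
      def worker():
          result = query_func()            # off the GUI thread
          wx.CallAfter(on_done, result)    # back on the GUI thread
      threading.Thread(target=worker, daemon=True).start()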
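
And a sketch of the dual-constructor idea for value objects
(class, table and field names invented):

  class cLabRequest:
      def __init__(self, conn=None, pk=None, row=None):
          if row is not None:
              # bulk path: built from a prefetched dictionary,
              # no SQL round trip per object
              self.__data = dict(row)
              return
          # classic path: the instance fetches its own data
          curs = conn.cursor()
          curs.execute('select * from lab_request where pk = %s', (pk,))
          cols = [d[0] for d in curs.description]
          self.__data = dict(zip(cols, curs.fetchone()))
          curs.close()

  def get_pending_requests(conn, patient_id):
      # one bulk query instead of one query per value object
      curs = conn.cursor()
      curs.execute(
          'select * from lab_request '
          'where fk_patient = %s and is_pending',
          (patient_id,))
      cols = [d[0] for d in curs.description]
      rows = curs.fetchall()
      curs.close()
      return [cLabRequest(row=dict(zip(cols, r))) for r in rows]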

> Just to get an idea what might the reason for this behaviour I
> ran ethereal and captured the switch from the documents to the
> lab-journal page.
:-)  You picked *the* single slowest tab change! Good job ;-)

> It took about 160s (or 2min 40 secs) and
> needed about 1500 packets to send and receive.
This may be a typical case for bulk loading and dict-style
instantiation. Many of those packets represented data for a
single value object instantiation, e.g. pending lab requests
and unreviewed lab results. Just imagine what would have
happened if we didn't use read-connection sharing :-)

Do you think it is possible to profile the amount of time it
takes to instantiate all those classes?
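
Something along these lines would do as a first cut (the
function name is a placeholder for whatever code path actually
builds the value objects):

  import cProfile
  import pstats

  # fetch_lab_value_objects() is hypothetical -- substitute
  # the real instantiation code path
  cProfile.run('fetch_lab_value_objects(patient_id=4711)', 'lab.prof')
  pstats.Stats('lab.prof').sort_stats('cumulative').print_stats(20)

The cumulative column should show whether object construction
or the per-object round trips dominate.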

> Add to this 1-2 s for data retrieval on
> backend side if the database holds a lot of data
No. If that's the case your DB isn't properly tuned and the
schema is lacking proper indexing. Note that I was able to cut
retrieval time down from several seconds to a few tens of
milliseconds by using proper queries and indexing, even with
120,000 patient entries.
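
For illustration (table and column names invented, and EXPLAIN
ANALYZE needs PostgreSQL 7.2 or later):

  curs = conn.cursor()    # any open DB-API connection
  # add an index on the column the query filters on ...
  curs.execute('create index idx_lab_result_pat on lab_result(fk_patient)')
  curs.execute('analyze lab_result')
  # ... then verify the planner switched to an index scan
  curs.execute(
      'explain analyze select * from lab_result where fk_patient = %s',
      (4711,))
  for line in curs.fetchall():
      print(line[0])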

> slow to work with. Is there anything we can do about ?
Can? Must!

Karsten

BTW, a mini-project: make sure loading/unloading notebook tabs
at runtime works reliably. This would *vastly* improve plugin
development!

-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



