gnumed-devel

Re: [Gnumed-devel] Approaches to maintain clinical data uptime


From: Tim Churches
Subject: Re: [Gnumed-devel] Approaches to maintain clinical data uptime
Date: Thu, 04 May 2006 07:11:00 +1000
User-agent: Thunderbird 1.5.0.2 (Windows/20060308)

James Busser wrote:
> On May 3, 2006, at 11:27 AM, Tim Churches wrote:
> 
>> Note that I would not recommend Amazon S3 as your **only** form of
>> offsite and/or archival back-up storage, but as a supplement to backup
>> archives on optical media and/or tapes, it seems rather promising.
> 
> Surely it would be the *fastest* way to retrieve something, if the
> alternative source (the media) were cloistered off site. Maybe the
> redundant local media could be limited to
> - re-writable dailies x6 (could even be skipped in the presence of a 
> reliable slave database)
> - re-writable weeklies x4
> with write-once monthlies.*

Yup.
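A rotation along those lines can be sketched as follows. This is purely illustrative - the slot numbering and the 30-day "month" are simplifying assumptions, not anything GNUmed prescribes:

```python
# Sketch of the quoted media rotation: 6 rewritable dailies,
# 4 rewritable weeklies, and write-once monthlies kept indefinitely.
# A 30-day month is assumed for simplicity.

def media_slot(day):
    """Which physical medium is (re)written on a given day (day 1 = first backup)."""
    if day % 30 == 0:
        return ("monthly", day // 30)        # write-once: a new disc each month
    if day % 7 == 0:
        return ("weekly", (day // 7) % 4)    # 4 rewritable weeklies in rotation
    return ("daily", day % 6)                # 6 rewritable dailies in rotation
```

So day 7 lands on the first weekly, day 14 on the second, and day 30 burns the first write-once monthly.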

> Even just $10 US per month could add up over months to years, as would
> any longer-term storage. Storing one gig per month of "monthlies", the
> annual cost would be US$120 + ($12 per accumulated year of
> "monthlies"). So in the first year you would pay $132 and in the 5th
> year $180. The added $48 would be in place of having to duplicate, and
> separately store, another 60 DVDs to provide redundancy to what would
> otherwise be a single "store" of years of data should the database
> get corrupted.

Assuming that your compressed backup occupies 1GB. I asked about
compressed back-up sizes on the local (Australian) GP computing list,
and the answers (for a range of different clinical systems using
different database technologies) were about 3 or 4 GB for group
practices (typically 3 to 6 GPs) with 5 years or more of data, where
scanned correspondence etc was stored in the database. For those that
didn't store scanned correspondence or other images, the compressed
backup sizes were under 100MB for many years of patient data for
several docs.
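For what it's worth, the cost arithmetic quoted above (a flat US$120/year for the rotating store plus US$12 per accumulated year of 1GB "monthlies", at the 2006 S3 pricing discussed in this thread) works out as:

```python
# Worked version of the quoted S3 cost figures. The prices are the
# 2006 numbers from this thread, not current Amazon pricing.

def annual_s3_cost(years_accumulated, base=120, per_year_of_monthlies=12):
    """Cost in US$ paid in the given year of operation."""
    return base + per_year_of_monthlies * years_accumulated
```

which reproduces the $132 first-year and $180 fifth-year figures.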

> I think this would make tremendous sense. What further needs to be done
> to marshal the support for it to happen? Do we need funding, and/or to
> solicit one or multiple donors?

Not really. Design parameters for a tool to manage Amazon S3 (and maybe
Google Gdrive and OmniDrive) off-site encrypted database backups need to
be fleshed out - what features, interfaces etc - and then an estimate
made of the work involved in coding, testing and documentation. As far
as coding goes, I suspect it will be straightforward, and I know that
Syan Tan has trod many of the paths that need to be travelled in Python
with his work on the Wagtail encrypted comms project - if Syan is
willing to contribute, that is. But whether funding is needed depends on
who volunteers to assist. I am happy to participate in speccing it out,
reviewing code, testing and writing functional tests and documentation -
but not the core coding or writing unit tests.
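One minimal shape the core of such a tool might take - purely a sketch, with hypothetical database and key names, and assuming pg_dump and GnuPG as the dump and encryption tools (neither choice is settled anywhere in this thread):

```python
# Sketch only: build the shell stages for a compressed, encrypted
# PostgreSQL dump. Everything is encrypted with GnuPG before it
# leaves the machine for S3. All names here are placeholders.

def backup_commands(dbname, recipient, outfile):
    """Return a list of argv lists forming a dump -> compress -> encrypt pipeline."""
    return [
        ["pg_dump", "--format=custom", dbname],          # dump the database
        ["gzip", "-9"],                                  # compress the dump
        ["gpg", "--encrypt",                             # encrypt for off-site storage
         "--recipient", recipient,
         "--output", outfile],
    ]
```

The actual upload to S3 (authentication, bucket layout, retention metadata) is exactly the part that needs speccing out.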

> * For care that had been delivered more than a month ago, surely
> monthlies would be sufficient (given that the monthly written 1-30
> (average 15) days before the event could serve the purpose of
> "baseline", and the monthly written 1-30 (average 15) days after would
> show not only the rows that remained "live" at the time, but also any
> rows that had been audited to that point. --- It is true that records
> could have been manipulated in the window of time before the next dump,
> but I cannot imagine any regulatory bodies would *require* daily
> notarized backups be retained in perpetuity, that would be a LOT of media!

Sounds reasonable.
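The footnote's point about two successive monthlies bracketing a window can be sketched as a simple comparison of the row identifiers in each dump - the identifiers below are made up for illustration:

```python
# Compare two monthly dumps, per the footnote above: rows in both are
# the stable baseline, rows only in the later dump were added during
# the window, and rows only in the earlier dump were audited
# (superseded or deleted) in the meantime.

def compare_dumps(earlier, later):
    """Classify row identifiers across two successive dumps."""
    earlier, later = set(earlier), set(later)
    return {
        "baseline": earlier & later,   # unchanged across the window
        "added":    later - earlier,   # new rows in the later dump
        "audited":  earlier - later,   # rows no longer "live"
    }
```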

Tim C




