
[Gnump3d-devel] Re: Idea for realtime tagcache updating


From: Steve Kemp
Subject: [Gnump3d-devel] Re: Idea for realtime tagcache updating
Date: Sat, 12 Mar 2005 11:26:16 +0000
User-agent: Mutt/1.3.28i

On Fri, Mar 11, 2005 at 05:40:58PM -0500, Stuffed Crust wrote:
> On Sat, Feb 19, 2005 at 09:30:29PM +0000, Steve Kemp wrote:
> > > I'm also working on a method to update the stored tag cache on the fly, 
> > > so we don't need to manually re-run gnump3d-index to pick up those extra 
> > > files.  
> > 
> >   I think I said in a previous reply this seems like a good plan.
> 
> I've been thinking about this a bit, and I'd like to float this past you 
> to see if there are any gotchas that I may have missed:
> 
> Child discovers file is not in cache.  Parses out tags, appends them to 
> CACHEFILE_NEW.  (using O_EXCL access to prevent other children from 
> hitting the file and screwing us up concurrently).
> 
> Parent, on the next loop cycle, notices the presence/update of
> CACHEFILE_NEW, and loads it into the cache for subsequent children to
> benefit from.  (we update the master cache file in a similar manner).

  Sounds good to me.
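
  Just so we're picturing the same thing, here's a rough sketch of the
 child-side append (Python used as pseudo-code here, not the real
 gnump3d code; the cache-line format, the path, and the tag-parsing
 step are all my invention, and I've shown the exclusive access as an
 advisory flock() rather than O_EXCL on the data file itself):

    import fcntl, os

    CACHEFILE_NEW = "/var/cache/gnump3d/tags.cache.new"

    def append_to_new_cache(path, tags):
        """Append one "path<TAB>key=value;..." line under an exclusive
        advisory lock so two children can't interleave their writes."""
        line = path + "\t" + ";".join("%s=%s" % kv for kv in tags.items())
        fd = os.open(CACHEFILE_NEW, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX)        # serialise concurrent children
            os.write(fd, (line + "\n").encode())
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)
            os.close(fd)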

> The next time gnump3d-index is run, it loads up both CACHEFILE and 
> CACHEFILE_NEW before doing its traversal, writes everything out into 
> CACHEFILE then erases CACHEFILE_NEW, finally HUPping the server so that 
> it loads up the new cache file.

  OK.
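
  I imagine the gnump3d-index side looking roughly like this (Python
 sketch again; the cache paths, the pidfile location and the
 traverse() callback are assumptions on my part, not how gnump3d-index
 works today):

    import os, signal

    CACHEFILE     = "/var/cache/gnump3d/tags.cache"
    CACHEFILE_NEW = "/var/cache/gnump3d/tags.cache.new"
    PIDFILE       = "/var/run/gnump3d.pid"

    def load(path):
        """Read "path<TAB>tags" lines into a dict; later lines win."""
        cache = {}
        if os.path.exists(path):
            for line in open(path):
                key, _, value = line.rstrip("\n").partition("\t")
                cache[key] = value
        return cache

    def reindex(traverse):
        cache = load(CACHEFILE)
        cache.update(load(CACHEFILE_NEW))   # fold in what the children found
        cache.update(traverse())            # full walk; freshest data wins
        tmp = CACHEFILE + ".tmp"
        with open(tmp, "w") as out:
            for key in sorted(cache):
                out.write(key + "\t" + cache[key] + "\n")
        os.rename(tmp, CACHEFILE)           # atomic replace of the master cache
        if os.path.exists(CACHEFILE_NEW):
            os.remove(CACHEFILE_NEW)
        pid = int(open(PIDFILE).read().strip())
        os.kill(pid, signal.SIGHUP)         # tell the running server to reload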

> 
> So far, there's only one problem I see -- there could be a race between
> two children trying to read the same directory, both deciding to parse
> tags/update the cache.  We could then end up with duplicate entries in 
> CACHEFILE_NEW, which isn't all that terrible.  I don't see any way 
> around this concurrency problem though, at least not without going to 
> threads (and that'll introduce its own mess that IMO is really not worth 
> the hassle).

  Couldn't the check for duplicates be done when the HUP signal is
 received, so that multiple entries never make it into the live cache?
 (Although I guess there is no harm in them beyond the size growth.)
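
  In other words, something along these lines in the server's HUP
 handler, reusing the load() helper and cache paths from the indexer
 sketch above (the handler itself is hypothetical); keying the
 in-memory cache on the file path makes the duplicates collapse on
 their own:

    import signal

    tag_cache = {}

    def on_hup(signum, frame):
        """Rebuild the in-memory cache on SIGHUP; because it is keyed
        on the file path, duplicate CACHEFILE_NEW entries collapse to
        a single entry."""
        fresh = load(CACHEFILE)             # load() and paths as in the indexer sketch
        fresh.update(load(CACHEFILE_NEW))   # last entry for a path wins
        tag_cache.clear()
        tag_cache.update(fresh)

    signal.signal(signal.SIGHUP, on_hup)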

  I've tried to attack the problem using shared memory segments
 a few times, but I can never get it to work as well as I think
 it should :(

Steve
--