
Re: [gpsd-dev] SHM code


From: Eric S. Raymond
Subject: Re: [gpsd-dev] SHM code
Date: Sun, 15 Feb 2015 17:18:23 -0500
User-agent: Mutt/1.5.23 (2014-03-12)

Hal Murray <address@hidden>:
> 
> address@hidden said:
> >     volatile struct shmTime *shmTime[NTPSHMSEGS];
> >     bool shmTimeInuse[NTPSHMSEGS];
> 
> That's backwards from the way things are in SHM: collection of arrays rather 
> than array of structs.

Agreed.  Had I looked at this sooner before release, I would have abolished
shmTimeInuse in favor of using dummy[0] in the structure. I'm very tempted
to do it now - we're not going to get a lower-cost opportunity until the
next major-number bump in the object format, which could be years out.  And
(bearing in mind all our embedded low-memory deployments) reducing the size
of the device structure is always a good thing.
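
For concreteness, here's roughly what that would look like - a sketch
from memory of the SHM segment layout with most of the fields elided,
not a patch against ntpshm.c:

    struct shmTime {
        int mode;
        volatile int count;
        /* ...timestamp fields, leap, precision, nsamples, valid,
         * and the rest elided here... */
        int dummy[8];   /* dummy[0] would become the in-use flag,
                         * visible to every process that attaches
                         * the segment rather than private to gpsd */
    };

The win is that the marker then lives in the segment itself instead of
in per-daemon private memory, which is what makes the multi-refclock
case below tractable.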

> > Which is OK for our normal use case, but consider the case where the host
> > has multiple refclocks.  They would all be trying to get segments, and it
> > would be handy if the "in_use" flag for each segment were in shared memory. 
> 
> From the ntpd side, it's simple: manual configuration.  gpsd has the 
> no-configuration-required religion.  I think that's fine if you only have one 
> GPS device, but I don't see how to sort things out if you have more than one.

Trust me, you do not want the endless stream of "my GPS isn't
working!" misconfiguration reports we'd get from thumb-fingered idiots
if we didn't autoconfigure.

I think the logic we already use would work. Right now it guarantees no
collisions for multiple GPS devices managed by gpsd by walking the shmTime 
array and allocating the first slot for which the corresponding shmTimeInuse
bool is false.  With the bool in the shmTime structure the same logic would
work for *all* refclocks.
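
In sketch form (names invented for illustration, this is not the code
in ntpshm.c), the walk looks about like this once the flag lives in
the segment itself:

    /* Sketch only: first-free-slot allocation over attached segments.
     * Today the test reads gpsd's private shmTimeInuse[]; with the
     * flag in the segment it reads dummy[0] instead, so any SHM
     * client can use the same discipline. */
    static int shm_alloc_sketch(volatile struct shmTime *seg[], int nsegs)
    {
        int i;

        for (i = 0; i < nsegs; i++)
            if (seg[i] != NULL && seg[i]->dummy[0] == 0) {
                seg[i]->dummy[0] = 1;   /* claim the slot */
                return i;               /* unit number for this refclock */
            }
        return -1;                      /* no free segment */
    }

A real version would want something closer to an atomic test-and-set
on that flag so two daemons coming up at the same time can't both grab
the same slot, but the shape of the logic doesn't change.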
-- 
                <a href="http://www.catb.org/~esr/";>Eric S. Raymond</a>


