
Re: [gpsd-dev] PPS over USB


From: Dave Taht
Subject: Re: [gpsd-dev] PPS over USB
Date: Mon, 7 May 2012 14:14:20 -0700

On Mon, May 7, 2012 at 1:58 PM, Eric S. Raymond <address@hidden> wrote:
> Ed W <address@hidden>:
>> I guess I still don't understand the bufferbloat / thumbgps
>> requirements, but either you need a very stable local clock, or you
>> need a bunch of clocks which are sync'ed to each other (across the
>> world).
>
> The latter.  What we're really after is accurate packet transit times.
> So it matters less whether the clocks are accurate than whether
> they're highly stable to a common timebase. Which in this case is GPS
> atomic clock time.
>
>> I might guess that this is the goal of thumbgps - get the offset to
>> zero?  However, it appears in principle that you might achieve
>> almost the same by simply syncing to a carefully chosen pool of
>> stratum 1 servers?
>
> But we're not sure we can trust NTP.  That's the whole problem; if bufferbloat
> really is inducing very large, very short-period latency spikes, it may be
> screwing with the symmetry and statistical-smoothness assumptions that
> NTP synchronization relies on.
>
> One of the explicit goals of the Cosmic Background Bufferbloat Detector is to
> sanity-check NTP.
> --
>                <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>
>

:whew: what a thread!

When Eric and I first started talking about this 11 months ago, we
stalled out for 9 months on some of the basic problems. Then he
blogged... Going from concept to working hardware in under 60 days
since then is astounding.

I've been heads down solving a bunch of other problems of late and can
barely keep up here, but having a plan for actual deployment wasn't
even a remote possibility 60 days ago, and while some of that is
beginning to move, it's going to take a while to sort it all out!

To outline some of that:

0) The intent was to establish a baseline of effectively 'stratum 1'
boxes 'beyond the edge', worldwide, to do end-to-end measurements.

Most measurements done to date run to a multitude of central points.
Seeing the interconnects between providers misbehave (or not) seems
useful.

It's also a technique that scales O(n) (each new box only needs its
own GPS fix, not pairwise calibration against every other box), which,
for a change, is something desirable. :)
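
As a toy illustration of why that works: with two boxes both
disciplined to GPS time, one-way transit time is a plain subtraction,
with no round-trip-symmetry assumption needed. A minimal python
sketch, with made-up timestamps:

    # One-way delay between two boxes whose clocks are both
    # disciplined to GPS time; valid only because they share a
    # common timebase, so no round-trip halving is needed.
    def one_way_delay(send_ts, recv_ts):
        return recv_ts - send_ts

    # A probe stamped at the sender and at the receiver (made-up
    # numbers) took ~23.3 ms one way:
    print(one_way_delay(1336425260.000100, 1336425260.023400))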

We just arbitrarily picked 100 as a good number. More would be
better; 2 would be a start.

There would be ongoing measurements of the 'cosmic background' noise
that ntp filters out. It would be helpful to identify (dynamically)
misbehaving ntp servers as well, and get them out of the server
pool(s).
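
For anyone who hasn't stared at ntp's math: the standard
four-timestamp exchange estimates clock offset as
((t2 - t1) + (t3 - t4)) / 2, which is exact only when the outbound
and return path delays are equal. A quick python sketch of how a
bufferbloat spike on one leg biases it (numbers made up):

    # t1 = client send, t2 = server receive,
    # t3 = server send,  t4 = client receive.
    def ntp_offset(t1, t2, t3, t4):
        # Exact only if outbound and return delays are equal.
        return ((t2 - t1) + (t3 - t4)) / 2.0

    # Perfectly synced clocks, 10 ms each way: estimate is ~0.
    print(ntp_offset(0.000, 0.010, 0.011, 0.021))

    # Same clocks, but a 200 ms queueing spike on the outbound leg:
    # the estimate is wrong by half the asymmetry, ~100 ms.
    print(ntp_offset(0.000, 0.210, 0.211, 0.221))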

Similarly, trustable measurements against a set of solid ntp servers
somewhere would be good...

There's more to it than this...

1) Establish a partnership with a lab to store/analyze/present the
data; I am talking with both mlabs and onelab.
2) Find software worth leveraging. At the moment, the leading
candidate is 'scamper'. Multiple other possibilities exist;
suggestions wanted. Take a look at what caida does for visualizations,
for example.
3) Pull something together that could gather, collect, and analyze
rawstats data (see the sketch after this list).
4) Analyze performance of the software and hardware under various loads.
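
Re 3): ntpd's rawstats log records the four raw timestamps of each
exchange, so offset and delay can be recomputed offline. A rough
python sketch; I'm assuming the usual 'MJD seconds src dst t1 t2 t3
t4' field order here, so check it against your ntpd's docs before
trusting it:

    # Recompute NTP offset/delay from one rawstats line.
    # Assumed field order (verify against your ntpd version):
    #   MJD  seconds-past-midnight  src-addr  dst-addr  t1 t2 t3 t4
    def parse_rawstats_line(line):
        fields = line.split()
        t1, t2, t3, t4 = (float(f) for f in fields[4:8])
        offset = ((t2 - t1) + (t3 - t4)) / 2.0
        delay = (t4 - t1) - (t3 - t2)
        return offset, delay

    # Made-up example line, NTP-era (since-1900) timestamps:
    line = ("56054 50060.123 192.0.2.1 198.51.100.7 "
            "3544364060.0 3544364060.1 3544364060.1 3544364060.2")
    print(parse_rawstats_line(line))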

Several posters here seem to assume that the hardware/OS/queues in
play can't heisenbug the data, and I assure you, they can...
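
A trivial way to convince yourself: measure the host's own scheduling
jitter, with no network involved at all. On a loaded box the
overshoot below can reach the same order as the latencies being
studied. A sketch:

    # How much noise does the OS scheduler alone inject?
    # Ask for a 1 ms sleep and record the worst overshoot.
    import time

    def worst_scheduler_jitter(samples=1000, interval=0.001):
        worst = 0.0
        for _ in range(samples):
            start = time.monotonic()
            time.sleep(interval)
            worst = max(worst, (time.monotonic() - start) - interval)
        return worst

    print("worst overshoot: %.6f s" % worst_scheduler_jitter())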

5) Choose a database backend. The leading candidates are map/reduce
and postgres; suggestions wanted.
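
For scale: whatever wins has to eat on the order of boxes x targets x
probes-per-day rows. A minimal sketch of a per-sample table, using
python's stdlib sqlite3 purely as a stand-in for postgres; all table
and column names here are hypothetical:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE samples (
            box_id  TEXT NOT NULL,  -- which edge box took the sample
            target  TEXT NOT NULL,  -- remote endpoint probed
            sent_at REAL NOT NULL,  -- GPS-disciplined send timestamp
            owd_sec REAL,           -- one-way delay; NULL = probe lost
            PRIMARY KEY (box_id, target, sent_at)
        )""")
    conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 ("box-001", "198.51.100.7", 1336425260.0001, 0.0233))
    print(conn.execute("SELECT count(*) FROM samples").fetchone()[0])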

There's way more to it than all that, but having trustable time
everywhere was the first thing required to get anywhere, particularly
when doing comparisons between more than 2 devices at the same time,
in a centralized db.

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


