
Re: [gpsd-dev] NMEA time calculation question


From: Ed W
Subject: Re: [gpsd-dev] NMEA time calculation question
Date: Mon, 07 May 2012 19:48:49 +0100
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:12.0) Gecko/20120428 Thunderbird/12.0.1

On 07/05/2012 19:04, tz wrote:
On Mon, May 7, 2012 at 1:41 PM, Ed W <address@hidden> wrote:
Hi


I think I might not be getting my point across.  Assuming more like 4,800 to 38,400 baud, the timestamp of the start of the ZDA sentence should vary by only one character's worth of arrival uncertainty.  However, the end of the sentence could have much larger jitter, up to 1 ms (e.g. consider the case where we read only the very last character and the rest of the buffer is empty).

So my jitter is currently around 1 ms, but I believe it should be possible to reduce that to 0.5 ms.  Do you agree?
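
(For concreteness, assuming 8N1 framing, i.e. 10 bits per character, the per-character time is 10/baud: roughly 2.08 ms at 4,800 baud, 1.04 ms at 9,600 baud and 0.26 ms at 38,400 baud.  So one character of start-of-sentence uncertainty is at worst comparable to a 1 ms USB poll interval.)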

No, because USB "jitter" is random 0-1 ms.  It will set the interrupt condition when it sees the character arrive, but it has to poll, and it only polls every 1 ms.  There is no way to average it out.

But my understanding is that we get the data (very quickly) after the poll is done.  Therefore we should get a decently accurate timestamp of the end of the USB poll.  So the poll event might be quite random, but we should at least know when it took place.  Is this correct?

So in practice USB polls will take place at intervals from 0.5 ms to 1.5 ms; however, we should be able to timestamp *when* they take place, even if we can't predict them in advance.  It seems like this is not a problem in practice, and possibly even beneficial.
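
To illustrate what I mean by timestamping the poll - just a sketch of the idea, not gpsd's actual code, and assuming a POSIX system with CLOCK_REALTIME:

  /* Sketch only: timestamp the instant a read() from the USB-serial
   * device returns.  The data is handed to us right after the USB
   * poll completes, so this "now" marks the end of that poll. */
  #include <time.h>
  #include <unistd.h>

  ssize_t timestamped_read(int fd, char *buf, size_t len,
                           struct timespec *when)
  {
      ssize_t n = read(fd, buf, len);      /* returns once a poll delivers data */
      clock_gettime(CLOCK_REALTIME, when); /* end-of-poll timestamp */
      return n;
  }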

 
First question though - did I correctly understand the current gpsd algorithm?

Yes, but it creates an offset.  It remembers the time of something early, but it doesn't try to compensate for baud rate, and it may remember the end of the sentence.

See, I think if you measure the end of the sentence then you get a different error than if you measure the start of the sentence.  This is because you have an unknown quantisation at the end which isn't present at the start (or at least the size of the error is smaller there).
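
Concretely, the compensation I have in mind is just this (a sketch, assuming 8N1 framing, so start bit + 8 data bits + stop bit = 10 bit times per character):

  /* Sketch: back-date a timestamp taken after nchars characters of
   * the sentence have arrived to the start of the sentence. */
  double sentence_start(double timestamp, int nchars, int baud)
  {
      return timestamp - nchars * (10.0 / baud);
  }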



I've done similar experiments.  There is an OFFSET because of the latency to the last sentence, but the jitter is consistent with USB jitter.  115200 baud is fast enough that it takes a 10-character difference in the messages to add up to 1 ms.  You might want to try 230400, though the Venus meters the characters out at less than full speed.

2) *IF* not just the initial bit, but in fact every bit of the Venus 6 output has low jitter, then because we collect multiple observations of the serial output via USB, it should be possible to improve our estimate of the arrival timestamp to below the 0.5 ms mark.  i.e. we can observe the number of characters read at each USB timestamp, compare that with the number of characters we predict should have arrived, and get sub-ms estimates of the arrival time of a particular character - then use that to work back and get the arrival time of the first bit.  Note that if feasible, this technique would give better accuracy than PPS over USB!
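
To make 2) concrete, here is a rough sketch of the estimator I have in mind (my illustration only; it assumes the per-character transmit rate really is constant): fit a straight line through the (characters received, poll timestamp) pairs, and the intercept estimates the arrival time of character zero.  The 0-1 ms poll jitter averages down across the samples, leaving a roughly constant ~0.5 ms offset that would still need calibrating out.

  /* Sketch: least-squares fit of t = t0 + n * char_time over several
   * (characters_received, timestamp) observations from successive
   * USB polls; the intercept t0 estimates when character 0 arrived.
   * Needs at least two observations with distinct character counts. */
  double first_char_time(const int n[], const double t[], int samples)
  {
      double sn = 0, st = 0, snn = 0, snt = 0;
      for (int i = 0; i < samples; i++) {
          sn  += n[i];
          st  += t[i];
          snn += (double)n[i] * n[i];
          snt += n[i] * t[i];
      }
      double slope = (samples * snt - sn * st) / (samples * snn - sn * sn);
      return (st - slope * sn) / samples;   /* intercept = t0 estimate */
  }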

You cannot measure the error.  It is random and not a normal distribution.

Sorry, which error can't you measure?  Also, I don't see why not, so can you please explain why you think it can't be done?

To recap, my expectation is that if you put it on a scope, the Venus chipset will send the initial $ with PPS-level precision, and every subsequent bit will be delivered with very low jitter at 9,600 baud until the end of the ZDA sentence.  Can you please shoot down that expectation if it's not true?

The follow-on is that *if* the per-character transmit time is constant and low jitter, then it's not a problem that our process of observing those characters is high jitter.  As long as we can observe "now" with decent accuracy, we can collect enough samples to infer the exact arrival time, even though there is jitter in the observation process.
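
As a sanity check on that claim, a toy simulation (my own sketch - it models the poll delay as independent, uniform 0-1 ms noise, which is an assumption): the averaged estimate carries a fixed ~0.5 ms offset, and after subtracting that known mean delay the residual error shrinks roughly as 1/sqrt(N).

  /* Toy simulation: observe one event N times, each observation
   * delayed by independent uniform 0-1 ms "USB poll" jitter.  The
   * sample mean converges to event + 0.5 ms. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      const double event = 1.0;   /* true event time in seconds */
      const int N = 1000;         /* number of jittered observations */
      double sum = 0.0;

      srand(42);
      for (int i = 0; i < N; i++)
          sum += event + 1e-3 * rand() / (double)RAND_MAX;

      printf("estimate after removing mean delay: %.9f s (true %.9f s)\n",
             sum / N - 0.5e-3, event);
      return 0;
  }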

The key question is whether a) the initial bit is delivered with precision (you confirmed yes), and b) subsequent bits are also delivered with low jitter.  Can you confirm/deny b)?



No, the USB "timing" is equivalent to white noise.  The timestamp of the USB will be 0-1 ms from when the real interrupt occurs, with no way of calculating or predicting it.

What process causes this?  Once the USB bus wakes up and polls the device, where does the data go for the rest of the time before it is delivered to the operating system?  Generally, buffering things is tricky and expensive, so I would have expected a much simpler algorithm where the USB bus wakes up occasionally and simply delivers whatever is waiting at that point - as such, we can measure the wakeup event: it's "now".


Thanks for your insight

Ed W
