
Re: [gpsd-dev] NMEA time calculation question


From: tz
Subject: Re: [gpsd-dev] NMEA time calculation question
Date: Mon, 7 May 2012 15:10:01 -0400



On Mon, May 7, 2012 at 2:48 PM, Ed W <address@hidden> wrote:
On 07/05/2012 19:04, tz wrote:
On Mon, May 7, 2012 at 1:41 PM, Ed W <address@hidden> wrote:
Hi


I think I might not be getting my point across.  Assuming more like 4,800 to 38,400 baud, the timestamp of the start of the ZDA sentence should vary by only about one character's worth of arrival uncertainty.  However, the end of the sentence could have much larger jitter, up to 1 ms (e.g. consider that we read only the very last character and the rest of the buffer is empty).

So my jitter is currently around 1 ms, but I believe it should be possible to reduce that to 0.5 ms.  Do you agree?

No, because USB "jitter" is random over 0-1 ms.  The device will set the interrupt condition when it sees the character arrive, but the host has to poll, and it only polls every 1 ms.  There is no way to average it out.

But my understanding is that we get the data (very quickly) after the poll is done.  Therefore we should get a decently accurate timestamp of the end of the USB poll.  So the poll event might be quite random, but we should at least know when it took place.  Is this correct?

The actual time of the USB poll isn't accurate, so any timestamp would have a built-in error.  Also, you don't know when the character actually came in.  It might be arriving but still incomplete during the current poll, so it will appear (over) 1 ms later at the next poll, or it could have just been completed, with enough housekeeping done that it came in a few microseconds before.

So in practice USB polls will take place at intervals of 0.5 ms to 1.5 ms; however, we should be able to timestamp *when* they take place, even if we can't predict them in advance.  It seems like this is not a problem in practice, and possibly even beneficial.

Timestamp with what and how?  The polls are done internally on the "hub" chip and you don't really know one has occurred except that you see an interrupt indication for something.  There isn't a "poll" bit accessible in the driver via ioctl or any other mechanism.  All that is hidden and buried.

 
First question though - did I correctly understand the current gpsd algorithm?

Yes, but it creates an offset.  It remembers the time of something early, but it doesn't try to compensate for baud rate, and it may remember the end of the sentence.

See, I think if you measure the end of the sentence you get a different error than if you measure the start.  This is because there is an unknown quantisation at the end which isn't present at the start (or at least the error there is smaller).

I've done similar experiments.  There is an OFFSET because of the latency to the last sentence, but the jitter is consistent with USB jitter.  115200 baud is fast enough that it takes a 10-character difference in the messages to add up to 1 ms.  You might want to try 230400, though the Venus meters the characters out at less than full speed.
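
To put numbers on that (assuming the usual 8N1 framing, i.e. 10 bits on the wire per character):

    10 bits / 115200 baud = ~87 us per character  ->  ~0.87 ms per 10 characters
    10 bits / 9600 baud   = ~1.04 ms per character

so at 9600 baud a single character of skew already spans a whole USB poll interval.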
2) *IF* not just the initial bit, but in fact every bit of the Venus 6 output were low jitter, then because we collect multiple observations of the serial output via USB, it should be possible to improve our estimate of the arrival timestamp to below the 0.5 ms mark.  i.e. we can observe the number of characters read at each USB timestamp, compare that with the number of characters we predict should have arrived, and so get sub-ms estimates of the arrival time of a particular character - then use that to work back to the arrival time of the first bit.  Note that if feasible, this technique would give better accuracy than PPS over USB!
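
For concreteness, the estimator being proposed could look something like the sketch below (C; the names are illustrative, it assumes a constant per-character transmit time, and the objections that follow are about the timestamps it would be fed):

    #include <stddef.h>

    /* One observation: at timestamp t (seconds), n characters of the
     * sentence had arrived so far. */
    struct obs {
        double t;
        double n;
    };

    /*
     * Estimate the arrival time of character 0.  If each character takes
     * exactly char_time seconds (e.g. 10 bits / 9600 baud ~= 1.04 ms),
     * then t_i ~= t0 + n_i * char_time + jitter_i, and with the slope
     * fixed the least-squares fit for t0 is just the mean of
     * (t_i - n_i * char_time).  Zero-mean jitter averages down as
     * 1/sqrt(count); a one-sided delay (like a 0-1 ms poll wait) leaves
     * a bias this cannot remove.
     */
    static double estimate_start(const struct obs *o, size_t count,
                                 double char_time)
    {
        double sum = 0.0;
        for (size_t i = 0; i < count; i++)
            sum += o[i].t - o[i].n * char_time;
        return sum / (double)count;
    }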

You cannot measure the error.  It is random and not a normal distribution.

Sorry, which error can't you measure?  Also, I don't see why not, so can you please explain why you think it can't be done?

You have an event correlated with the PPS, e.g. the first character of the first sentence.  That can be visible on an oscilloscope.

In order to timestamp it, you must have something you can do at the far end to detect it.  (In my tests I've toggled one of the other serial port pins in response, when TIOCMIWAIT came back.)

On the scope, I can see multiple milliseconds - variable - of jitter for any user program, including one at nice -20, between either the PPS itself or the first character and the line I toggle to indicate it came in.  If the kernel is doing disk I/O or something else intense, any user program won't get the message until the kernel is unbusy.  You need a time-slice.

If you can go into the kernel itself and timestamp within a hardware interrupt (on a multicore system), you can get high accuracy.  The USB poll is not an interrupt as such, and timestamping in userland is not going to work better than 1 ms, and there will be outliers.
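
For reference, the pin-toggle test described above boils down to something like this (a Linux-specific sketch; the device path and the use of DCD as the PPS input are assumptions):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY); /* port is an assumption */
        if (fd < 0)
            return 1;

        int rts = TIOCM_RTS;
        for (;;) {
            /* Block until the carrier-detect line changes state -
             * i.e. a PPS edge wired into DCD.  The wakeup comes from
             * the serial hardware interrupt, not a USB poll. */
            if (ioctl(fd, TIOCMIWAIT, TIOCM_CD) < 0)
                break;

            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts); /* userland timestamp */

            /* Pulse RTS so the response shows up on the scope next to
             * the PPS edge itself; the gap between the two traces is
             * the latency being discussed. */
            ioctl(fd, TIOCMBIS, &rts);
            ioctl(fd, TIOCMBIC, &rts);

            printf("edge at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        }
        close(fd);
        return 0;
    }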

To recap, my expectation is that if you put it on a scope, the Venus chipset will send the initial $ with PPS-level precision, and every subsequent bit will be delivered with very low jitter at 9,600 baud until the end of the ZDA sentence.  Can you please shoot down that expectation if it's not true?

Yes, on the scope it is precise.  But the USB is imprecise to over a millisecond (and isn't correlated), and then there is the kernel housekeeping, task switching, and whatever else between when the event occurs and when the timestamp is obtained.  You can get close to 1 ms using USB on a non-busy system, but no better.

The follow-on is that *if* the per-character transmit time is constant and low jitter, then it's not a problem that our observation of those characters is high jitter.  As long as we can observe "now" with decent accuracy, we can collect enough samples to infer the exact arrival time, even though there is jitter in the observation process.

We can't observe "now" with decent accuracy from userland, and even in the kernel, interrupts can be off by a millisecond in bad cases - though more often it will only be a few microseconds.  But you need the event timestamped in correlation with the PPS edge - either the PPS signal or the character itself.  Over USB that doesn't and won't happen - USB 2.0 can go to 125 us if you set the bits so it polls at 8 kHz.
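
(The "bits" here being the interrupt endpoint's bInterval field: for a USB 2.0 high-speed endpoint the poll interval is 2^(bInterval-1) microframes of 125 us, so bInterval = 1 gives 125 us - 8,000 polls per second versus the fixed 1 ms frame of USB 1.1.)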
 
The key question is whether a) the initial bit is delivered with precision (you confirmed yes), and b) subsequent bits are also delivered with low jitter.  Can you confirm/deny b)?

They OCCUR with that precision.  They are delivered with a precision no better than 1 ms over USB to the kernel, and worse in userland.
No, the USB "timing" is the equivalent of white noise.  The USB timestamp will be 0-1 ms from when the real interrupt occurs, with no way of calculating or predicting the offset.

What process causes this?  Once the USB bus wakes up and polls the device, where does the data go for the rest of the time, before it is delivered to the operating system?  Generally, buffering is tricky and expensive, so I would have expected a much simpler algorithm where the USB bus wakes up occasionally and simply delivers whatever is waiting at that point - in which case we can measure the wakeup event; it's "now".

USB doesn't "wake up".  The host sends an interrupt poll packet over the wire every millisecond.  The device responds with a packet indicating whether there is an interrupt condition or not.  The host then gets more info, and possibly characters, in response.

The USB host controls everything, and it only polls every 1 ms (USB 1.1) from the USB host chip.  The host chip cannot see anything on the peripheral chip until it sends a packet to request it.
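
Schematically, per frame (all of this happens inside the host controller silicon, nowhere software can timestamp it):

    every 1 ms (USB 1.1 frame):
        host sends an IN token to the device's interrupt endpoint
        device answers NAK                (nothing pending)
          or answers with a data packet   (buffered characters)
        host ACKs and hands any data to the driver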


Thanks for your insight

Ed W

