Re: [xougen] GPL-ed X compression scheme -- my gaff

From: Gian Filippo Pinzari
Subject: Re: [xougen] GPL-ed X compression scheme -- my gaff
Date: Wed, 1 Oct 2003 04:33:09 +0200
User-agent: KMail/1.5

On Sat, 27 Sep 2003 14:57:02, Per Cederberg wrote:
> I think NX improves latency primarily by caching stuff 
> like images and what-not.

For sure, by speeding up compression with intelligent caching we reduce
the response time and the perceived latency. Still, this would not be
enough without specific strategies intended to deal with latency itself.

Compared to Xlib and the plain X protocol, NX uses different techniques to 
synchronize with the X server and to arbitrate the available bandwidth 
among all the running X clients. 

1. NX never relies on XSync(). Any XSync()/X_GetInputFocus is 
    translated into a 'suggestion' to the proxy system to flush the link. 
    Communication is 100% asynchronous. 'Congestion' messages
    are exchanged between the proxies whenever X channels (clients or 
    the X server) are not consuming enough data.
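A rough sketch of the idea in point 1, with made-up names (this is not
the actual nxproxy code): the no-op round-trip that Xlib issues for
XSync() (X_GetInputFocus, opcode 43 in the core protocol) is intercepted
and treated as a flush hint, while a congestion flag set by the peer
proxy holds back the flush:

```python
# Hypothetical sketch of an asynchronous proxy end; class and method
# names are assumed, not taken from nxproxy.

X_GETINPUTFOCUS = 43  # the no-op round-trip opcode Xlib uses for XSync()

class ProxyEnd:
    def __init__(self):
        self.outbox = []          # frames queued toward the peer proxy
        self.congested = False    # set by a 'congestion' message from the peer

    def handle_request(self, opcode, payload=b""):
        if opcode == X_GETINPUTFOCUS:
            # Don't wait for a reply: treat the sync as a flush suggestion.
            return self.flush()
        self.outbox.append((opcode, payload))
        return []

    def flush(self):
        if self.congested:
            return []             # peer's X channels aren't consuming data
        sent, self.outbox = self.outbox, []
        return sent

    def on_control(self, message):
        # Congestion control messages exchanged between the proxies.
        if message == "congestion":
            self.congested = True
        elif message == "decongestion":
            self.congested = False
```

The point is that the client's sync never blocks on the wire: it either
triggers a flush immediately or is absorbed while the link is congested.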

2. Each client is given a slice of the available bandwidth. Clients 
    are 'suspended' when the threshold is exceeded and are kept in the 
    suspended state as long as the link is unavailable. This happens 
    much earlier than the point at which the X display socket blocks for 
    write. Why? Consider the case of a session running across a modem 
    link. Reducing the size of the TCP buffer, e.g. to 4 KB, heavily hurts 
    performance because of the high number of incomplete proxy frames 
    that get transmitted. Leaving the buffers big (say 64 KB), we are only 
    notified that a write would block when there is already enough data in 
    the buffers to keep the low-bandwidth link busy for 20 seconds. This is 
    exactly what happens with SSH, where a demanding client can make 
    the whole session unresponsive. By contrast, 
    nxproxy keeps a constant amount of data in the TCP buffer (2048 
    bytes in the modem case). This data always constitutes full frames 
    (in fact there is always space for a new full frame, as the frame size 
    is not limited by the available buffer). This makes it possible, in the 
    worst case, to respond to the user's input in less than 0.7 seconds. 
    That time is reduced to 175 ms if, in the meanwhile, a client was requesting a 
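To illustrate point 2, here is a hypothetical scheduler (names and the
per-client slice value are assumed, only the 2048-byte buffer target
comes from the description above) that queues only full frames toward
the socket, keeps the queued amount small and constant, and suspends a
client once it exceeds its slice:

```python
# Hypothetical sketch, not the actual nxproxy scheduler.
from collections import deque

TCP_TARGET = 2048   # bytes kept queued toward the socket (modem case)
SLICE = 2500        # per-client byte budget before suspension (assumed)

class Scheduler:
    def __init__(self):
        self.queues = {}        # client id -> deque of full frames
        self.sent = {}          # client id -> bytes sent so far
        self.suspended = set()
        self.buffered = 0       # bytes currently queued toward the socket

    def enqueue(self, client, frame):
        self.queues.setdefault(client, deque()).append(frame)
        self.sent.setdefault(client, 0)

    def pump(self, wire):
        # Fill the TCP buffer up to TCP_TARGET with *full* frames only,
        # skipping clients that have used up their bandwidth slice.
        progress = True
        while progress:
            progress = False
            for client, q in self.queues.items():
                if client in self.suspended or not q:
                    continue
                frame = q[0]
                if self.buffered + len(frame) > TCP_TARGET:
                    continue    # never queue a partial frame
                q.popleft()
                wire.append((client, frame))
                self.buffered += len(frame)
                self.sent[client] += len(frame)
                if self.sent[client] > SLICE:
                    self.suspended.add(client)  # exceeded its slice
                progress = True

    def drained(self, nbytes):
        # Called when the kernel reports nbytes were written out.
        self.buffered = max(0, self.buffered - nbytes)
```

Because at most TCP_TARGET bytes sit in front of the socket, the worst-case
wait behind already-queued data stays bounded, which is the property the
0.7-second figure above depends on.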

3. Images whose size exceeds a given threshold (1024 bytes in the 
    modem case) are split into small chunks and streamed through the link. 
    The chunks are interleaved with other X requests coming from less 
    demanding clients. With SSH+Zlib or LBX, a full image must be 
    completely transferred before other clients are able to use the link.
    Considering that a single image can be as big as 256 KB, this makes a 
    big difference.
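Point 3 can be sketched in a few lines (function names are made up; only
the 1024-byte chunk size comes from the text): a large image payload is
cut into chunks, and the chunks are sent round-robin with the requests
of other clients instead of monopolizing the link:

```python
# Hypothetical sketch of image chunking and interleaving.

IMAGE_CHUNK = 1024  # chunk size over a modem link (from the post)

def split_image(data, chunk=IMAGE_CHUNK):
    """Split an image payload into chunks no larger than `chunk` bytes."""
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

def interleave(streams):
    """Round-robin over per-client request lists, so a big image in
    flight does not block the requests of less demanding clients."""
    out = []
    streams = [list(s) for s in streams]
    while any(streams):
        for s in streams:
            if s:
                out.append(s.pop(0))
    return out
```

With a 256 KB image that is 256 chunks, and every other client gets a
turn on the wire between any two of them.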

> I also saw a remark on their
> web site that a well-written X application could perform 
> just as well, but most real-life applications have not 
> been optimized for low latency on remote connections.

Everybody agrees that toolkits and X applications should be better optimized
for low-bandwidth links. Clients would run faster even on local X connections
and people would finally stop blaming X :-). I dedicated a big part of the 
README that accompanied release MS1 of mlview-dxpc, in March 2001, to 
blaming X :-). Nevertheless I'm not sure we said 'X applications could 
perform just as well' by only removing round-trips. Probably, even after having 
removed all the unneeded round-trips, X would still need proxying and good 
compression :-).


/Gian Filippo Pinzari.
