Re: [Gluster-devel] close() blocks until flush-behind finishes

From: Harald Stürzebecher
Subject: Re: [Gluster-devel] close() blocks until flush-behind finishes
Date: Mon, 10 Oct 2011 00:39:33 +0200


2011/10/10 Paul van Tilburg <address@hidden>:
> Hello again,
> On Thu, Sep 15, 2011 at 10:53:35AM +0530, Raghavendra G wrote:
>> The option flush-behind only makes the flush call run in the background.
>> However, it still waits for all the writes to complete, so that it can
>> return to the application any errors that occurred while syncing them to
>> the server. [...]
> Ok, I understand the behavior now, close() returns when the writes to
> all (replicating) servers are complete.  I would like to sketch our
> desired setup/situation.  Maybe it is something that is already possible
> but we haven't thought of the right solution, or we could work towards it.
> We have a client machine and a server/master machine that is connected
> to the client machine via a relatively low-bandwidth line.  To prevent
> noticing this low bandwidth on the client-side, we thought of writing
> data fast locally, and getting the data to the server in a flush-behind
> fashion.  However, the blocking behavior of close() currently gets in
> the way performance-wise.
> Our idea was to have a gluster server with a brick on the client that
> can be fully trusted, and a replicating gluster server with a brick on
> the master.  When we write, close() returns once the local client
> gluster server has received all the data and client-side write errors
> can thus still be reported.  If flushing to the replicating server fails
> thereafter for whatever reason, self-healing can be applied.
> Is this kind of low-bandwidth robust setup already possible?  If not,
> are there any pointers to where we could add/improve things?

If the server is only used as a backup for the files on the client and
not to provide simultaneous write access to them, you might want to look
at GlusterFS geo-replication. It uses rsync to copy the files to the
slave volume, AFAIK.
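For a backup-only setup, GlusterFS geo-replication can be driven from the
gluster CLI. A hedged sketch (the volume and host names are hypothetical,
and the exact slave URL syntax varies between GlusterFS versions, so
check the documentation for your release):

```shell
# Start asynchronous one-way replication from a local master volume
# to a volume on the remote slave; changes are pushed over rsync/ssh,
# so the client never waits on the slow link at close() time.
gluster volume geo-replication mastervol slavehost:slavevol start

# Check how far the slave is behind
gluster volume geo-replication mastervol slavehost:slavevol status
```

Note that geo-replication is one-way: writes must only happen on the
master side.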

Kind regards,
Harald Stürzebecher
