Re: [Gluster-devel] close() blocks until flush-behind finishes

From: Paul van Tilburg
Subject: Re: [Gluster-devel] close() blocks until flush-behind finishes
Date: Mon, 10 Oct 2011 00:11:36 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Hello again,

On Thu, Sep 15, 2011 at 10:53:35AM +0530, Raghavendra G wrote:
> The option flush-behind makes only the flush call as background. However it
> waits for all the writes to complete, so that it can return the application
> errors (if any) happened while syncing them to server. [...]
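(For anyone else following along: if I understand the write-behind translator's options correctly, flush-behind can be toggled per volume from the CLI. "data" is a made-up volume name here, and the window size is just an example value, so treat this as a sketch rather than a recommendation.)

```
gluster volume set data performance.flush-behind on
gluster volume set data performance.write-behind-window-size 4MB
```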

Ok, I understand the behavior now: close() returns only once the writes
to all (replicating) servers have completed.  I would like to sketch our
desired setup/situation.  Maybe it is something that is already possible
and we simply haven't found the right solution, or we could work towards it.

We have a client machine and a server/master machine connected to the
client via a relatively low-bandwidth link.  To keep this low bandwidth
from being noticeable on the client side, our idea is to write data fast
locally and get it to the server in a flush-behind fashion.  However,
the blocking behavior of close() currently gets in the way
performance-wise.

Our idea was to have a gluster server with a brick on the client that
can be fully trusted, and a replicating gluster server with a brick on
the master.  When we write, close() returns once the local client
gluster server has received all the data and client-side write errors
can thus still be reported.  If flushing to the replicating server fails
thereafter for whatever reason, self-healing can be applied.
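To make that concrete, the topology we have in mind would be created roughly like this (hostnames, volume name, and brick paths are all made up, and we are not sure this actually gives the close() semantics we want):

```
gluster volume create shared replica 2 \
    client-host:/export/brick \
    master-host:/export/brick
gluster volume start shared
mount -t glusterfs client-host:/shared /mnt/shared
```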

Is this kind of low-bandwidth robust setup already possible?  If not,
are there any pointers to where we could add/improve things?

Kind regards,

PhD Student @ Eindhoven                     | email: address@hidden
University of Technology, The Netherlands   | JID: address@hidden
>>> Using the Power of Debian GNU/Linux <<< | GnuPG key ID: 0x50064181
