Re: [Gluster-devel] Disk power-saving friendliness


From: Shehjar Tikoo
Subject: Re: [Gluster-devel] Disk power-saving friendliness
Date: Wed, 01 Jul 2009 12:32:20 +0530
User-agent: Mozilla-Thunderbird 2.0.0.19 (X11/20090103)

Gordan Bobic wrote:
> Does GlusterFS issue forced disk flushes?

Not after the point where an application's write has found its way to
storage/posix. We don't do write buffering yet, so there are no
excessively delayed writes. Of course, write-behind does delay writes,
but only by the small fraction of time it takes earlier write requests
to be responded to. Furthermore, all writes are forced to disk at
close().

> I find that if I put the server into "laptop mode" to conserve power by
> spinning down the disks, writes still tend to wake them up.

Are you sure they're writes and not reads? And are you sure it is a
GlusterFS process that is issuing requests that need service from the disk?
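One way to answer that on Linux is the kernel's block_dump log, the same
facility laptop-mode tools use to attribute disk wake-ups to processes.
This is only a sketch: summarize_block_dump is a hypothetical helper,
not part of GlusterFS, and the log line format is the standard one
block_dump emits.

```shell
# Sketch only: summarise which processes touched the disk, from
# block_dump lines in the kernel log. Requires root to enable logging.
summarize_block_dump() {
    # Input lines look like: "glusterfsd(1234): WRITE block 42 on sda1"
    # or "bash(99): dirtied inode 7 (x) on sda1".
    # Output: counts per "process operation" pair, busiest first.
    sed -n 's/^\([^(]*\)(\([0-9]*\)): \(READ\|WRITE\|dirtied\).*/\1 \3/p' |
        sort | uniq -c | sort -rn
}

# Usage (as root): enable logging, wait for a wake-up, then inspect:
#   echo 1 > /proc/sys/vm/block_dump
#   sleep 60
#   dmesg | summarize_block_dump
#   echo 0 > /proc/sys/vm/block_dump
```

If glusterfsd shows up in the summary, the requests are ours; if some
other daemon dominates, GlusterFS is not the culprit.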

> Are there any particular parameters/options that should be used in this
> mode to minimize disk wake-ups? Disable direct I/O? Does write-behind
> work for local disks? Any suggestions on this?

1. Try spinning down without GlusterFS running. See if the disk still
wakes up. It is possible a different application is waking up the disk.

2. With glusterfsd running, try unmounting all the mount points served
by this server and leave the server running. It might help us pinpoint
whether the server is issuing disk requests without us knowing about it.

3. Keep GlusterFS server running along with all the mount points
mounted. But do not have any applications running on the mount points.
This will help us determine if the GlusterFS client side stack is
issuing disk requests even though no application requests are coming
through.
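For each of the steps above, a quick way to tell whether anything hit
the disk during the test window is to diff the sectors-written counter
in /proc/diskstats (field 10). A sketch, assuming a Linux server and
sda as the data disk; the helper names are mine, not GlusterFS tooling:

```shell
# Sketch only: count sectors written to a disk over an interval.
sectors_written() {
    # sectors_written DEV [STATSFILE] -> sectors-written counter for DEV
    # (field 10 of /proc/diskstats; STATSFILE override is for testing)
    awk -v dev="$1" '$3 == dev { print $10 }' "${2:-/proc/diskstats}"
}

writes_during() {
    # writes_during DEV SECONDS -> sectors written in that window
    before=$(sectors_written "$1")
    sleep "$2"
    after=$(sectors_written "$1")
    echo $((after - before))
}

# Usage: run e.g. "writes_during sda 60" during each step; a non-zero
# result means something wrote to that disk in the window.
```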

Let's take it from there then.

-Shehjar


Gordan


_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel





