
Re: [Gluster-devel] ls performance


From: Anand Avati
Subject: Re: [Gluster-devel] ls performance
Date: Wed, 20 Feb 2008 17:23:24 +0530

> I am working with a unified AFR filesystem: 4 servers, AFR on the client
> side (the clients are also the servers), and the namespace is AFRed as well.
>
> I notice that with multiple dd processes (spread across the machines)
> writing to the filesystem, ls -l (and even plain ls, which is odd, since it
> should only touch the namespace shares) is rather slow (tens of seconds to
> several minutes).  Splitting the server shares into multiple glusterfsd
> processes helps, and not using the performance translators seems to help a
> little (perhaps because the server processes are then less in demand).


Do you have io-threads on the *server*? When you are writing, io-threads
pushes the write ops to a separate thread and keeps the main thread free for
metadata ops.
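
For reference, loading io-threads on the server side of the volume spec
(between storage/posix and protocol/server) would look roughly like the
sketch below. The export path, volume names, and thread-count value are
illustrative assumptions, not taken from this thread:

volume posix
  type storage/posix
  option directory /export/brick        # assumed export path
end-volume

volume iot
  type performance/io-threads
  option thread-count 4                 # assumed value; tune for your workload
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.iot.allow *            # allow all clients; tighten as needed
  subvolumes iot
end-volume

With this layout, write calls arriving at the server are queued to
io-threads' worker threads, so the main thread stays available to answer the
lookup/stat traffic generated by ls.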

> Also, I notice that when using rm to remove 10GB files, ls will hang
> completely until the rm processes have finished (blocking?).


Is your backend ext3? rm taking excessively long on ext3 is a known issue.
We plan to work around it in future versions by treating unlink as an IO
operation rather than a metadata op.

> Reads impact ls performance, too, but to a much, much smaller degree.
>
> I might consider the possibility that my gigabit links are saturated, but
> my ssh sessions are perfectly responsive.  I can ssh to the server nodes
> and ls -l the shares directly far faster than through the GlusterFS mount,
> when multiple high-speed writes are occurring.




> Any ideas on how to improve the ls performance? Could GlusterFS be tweaked
> to give priority (perhaps a separate thread) to metadata-type queries over
> writes (and especially rm)?


This is what io-threads on the server does, except that currently it treats
rm as a metadata operation. Let us know if it makes a difference. We also
plan to make write-behind's aggressiveness more 'controlled' (writing behind
only a window's worth of data instead of writing behind indefinitely, which
stresses the network link).
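
For comparison, a client-side spec that loads write-behind above
protocol/client might look roughly like the sketch below. The host name,
subvolume names, and aggregate-size value are assumptions for illustration;
the window-size throttling mentioned above is only planned, so it is not
shown here:

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host server1            # assumed server hostname
  option remote-subvolume iot           # assumed exported subvolume name
end-volume

volume wb
  type performance/write-behind
  option aggregate-size 128KB           # assumed value; data batched before being written behind
  subvolumes client
end-volume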

avati

