
Re: [Gluster-devel] performance question


From: Anand Avati
Subject: Re: [Gluster-devel] performance question
Date: Thu, 5 Jul 2007 18:28:09 +0530

Which backend filesystem is in the picture? Probably ext3 is not the best
at handling extended attributes. How is the performance with, say, XFS?

avati

2007/7/5, Krishna Srinivas <address@hidden>:

Hi Gerry,

Good observation. I was checking the performance with
self-heal turned off and then with it turned on.

On glusterfs mounted directory:
With self-heal on:
bash-3.1# time cp -r /etc/ .

real    0m28.048s
user    0m0.004s
sys     0m0.060s
bash-3.1# time rm -rf etc/

real    0m28.666s
user    0m0.000s
sys     0m0.032s
bash-3.1#

With self-heal off:
bash-3.1# time cp -r /etc/ .

real    0m2.376s
user    0m0.012s
sys     0m0.060s
bash-3.1# time rm -rf etc/

real    0m3.639s
user    0m0.000s
sys     0m0.000s
bash-3.1#

So there is a significant difference. The difference is that there is no
extended-attribute management when self-heal is off. The AFR code
does getxattr/setxattr when self-heal is on, so this introduces
some overhead. There is also overhead in the backend filesystem
code to manage the xattrs.
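(For illustration, the kind of per-file xattr round trip involved can be
sketched in C as below. This is not the actual AFR code: the key and value
are placeholders in the user.* namespace so it runs without privileges,
whereas the real AFR attributes live under trusted.*.)

/* Illustrative only: the sort of getxattr/setxattr round trip that
 * self-heal bookkeeping adds per file.  The key and value here are
 * placeholders, not the attributes AFR actually uses. */
#include <stdio.h>
#include <sys/xattr.h>

int main(int argc, char *argv[])
{
        const char *path = argc > 1 ? argv[1] : "testfile";
        const char *key  = "user.demo.version";   /* placeholder key */
        int version = 1;
        int value = 0;

        /* stamp a "version" attribute on the file ... */
        if (lsetxattr(path, key, &version, sizeof(version), 0) == -1)
                perror("lsetxattr");

        /* ... and read it back, the way a self-heal check would */
        if (lgetxattr(path, key, &value, sizeof(value)) == -1)
                perror("lgetxattr");
        else
                printf("%s: %s = %d\n", path, key, value);

        return 0;
}

(Note that on ext3, creating such an attribute typically allocates a
separate xattr block per file unless the filesystem was made with larger
inodes, which would be consistent with the slow backend rm below.)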

However, if I try to delete the etc directory directly on the backend
(it was copied through glusterfs with self-heal on):
bash-3.1# time rm -rf /export/dir1/etc/

real    0m18.414s
user    0m0.000s
sys     0m0.132s
bash-3.1#

So there is significant overhead in the backend filesystem itself
when xattrs are involved.

Checking the overhead on open/close calls:
(here a.out opens, writes a byte, closes)
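(The a.out used here is not included in the mail; a minimal equivalent in
C would be along these lines.)

/* Minimal stand-in for the a.out used in the test: open the file given
 * on the command line, write a single byte, and close it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        int fd = open(argv[1], O_WRONLY);
        if (fd == -1) {
                perror(argv[1]);
                return 1;
        }

        if (write(fd, "x", 1) != 1)
                perror("write");

        close(fd);
        return 0;
}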

Selfheal is on:
bash-3.1# time find . -type f -exec /root/a.out {}  \;

real    0m1.529s
user    0m0.120s
sys     0m0.284s
bash-3.1#

Selfheal off:
bash-3.1# time find . -type f -exec /root/a.out {}  \;

real    0m0.577s
user    0m0.124s
sys     0m0.260s
bash-3.1#



There is not much difference here. So setxattr/getxattr
does not take much time if the xattrs already exist on
the file. Hence the significant overhead is only during create/unlink.

We will see if we can optimize this in any way in the AFR code.
We have to note that the backend filesystem takes a lot
of time during create/delete, which we don't have control
over. But this is still acceptable, as there is not much overhead
during the open/close/write calls.

Regards
Krishna

On 7/5/07, Anand Avati <address@hidden> wrote:

> Gerry,
> please use the write-behind translator on the client side (above AFR)
>
> thanks,
> avati
>
> 2007/7/4, Gerry Reno <address@hidden>:
> >
> > In copying my /usr tree (4.9G) to a gluster client mount with a 4-brick
> > AFR with no other translators I see it is taking about 1 hr. 45 min.  Is
> > this normal performance?
> > Now this is with the bricks all on the same machine and same ext3
> > filesystem, but that seems like a long time even still.
> >
> > Gerry
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > address@hidden
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>
> --
> Anand V. Avati
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
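As a side note on the write-behind suggestion quoted above: a client-side
volume spec loading write-behind on top of AFR would look roughly like the
sketch below. The volume and subvolume names and the aggregate-size value
are placeholders, and option names vary between releases, so check the
write-behind documentation for your version rather than using this as a
drop-in spec.

# illustrative fragment of a client spec: write-behind above a 4-brick AFR
volume afr
  type cluster/afr
  subvolumes brick1 brick2 brick3 brick4
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB        # example value
  subvolumes afr
end-volume

The point of the layering is that writes get aggregated in write-behind
before they reach AFR and the bricks.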




--
Anand V. Avati

