Fwd: [Gluster-devel] Performance


From: Anand Avati
Subject: Fwd: [Gluster-devel] Performance
Date: Mon, 15 Oct 2007 14:09:30 +0530

Copying gluster-devel@

From: Brian Taber <address@hidden>
Date: Oct 15, 2007 3:59 AM
Subject: Re: [Gluster-devel] Performance
To: Anand Avati <address@hidden>

Now there's a beautiful thing....

NFS Write:
time dd if=/dev/zero bs=65536 count=15625 of=/shared/1Gb.file
1024000000 bytes (1.0 GB) copied, 62.9818 seconds, 16.3 MB/s

Gluster Write:
time dd if=/dev/zero bs=65536 count=15625 of=/mnt/glusterfs/1Gb.file
1024000000 bytes (1.0 GB) copied, 41.74 seconds, 24.5 MB/s

NFS Read:
time dd if=/shared/1Gb.file bs=65536 count=15625 of=/dev/zero
1024000000 bytes (1.0 GB) copied, 44.4734 seconds, 23.0 MB/s

Gluster Read:
time dd if=/mnt/glusterfs/1Gb.file bs=65536 count=15625 of=/dev/zero
1024000000 bytes (1.0 GB) copied, 42.1526 seconds, 24.3 MB/s


This test was performed within a VMware virtual machine, so network speed
isn't as good.  I tried it from outside, on a gigabit network:

NFS Write:
time dd if=/dev/zero bs=65536 count=15625 of=/shared/1Gb.file
1024000000 bytes (1.0 GB) copied, 27.619 seconds, 37.1 MB/s

Gluster Write:
time dd if=/dev/zero bs=65536 count=15625 of=/mnt/glusterfs/1Gb.file
1024000000 bytes (1.0 GB) copied, 11.1978 seconds, 91.4 MB/s

NFS Read:
time dd if=/shared/1Gb.file bs=65536 count=15625 of=/dev/zero
1024000000 bytes (1.0 GB) copied, 43.5323 seconds, 23.5 MB/s

Gluster Read:
time dd if=/mnt/glusterfs/1Gb.file bs=65536 count=15625 of=/dev/zero
1024000000 bytes (1.0 GB) copied, 30.6922 seconds, 33.4 MB/s
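
If anyone wants to repeat this, a small loop makes the block-size effect
avati describes below easy to see.  A rough sketch using the same mount
point as above; the file name is arbitrary, and count is recomputed so
that every pass moves ~1 GB:

# sweep dd block sizes against the GlusterFS mount, ~1 GB per pass
for bs in 1024 4096 65536 131072; do
    count=$((1024000000 / bs))
    echo "block size: ${bs}"
    dd if=/dev/zero bs=${bs} count=${count} of=/mnt/glusterfs/bs-test.file
    rm -f /mnt/glusterfs/bs-test.file
done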



> Brian,
>  a block size of 1KB is too small and expensive, especially for network-
> or FUSE-based filesystems. Please try with a larger block size, like 64KB.
>
> avati
>
> On 10/14/07, Brian Taber <address@hidden> wrote:
>>
>> I am new to GlusterFS and I am looking for a replacement for my NFS
>> setup.  I have configured a GlusterFS server on top of a RAID 5 array on
>> 4 SATA hard drives.  Directly I can get a speed of 72.3 MB/s when I do a:
>>
>> dd if=/dev/zero bs=1024 count=1000000 of=/data/1Gb.file
>>
>> If I do the same test over NFSv3 I get performance of 14.1 MB/s
>>
>> If I do the same test over the GlusterFS mount, I get performance of
>> 3.6 MB/s
>>
>> Am I doing something wrong here?  How can I increase my performance to
>> the same as or beyond my current NFS?
>>
>> I setup the server with this config:
>>
>> volume brick-ns
>>   type storage/posix
>>   option directory /gluster-ns
>> end-volume
>>
>> volume brick
>>   type storage/posix
>>   option directory /data/gluster
>> end-volume
>>
>> volume iothreads1    #iothreads can give performance a boost
>>    type performance/io-threads
>>    option thread-count 8
>>    subvolumes brick
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   subvolumes iothreads1 brick-ns
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option auth.ip.iothreads1.allow 192.168.*
>>   option auth.ip.brick-ns.allow 192.168.*
>> end-volume
>>
>>
>> and setup a client with:
>>
>> volume client1-ns
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.200.201
>>   option remote-subvolume brick-ns
>> end-volume
>>
>> volume client1
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.200.201
>>   option remote-subvolume iothreads1
>> end-volume
>>
>> volume bricks
>>   type cluster/unify
>>   subvolumes client1
>>   option namespace client1-ns
>>   option scheduler alu
>>   option alu.limits.min-free-disk  60GB    # stop creating files when free space < 60GB
>>   option alu.limits.max-open-files 10000
>>   option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
>>   option alu.disk-usage.entry-threshold 2GB    # units of KB, MB and GB are allowed
>>   option alu.disk-usage.exit-threshold  60MB   # units of KB, MB and GB are allowed
>>   option alu.open-files-usage.entry-threshold 1024
>>   option alu.open-files-usage.exit-threshold 32
>>   option alu.stat-refresh.interval 10sec
>> end-volume
>>
>> volume writebehind   #writebehind improves write performance a lot
>>   type performance/write-behind
>>   option aggregate-size 131072 # in bytes
>>   subvolumes bricks
>> end-volume
>>
>>
>> When trying the
>>
>>
>>
>>
>
>
>
> --
> It always takes longer than you expect, even when you take into account
> Hofstadter's Law.
>
> -- Hofstadter's Law
>
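
For completeness, this is roughly how the specs above get used.  The
spec-file paths here are made up and the exact flags can differ between
releases, so treat it as a sketch:

glusterfsd -f /etc/glusterfs/server.vol                  # on the server
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs    # on the client

One more knob that may help the read side, where GlusterFS is much closer
to NFS than it is on writes: if this build ships the performance/read-ahead
translator, it can be stacked on the client above write-behind.  Another
sketch, with option names from memory, so please check them against the
translator docs:

volume readahead
  type performance/read-ahead
  option page-size 65536     # matches the 64KB block size used in the tests
  option page-count 16       # pages to pre-fetch per file
  subvolumes writebehind
end-volume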




-- 
It always takes longer than you expect, even when you take into account
Hofstadter's Law.

-- Hofstadter's Law

