Re: [Gluster-devel] how best to set up for performance?


From: Niall Dalton
Subject: Re: [Gluster-devel] how best to set up for performance?
Date: Sun, 16 Mar 2008 09:21:04 -0400


On Mar 16, 2008, at 3:12 AM, Amar S. Tumballi wrote:

> Hey,
> Just missed that 80GB file size part. Are you sure your disks are fast enough to write/read at more than 200MBps for uncached files? Can you run the dd directly on the backend and make sure you are getting enough disk speed?



Sure thing - good to double check.

# caneland is the client, 192.168.3.2 one of my storage servers
address@hidden:/home/niall# ssh 192.168.3.2

# 192.168.3.2 is a 16GB memory machine
address@hidden:~# free -g
             total       used       free     shared    buffers     cached
Mem:            15         15          0          0          0         13
-/+ buffers/cache:          2         13
Swap:            0          0          0

# /big is the target file system
address@hidden:~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdb1              4.0G   2.0G   1.8G  54% /
varrun                 8.5G   209k   8.5G   1% /var/run
varlock                8.5G      0   8.5G   0% /var/lock
udev                   8.5G    58k   8.5G   1% /dev
devshm                 8.5G      0   8.5G   0% /dev/shm
/dev/sda2              6.5T   4.6M   6.5T   1% /big

# even though RAM is only 16GB, let's nuke the caches to make sure there's no funny business
address@hidden:~# echo "3" > /proc/sys/vm/drop_caches
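One caveat worth adding here (my note, not part of the original run): writing to `drop_caches` only discards *clean* cached pages; dirty pages stay resident until writeback. Running `sync` first flushes them, so the drop is actually complete. A minimal sketch:

```shell
# Flush dirty pages to disk first, then drop the page cache,
# dentries, and inodes. Writing to drop_caches requires root,
# so the write is guarded here.
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi
```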

# dd to write a big file..
address@hidden:~# dd if=/dev/zero of=/big/big.file bs=8M count=10000
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 128.927 seconds, 651 MB/s
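One hedged note for anyone reproducing this: by default dd reports its rate as soon as the final write() returns, which can include data still sitting in the page cache. Adding `conv=fdatasync` (not part of the run above) makes dd flush the output file before printing statistics, so the reported MB/s reflects actual disk throughput. A small sketch with an illustrative path and size:

```shell
# conv=fdatasync makes dd call fdatasync() on the output file before
# printing its statistics, so the flush time is included in the rate.
# The /tmp/dd.test path and 64MB size are illustrative only.
dd if=/dev/zero of=/tmp/dd.test bs=1M count=64 conv=fdatasync
rm -f /tmp/dd.test
```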

# we have the file..
address@hidden:~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdb1              4.0G   2.0G   1.8G  54% /
varrun                 8.5G   209k   8.5G   1% /var/run
varlock                8.5G      0   8.5G   0% /var/lock
udev                   8.5G    58k   8.5G   1% /dev
devshm                 8.5G      0   8.5G   0% /dev/shm
/dev/sda2              6.5T    84G   6.5T   2% /big


# nuke the caches out of sheer paranoia before the read test
address@hidden:~# echo "3" > /proc/sys/vm/drop_caches

# dd to read the big file
address@hidden:~# dd if=/big/big.file of=/dev/null bs=8M
10000+0 records in
10000+0 records out
83886080000 bytes (84 GB) copied, 108.51 seconds, 773 MB/s
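As a quick sanity check on dd's arithmetic (dd reports decimal MB, i.e. 10^6 bytes):

```shell
# 83886080000 bytes / 108.51 s / 10^6 bytes-per-MB ~= 773 MB/s,
# matching the figure dd printed for the read test above.
awk 'BEGIN { printf "%.0f MB/s\n", 83886080000 / 108.51 / 1e6 }'
```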

# start an iperf server (and in another window do an iperf -c 192.168.3.2 from the client)
address@hidden:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.3.2 port 5001 connected with 192.168.3.1 port 45751
[  4]  0.0-10.0 sec  7.24 GBytes  6.22 Gbits/sec

That could be tuned up, I'm sure, but it's >796MB/s per storage server, so the network shouldn't be the bottleneck yet.
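For what it's worth, the conversion behind that 796MB/s figure treats 1 Gbit as 1024 Mbit; a strict 1000-based conversion of 6.22 Gbit/s comes out slightly lower:

```shell
# 6.22 Gbit/s with 1024 Mbit per Gbit: 6.22 * 1024 / 8 = 796.2 MB/s.
# The 1000-based conversion gives 777.5 MB/s; either way the link
# delivers at least as much as the ~773 MB/s the disks read at.
awk 'BEGIN {
    printf "%.1f MB/s (1024-based Gbit)\n", 6.22 * 1024 / 8
    printf "%.1f MB/s (1000-based Gbit)\n", 6.22 * 1000 / 8
}'
```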





