
[Gluster-devel] how best to set up for performance?


From: Niall Dalton
Subject: [Gluster-devel] how best to set up for performance?
Date: Sat, 15 Mar 2008 20:39:17 -0400

Hi,

I'd appreciate some advice on how to configure gluster for best performance. After reading the docs and experimenting, I'm still struggling to get more than a few hundred MB/s on large streaming reads.

I have a server with two 10GigE cards connected directly to two storage servers. With no particular effort, I can push >750MB/s over each network interface to the storage machines with iperf, for an aggregate of 1.5GB/s. Local disk tests on the storage servers show 700MB/s reads on each.
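
For reference, the baseline tests were roughly along these lines (I'm reconstructing the exact invocations from memory, so the flags, durations and test file name are approximate):

# iperf server on each storage machine
iperf -s

# from the client, one run per 10GigE interface
iperf -c 192.168.3.2 -t 30
iperf -c 192.168.2.2 -t 30

# local streaming read on each storage server (test file sitting under /big)
dd if=/big/testfile of=/dev/null bs=8M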

Using gluster I struggle to write at more than 160MB/s or read at more than 200MB/s. Interestingly, a single client against a single storage server gives the same aggregate results as a single client against both storage servers - the read/write rate per storage server simply halves. This is using various combinations of read-ahead, write-behind, io-threads and a stripe across the storage servers (I tried many variations of thread counts, cache sizes, aggregate sizes, block sizes, and so on).

My write test is:

dd if=/dev/zero of=/mnt/stripe/big.file bs=8M count=10000

and my read test is:

dd if=/mnt/stripe/big.file of=/dev/null bs=8M
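
(That's 8M x 10000, roughly 80GB per run, so the file is comfortably larger than the client's 64GB of memory.)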

I'm using packages:

fuse-2.7.2glfs8 and glusterfs-1.3.8pre3

Sample configs are below (with the last values tested; as noted, I tried many others).

Any suggestions?

thanks
niall


server1 (8 cores, 16GB memory)
-------------------------------------------

volume brick
  type storage/posix
  option directory /big
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume


server2 (8 cores, 16GB memory)
-------------------------------------------

volume brick
  type storage/posix
  option directory /big
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option auth.ip.brick.allow *
  subvolumes brick
end-volume


client (16 cores, 64GB memory)
-------------------------------------------

volume jr1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.3.2
  option remote-subvolume brick
end-volume

volume jr2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.2.2
  option remote-subvolume brick
end-volume

volume writebehind-jr1
  type performance/write-behind
  option aggregate-size 1024kB
  subvolumes jr1
end-volume

volume writebehind-jr2
  type performance/write-behind
  option aggregate-size 1024kB
  subvolumes jr2
end-volume

volume readahead-jr1
  type performance/read-ahead
  option page-size 1024kB
  option page-count 64
  subvolumes writebehind-jr1
end-volume

volume readahead-jr2
  type performance/read-ahead
  option page-size 1024kB
  option page-count 64
  subvolumes writebehind-jr2
end-volume

volume iothreads-jr1
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes jr1
end-volume

volume iothreads-jr2
  type performance/io-threads
  option thread-count 8
  option cache-size 4096MB
  subvolumes jr2
end-volume

volume stripe0
  type cluster/stripe
  option block-size *:4MB
  subvolumes jr1 jr2
end-volume
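
As an example of the variations mentioned above, one run pointed the stripe at the read-ahead chain instead of the bare client volumes (same block size; I'm quoting it from memory, so it may not match the last run exactly):

volume stripe0
  type cluster/stripe
  option block-size *:4MB
  subvolumes readahead-jr1 readahead-jr2
end-volume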



