Re: [Gluster-devel] Performance problems in our web server setup


From: Anand Avati
Subject: Re: [Gluster-devel] Performance problems in our web server setup
Date: Tue, 24 Jul 2007 20:42:56 +0530

Bernhard,
Thanks for trying glusterfs! I have some questions/suggestions -

1. The read-ahead translator in glusterfs--mainline--2.4 uses an 'always
aggressive' mode. Setting a lower page-count (2?) and a page-size of
131072 would probably help. If you are using gigabit ethernet, glusterfs
can saturate 1Gbps even without read-ahead, so you could in fact try
without read-ahead as well.
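
For instance, a more modest read-ahead volume (a sketch only, reusing the
'writeback' subvolume name from your client spec below) could look like
this:

volume readahead
  type performance/read-ahead
  option page-size 131072    # unit in bytes; 128KB pages instead of 64KB
  option page-count 2        # read-ahead window = page-count x page-size
  subvolumes writeback
end-volume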

2. I would suggest trying whether the latest TLA on glusterfs--mainline--2.5
works well for you, and if it does, use the io-cache translator on the
client side. For your scenario (serving lots of small files read-only),
io-cache should do a lot of good. If you can set up a trial and see how
much io-cache helps you, we would be very interested in knowing your
results (and, if possible, some numbers).
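
If you do move to 2.5, a minimal client-side io-cache volume might look
like the sketch below. The option names (page-size, cache-size) and the
values are my assumptions based on the current TLA, so please double-check
them against the io-cache documentation for the snapshot you build:

volume iocache
  type performance/io-cache
  option page-size 131072      # size of each cache page, in bytes
  option cache-size 67108864   # total cache per mount: 64MB, tune to RAM
  subvolumes afrbricks
end-volume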

3. Please try the patched fuse available at -
http://ftp.zresearch.com/pub/gluster/glusterfs/fuse/fuse-2.7.0-glfs1.tar.gz
  This patched fuse greatly improves read performance, and we expect it to
complement the io-cache feature very well.

4. About using multiple TCP connections: a load-balancer feature is on our
roadmap that will let you balance load across two network interfaces, or
simply use multiple TCP connections over the same interface. You will have
to wait for the 1.4 release for this.

thanks,
avati

2007/7/24, Bernhard J. M. Grün <address@hidden>:

Hello!

We are experiencing some performance problems with our setup at the moment,
and we would be happy if one of you could help us out.
This is our setup:
Two clients connect to two servers that share the same data via AFR.
The two servers hold about 13,000,000 small image files that are
served to the web via the two clients.
First I'll show you the configuration of the servers:
volume brick
  type storage/posix                   # POSIX FS translator
  option directory /media/storage       # Export this directory
end-volume

volume iothreads    #iothreads can give performance a boost
   type performance/io-threads
   option thread-count 16
   subvolumes brick
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option listen-port 6996              # Default is 6996
  option client-volume-filename /opt/glusterfs/etc/glusterfs/client.vol
  subvolumes iothreads
  option auth.ip.iothreads.allow * # Allow access to "iothreads" volume
end-volume

Now the configuration of the clients:
### Add client feature and attach to remote subvolume
volume client1
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.1.1.13     # IP address of the remote brick
  option remote-port 6996              # default server port is 6996
  option remote-subvolume iothreads        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume
volume client2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.1.1.14     # IP address of the remote brick
  option remote-port 6996              # default server port is 6996
  option remote-subvolume iothreads        # name of the remote volume
end-volume

volume afrbricks
  type cluster/afr
  subvolumes client1 client2
  option replicate *:2
end-volume

volume iothreads    #iothreads can give performance a boost
   type performance/io-threads
   option thread-count 8
   subvolumes afrbricks
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option aggregate-size 0  # unit in bytes
  subvolumes iothreads
end-volume

### Add readahead feature
volume bricks
  type performance/read-ahead
  option page-size 65536     # unit in bytes
  option page-count 16       # cache per file  = (page-count x page-size)
  subvolumes writeback
end-volume

We use Lighttpd as our web server to handle the web traffic, and image
loading seems quite slow. The bandwidth used between one client and its
corresponding AFR server is also low: about 12 Mbit/s over a 1 Gbit line.
So there must be a bottleneck in our configuration; maybe you can help us.
At the moment we are using 1.3.0 (mainline--2.4 patch-131). We can't
easily switch to mainline--2.5 right now because the servers are under
high load.

We have also seen that each client uses only one connection to each
server. In my opinion this means that the iothreads subvolume on the
client is (nearly) useless. Wouldn't it be better to establish more
than one connection to each server?

Many thanks in advance

Bernhard J. M. Grün


--
Anand V. Avati

