From: Hannes Dorbath
Subject: Re: mod_glusterfs (was Re: [Gluster-devel] Hammering start() calls)
Date: Sun, 06 Apr 2008 20:51:05 +0200
User-agent: Thunderbird 2.0.0.12 (Windows/20080213)

Anand Avati wrote:
You might also be interested in mod_glusterfs. This is an Apache module
which lets you run the filesystem inside the Apache address space (in a
separate thread), which can greatly boost the performance of serving static
files. This module is available if you check out glusterfs--mainline--3.0.
Currently mod_glusterfs works with apache-1.3.x, but work on getting it
ready for apache-2.x and lighttpd is under way. The way to use it (in your
httpd.conf):

That sounds great; however, I'm currently bound to lighttpd.

I've set up a clean test case to benchmark the issues I'm seeing. With -e 600 -a 600 it seems I'm no longer bottlenecked on stat(), but on open() now.

Below is a trace of a "Hello World" PHP script (FastCGI), along with my current configuration.

I think I messed something up with io-threads. Without GlusterFS, total throughput increases a lot with multiple HTTP clients; with GlusterFS it decreases. I could not find any load where using io-threads improved my situation, so there still seems to be a single point of serialisation.

If you have any suggestions on how to improve this configuration, I'd really appreciate it.

Thanks in advance.


Strace with GlusterFS:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 43.99    0.053319           5     10000           open
 16.22    0.019654           2     10000           accept
 12.95    0.015696           1     20000           close
  7.66    0.009287           0     60000           read
  6.73    0.008154           0     20000           fstat
  3.53    0.004273           0     10000           write
  1.71    0.002071           0     10000           munmap
  1.61    0.001955           0     50000           gettimeofday
  1.33    0.001614           0     40000           fcntl
  1.07    0.001301           0     10000           mmap
  1.01    0.001222           0     10000           chdir
  0.51    0.000620           0     30000           setitimer
  0.45    0.000551           0     20000           recvfrom
  0.36    0.000442           0     10000           shutdown
  0.26    0.000313           0     20000           rt_sigaction
  0.25    0.000302           0     10000           select
  0.22    0.000261           0     20000           rt_sigprocmask
  0.14    0.000164           0     10000           lseek
------ ----------- ----------- --------- --------- ----------------
100.00    0.121199                370000           total


Strace without GlusterFS (XFS noatime, lazy-counters):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 34.98    0.008714           1     10000           accept
 13.22    0.003293           0     60000           read
  9.41    0.002345           0     10000           write
  7.59    0.001890           0     10000           open
  6.81    0.001696           0     50000           gettimeofday
  5.34    0.001331           0     40000           fcntl
  5.07    0.001264           0     20000           close
  3.11    0.000774           0     10000           munmap
  2.77    0.000690           0     10000           chdir
  1.92    0.000478           0     20000           recvfrom
  1.91    0.000476           0     20000           fstat
  1.79    0.000446           0     30000           setitimer
  1.60    0.000399           0     10000           mmap
  1.09    0.000271           0     20000           rt_sigaction
  1.04    0.000260           0     10000           lseek
  0.93    0.000231           0     20000           rt_sigprocmask
  0.77    0.000191           0     10000           select
  0.65    0.000161           0     10000           shutdown
------ ----------- ----------- --------- --------- ----------------
100.00    0.024910                370000           total
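To make the difference easier to see, here is a quick per-syscall comparison of the totals from the two summaries above (numbers copied verbatim; the syscall selection is mine, picking the calls where GlusterFS stands out):

```python
# Total seconds spent per syscall over the 10,000-request run,
# copied from the two strace -c summaries above.
with_glusterfs = {"open": 0.053319, "fstat": 0.008154, "close": 0.015696}
without_glusterfs = {"open": 0.001890, "fstat": 0.000476, "close": 0.001264}

for call in with_glusterfs:
    ratio = with_glusterfs[call] / without_glusterfs[call]
    print(f"{call}: {ratio:.0f}x slower through GlusterFS")
```

open() dominates: it costs roughly 28x more through GlusterFS than on plain XFS, which matches the shift of the bottleneck from stat() to open().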


The server config is:

volume webroot
  type storage/posix
  option directory /data/webs/default/webroot_server
end-volume

volume src
  type performance/io-threads
  option thread-count 8
  option cache-size 32MB
  subvolumes webroot
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes webroot
  option auth.ip.webroot.allow *
end-volume
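One thing I notice rereading this: the protocol/server volume exports webroot directly, so the io-threads volume src is never in the request path on the server side. If server-side threading is what's intended, the export would presumably have to reference src instead, roughly like this (untested sketch; the client's remote-subvolume would then need to be src as well):

```
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes src
  option auth.ip.src.allow *
end-volume
```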


The client config:

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host localhost
  option transport-timeout 10
  option remote-subvolume webroot
end-volume

volume iot
  type performance/io-threads
  option thread-count 8
  option cache-size 32MB
  subvolumes client
end-volume

volume wb
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes iot
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 64MB
  option page-size 128KB
  option force-revalidate-timeout 600
  subvolumes wb
end-volume


--
Best regards,
Hannes Dorbath



