Re: [Gluster-devel] Handling huge number of file read requests


From: Amrik Singh
Subject: Re: [Gluster-devel] Handling huge number of file read requests
Date: Fri, 04 May 2007 10:26:24 -0400
User-agent: Thunderbird 1.5.0.10 (Windows/20070221)

Hi Anand,

Please find attached the config files. This is the configuration for a setup with a single brick.

thanks



### file: client-volume.spec

##############################################
###  GlusterFS Client Volume Specification  ##
##############################################

#### CONFIG FILE RULES:
### Add client feature and attach to remote subvolume
volume client
 type protocol/client
 option transport-type tcp/client     # for TCP/IP transport
# option transport-type ib-sdp/client  # for Infiniband transport
 option remote-host 192.168.10.254      # IP address of the remote brick
# option remote-port 6996              # default server port is 6996
 option remote-subvolume brick        # name of the remote volume
end-volume

### Add writeback feature
volume writeback
 type performance/write-back
 option aggregate-size 131072 # unit in bytes
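 # note: 131072 bytes = 128 KiB of writes aggregated before being flushed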
 subvolumes client
end-volume

### Add readahead feature
volume readahead
 type performance/read-ahead
 option page-size 65536     # unit in bytes
 option page-count 16       # cache per file  = (page-count x page-size)
 subvolumes writeback
end-volume
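
### Sizing note (a sketch, not part of the running config above): with
### these values the cache per file is 16 x 65536 = 1 MiB. For the
### ~2 MB images discussed below, doubling page-count would let
### read-ahead hold a whole image; treat this as an untested tweak:
# option page-count 32    # 32 x 65536 bytes = 2 MiB per file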

### If you are not concerned about performance of interactive commands
### like "ls -l", you wouldn't need this translator.
# volume statprefetch
#  type performance/stat-prefetch
#  option cache-seconds 2  # cache expires in 2 seconds
#  subvolumes readahead    # add "stat-prefetch" feature to "readahead" volume
# end-volume
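
For reference, a client spec like the one above is typically mounted with
the glusterfs client binary; the paths here are illustrative only, not
taken from this setup:

  glusterfs -f /etc/glusterfs/client-volume.spec /mnt/glusterfs
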
### file: server-volume.spec.sample

##############################################
###  GlusterFS Server Volume Specification  ##
##############################################

#### CONFIG FILE RULES:
### Export volume "brick" with the contents of "/home/EMS" directory.
volume brick
 type storage/posix                   # POSIX FS translator
 option directory /home/EMS        # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
 type protocol/server
 option transport-type tcp/server     # For TCP/IP transport
# option transport-type ib-sdp/server  # For Infiniband transport
# option bind-address 192.168.1.10 # Default is to listen on all interfaces
# option listen-port 6996              # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
 subvolumes brick
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth"
# option.
 option auth.ip.brick.allow 192.168.* # Allow access to "brick" volume
end-volume
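
### Sketch (an assumption, not part of this running config): for very
### high concurrent read rates, a server-side io-threads translator
### between "brick" and "server" may help, if your glusterfs build
### ships it:
# volume iothreads
#  type performance/io-threads
#  option thread-count 8    # worker threads serving file operations
#  subvolumes brick
# end-volume
### To use it, point the protocol/server volume at it instead, i.e.
### "subvolumes iothreads" in place of "subvolumes brick".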

Amrik


Anand Avati wrote:
Amrik,
can you send me your config files for both the server and the client?

regards,
avati

On 5/4/07, Amrik Singh <address@hidden> wrote:
Hi Guys,

We are hoping that glusterfs can help with a particular problem we are
facing on our cluster. We have a visual search application that runs on
a cluster with around 300 processors. The compute nodes run searches
against images hosted on an NFS server. Under certain circumstances,
all of these compute nodes request query images at extremely high rates
(20-40 images per second). When 300 nodes are sending 20-40 requests
per second for these images, the NFS server just can't cope, and we
start seeing a lot of retransmissions and very high wait times on the
server as well as on the nodes. The images are around 2 MB each.

With the current application we are not in a position to quickly change
the way things are done, so we are looking for a file system that can
handle this kind of load. We tried glusterfs with the default settings
but did not see any improvement. Is there a way to tune glusterfs to
handle this kind of situation?

I can provide more details about our setup as needed.


thanks

--
Amrik Singh
Idée Inc.
http://www.ideeinc.com




_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel