
Re: [Gluster-devel] Files not available in all clients immediately


From: Claudio Cuqui
Subject: Re: [Gluster-devel] Files not available in all clients immediately
Date: Tue, 18 Mar 2008 16:05:33 -0300
User-agent: Thunderbird 1.5.0.12 (X11/20070530)



Wow... I didn't imagine that I had so much of this in my clients' logs :)

2008-03-13 16:03:21 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 413500: /mmeh47tnKt9mkjOBGpJpphqB1p8WRoPe => 173959 Rehashing because st_nlink less than dentry maps
2008-03-13 16:03:21 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 413508: /t0PcWJnLBKlMabZJw62D9b1Yg1i3BYv9 => 173966 Rehashing because st_nlink less than dentry maps
2008-03-13 16:03:42 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 414640: /prkkqX74mzHwxYSOTWtwPoa4y7I4YVCO => 174116 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:28 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 415589: /g9PV333Ry9Q6poJSCdrvJej8gDKQd6aC => 170821 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:30 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 415832: /V1N9EVZYgWuTSIrTScYFbGQV2Ck96cQj => 174191 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:30 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 415853: /8TBKd5WrWWRUaJxdNSinKk1kT5cPIkVL => 174194 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:39 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416054: /EUBWzXdieXh6C9YCheDAUtBuEJaiI4Ut => 173993 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:41 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416056: /IjLFSrWXb6J143DG8KqMlIJhDPLinBb0 => 170829 Rehashing because st_nlink less than dentry maps
2008-03-13 16:04:55 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416361: /Gng1jYoARcC4lmU06UUk2PABEjaT71py => 174153 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:14 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416484: /caAMRuadiAa3XSop1fe4aYQlPAD66rPn => 174129 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:15 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416606: /3lkwLISdJ31x1WjbfoVka72cXrApas2K => 174272 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:18 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416838: /qpgrMN0354zoZ0QTAqm1Tqm7Gides4y2 => 170967 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:18 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416844: /Qwh2zNPSkolsokxu0rlLCxCKnyaDlq4N => 173967 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:19 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416882: /Mfnu0hJKv920qodHaCljPOmdcSjLQ1dX => 174264 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:34 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416925: /FlNEczCPL9q9ACQwTX6qJC3EgHEESbw7 => 174304 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:34 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416931: /Wwohfs9AmhhJ6edrgN1uVOhhvaNG2odQ => 174305 Rehashing because st_nlink less than dentry maps
2008-03-13 16:05:34 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 416937: /6TpEccab9yDi5fvbKZVbUGVYCQdMyupc => 174308 Rehashing because st_nlink less than dentry maps

Tons of these in all the clients' logs.
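A quick way to tally these warnings per client log (the log path varies by install, so the file below is just a stand-in I populate with two sample lines):

```shell
# Write two sample lines to a stand-in log file, then count the
# "Rehashing" warnings the same way as on a real client log.
cat > /tmp/glusterfs-client.log <<'EOF'
2008-03-13 16:03:21 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 413500: /mmeh47tnKt9mkjOBGpJpphqB1p8WRoPe => 173959 Rehashing because st_nlink less than dentry maps
2008-03-13 16:03:21 W [fuse-bridge.c:391:fuse_entry_cbk] glusterfs-fuse: 413508: /t0PcWJnLBKlMabZJw62D9b1Yg1i3BYv9 => 173966 Rehashing because st_nlink less than dentry maps
EOF
grep -c 'Rehashing because st_nlink less than dentry maps' /tmp/glusterfs-client.log
```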

Regards !

Claudio Cuqui

Anand Avati wrote:
Do you have logs from the client when you see this behaviour?

avati

2008/3/19, Claudio Cuqui <address@hidden <mailto:address@hidden>>:

    Hi there !

    We are using gluster in an environment with multiple webservers and a
    load balancer, where we have only one server and multiple clients (6).
    All servers are running Fedora Core 6 x86_64 with kernel
    2.6.22.14-72.fc6 (with exactly the same packages installed on every
    server). The gluster version used is 1.3.8pre2 + fuse 2.7.2glfs8 (both
    compiled locally). The underlying FS is reiserfs mounted with the
    following options: rw,noatime,nodiratime,notail. This filesystem has
    almost 4 thousand files from 2k to 10MB in size. We are using gluster
    to export this filesystem to all the other webservers. Below is the
    config file used by the gluster server:

    ### Export volume "brick" with the contents of "/home/export" directory.
    volume attachments-nl
      type storage/posix                   # POSIX FS translator
      option directory /C3Systems/data/domains/webmail.pop.com.br/attachments
    end-volume

    volume attachments
      type features/posix-locks
      subvolumes attachments-nl
      option mandatory on
    end-volume


    ### Add network serving capability to above brick.
    volume server
      type protocol/server
      option transport-type tcp/server     # For TCP/IP transport
      option client-volume-filename /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol
      subvolumes attachments-nl attachments
      option auth.ip.attachments-nl.allow *  # Allow access to "attachments-nl" volume
      option auth.ip.attachments.allow *     # Allow access to "attachments" volume
    end-volume

    The problem happens when the LB sends the POST (the uploaded file) to
    one webserver and the next request goes to another webserver that
    tries to access the same file. When this happens, the other client
    gets these messages:

    PHP Warning:
    fopen(/C3Systems/data/domains/c3systems.com.br/attachments/27gBgFQSIiOLDEo7AvxlpsFkqZw9jdnZ):
    failed to open stream: File Not Found.
    PHP Warning:
    unlink(/C3Systems/data/domains/c3systems.com.br/attachments/5Dech7jNxjORZ2cZ9IAbR7kmgmgn2vTE):
    File Not Found.
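    A stopgap I am considering on the application side is to retry the
    open for a short window before giving up; a sketch of the idea (in
    Python rather than our PHP; the helper name, retry count, and delay
    are all hypothetical, not measured values):

```python
import time

def open_with_retry(path, mode="rb", retries=5, delay=0.2):
    """Open `path`, retrying briefly in case another node has not
    yet seen a freshly written file. Hypothetical helper: the retry
    count and delay are guesses, not tuned values."""
    for attempt in range(retries):
        try:
            return open(path, mode)
        except FileNotFoundError:
            if attempt == retries - 1:
                raise          # give up after the last attempt
            time.sleep(delay)  # wait and try again

# Demo against a local file standing in for an attachments path.
with open("/tmp/demo-attachment", "wb") as f:
    f.write(b"payload")
with open_with_retry("/tmp/demo-attachment") as f:
    print(f.read())
```

    This only masks the symptom, of course; it does nothing about the
    underlying consistency window.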

    The LB uses round-robin to distribute the load among the servers.

    Below you can find the gluster configuration file used by all the
    clients:

    ### file: client-volume.spec.sample

    ##############################################
    ###  GlusterFS Client Volume Specification  ##
    ##############################################

    #### CONFIG FILE RULES:
    ### "#" is comment character.
    ### - Config file is case sensitive
    ### - Options within a volume block can be in any order.
    ### - Spaces or tabs are used as delimiter within a line.
    ### - Each option should end within a line.
    ### - Missing or commented fields will assume default values.
    ### - Blank/commented lines are allowed.
    ### - Sub-volumes should already be defined above before referring.

    ### Add client feature and attach to remote subvolume
    volume client
      type protocol/client
      option transport-type tcp/client     # for TCP/IP transport
    # option ib-verbs-work-request-send-size  1048576
    # option ib-verbs-work-request-send-count 16
    # option ib-verbs-work-request-recv-size  1048576
    # option ib-verbs-work-request-recv-count 16
    # option transport-type ib-sdp/client  # for Infiniband transport
    # option transport-type ib-verbs/client # for ib-verbs transport
      option remote-host 1.2.3.4           # IP address of the remote brick
    # option remote-port 6996              # default server port is 6996

    # option transport-timeout 30          # seconds to wait for a reply
                                           # from server for each request
      option remote-subvolume attachments  # name of the remote volume
    end-volume

    ### Add readahead feature
    volume readahead
      type performance/read-ahead
      option page-size 1MB      # unit in bytes
      option page-count 2       # cache per file = (page-count x page-size)
      subvolumes client
    end-volume

    ### Add IO-Cache feature
    volume iocache
      type performance/io-cache
      option page-size 256KB
      option page-count 2
      subvolumes readahead
    end-volume

    ### Add writeback feature
    #volume writeback
    #  type performance/write-behind
    #  option aggregate-size 1MB
    #  option flush-behind off
    #  subvolumes iocache
    #end-volume
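    One variant I plan to test is a client spec with the performance
    translators left out, so every read goes straight through
    protocol/client with no caching layer in between (a debugging sketch
    based on the spec above, not our production config):

```
### Minimal client spec for consistency testing (no read-ahead / io-cache)
volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 1.2.3.4
  option remote-subvolume attachments
end-volume
```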

    When I do the test manually, everything goes fine. What I think is
    happening is that gluster isn't having enough time to sync all the
    clients before they try to access the files (these servers are very
    busy ones; they receive millions of requests per day).

    Is this configuration appropriate for this situation? Is it a bug? A
    feature ;-)? Is there any option, like the sync option in NFS, that I
    can use to guarantee that once a file is written, all the clients
    already see it?

    TIA,

    Claudio Cuqui





    _______________________________________________
    Gluster-devel mailing list
    address@hidden <mailto:address@hidden>
    http://lists.nongnu.org/mailman/listinfo/gluster-devel




--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.



