
Re: [Gluster-devel] Quick Question: glusterFS and kernel (stat) caching


From: Bernhard J. M. Grün
Subject: Re: [Gluster-devel] Quick Question: glusterFS and kernel (stat) caching
Date: Fri, 10 Aug 2007 09:48:05 +0200

Anand,

Thank you again for your great work. It seems that the timeouts are
working correctly. At the moment we are using one of our client
machines without io-cache but with 3600 second timeouts and another
client machine with io-cache (1024MB, 64KB page size) but without any
other modifications. This works in our setup because we don't delete
or modify files on our storage. It seems that the machine with the
changed timeouts is nearly as fast as the machine with io-cache
enabled. We'll test some other configurations and other values for
io-threads, io-cache and the timeouts in the next few days.
All in all, this new version of glusterfs seems to run stably in our setup.
Therefore a big THANK YOU to you and all of your colleagues. You are
doing great work!
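
For reference, an io-cache section matching the values above (glusterfs 1.3-era volfile syntax; the subvolume name below is a placeholder) would look roughly like:

```
volume iocache
  type performance/io-cache
  option cache-size 1024MB    # total cache size, as on the io-cache client
  option page-size 64KB       # cache page granularity
  subvolumes client           # placeholder for the protocol/client volume
end-volume
```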

Bernhard J. M. Grün

2007/8/8, Anand Avati <address@hidden>:
> Bernhard,
>  Can you check out the latest repository snapshot and re-run your tests,
> this time mounting glusterfs with '--entry-timeout 10.0
> --attr-timeout 10.0', which sets entry and attribute timeouts of 10
> seconds for every file?
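
A full client invocation along those lines (spec-file path and mount point are placeholders) would look roughly like:

```
glusterfs --entry-timeout 10.0 --attr-timeout 10.0 \
          -f /etc/glusterfs/client.vol /mnt/glusterfs
```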
>
> avati
>
> 2007/8/8, Anand Avati <address@hidden>:
> > Bernhard,
> >  In FUSE-based filesystems, the filesystem can set the stat cache timeout
> > value. By default it is 1 second. On disk-based filesystems this timeout is
> > 'infinite' (until the dentry cache is shrunk, stat() never goes to the disk
> > a second time). For network-based filesystems it is unsafe to set such high
> > stat cache timeouts, but you are free to try. Currently this value is
> > hardcoded to 1 second in the GlusterFS codebase; I'll be adding a command
> > line argument to set this timeout so that you can experiment with it.
> > Please let us know your results.
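
The cache behaviour can be sketched from userspace by timing repeated stat() calls on the same path (a rough illustration, not a benchmark; on a FUSE mount, calls issued after the attribute timeout has expired travel back to the filesystem daemon):

```python
import os
import time

# Rough sketch: time n repeated stat() calls on one existing path.
# On a local filesystem, calls after the first are answered from the
# kernel's attribute cache; on a FUSE mount they reach the userspace
# filesystem again once the attribute timeout has expired.
def time_stats(path, n=1000):
    start = time.perf_counter()
    for _ in range(n):
        os.stat(path)
    return time.perf_counter() - start

elapsed = time_stats(os.getcwd())
print(f"1000 stat() calls took {elapsed:.6f}s")
```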
> >
> > thanks,
> > avati
> >
> >
> > 2007/8/6, Bernhard J. M. Grün <address@hidden>:
> >
> > > Hello developers!
> > >
> > > At the moment we are trying to optimize our web server setup again.
> > > We tried to parallelize the stat calls in our web server software
> > > (lighttpd), but it seems the kernel does not cache the stat
> > > information from one request for further requests. The same setup
> > > works fine on a local file system, so it seems that glusterFS and/or
> > > FUSE is not able to communicate with the kernel (stat) cache. Is this
> > > right? And is this problem solvable?
> > >
> > > Here is some schematic diagram of our approach:
> > > request -> lighttpd -> threaded FastCGI program that does only a stat
> > > (via fopen) -> lighttpd opens the file for reading and writes the data
> > > to the socket
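
The flow above can be sketched as follows (a minimal stand-in with plain files in place of FastCGI sockets; all names are illustrative):

```python
import os
import tempfile

def fastcgi_stat_check(path):
    """Stat step: return the file size if the file exists, else None."""
    try:
        return os.stat(path).st_size
    except FileNotFoundError:
        return None

def lighttpd_serve(path):
    """Serve step: a second lookup via open(), then read the bytes."""
    with open(path, "rb") as f:
        return f.read()

# Demo on a temporary file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

size = fastcgi_stat_check(path)   # first lookup: stat()
body = lighttpd_serve(path)       # second lookup: open() + read()
os.unlink(path)
print(size, body)                 # → 5 b'hello'
```

On a local filesystem the second lookup is answered from the attributes cached by the first; with the 1-second FUSE timeout each step can pay its own round trip.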
> > >
> > >
> > > In a local scenario the second open uses the cached stat, and so it
> > > does not block other reads in lighttpd at that point; but with a
> > > glusterFS mount it still blocks there.
> > >
> > > Maybe you can give me some advice. Thank you!
> > >
> > > Bernhard J. M. Grün
> > >
> > >
> > > _______________________________________________
> > > Gluster-devel mailing list
> > > address@hidden
> > > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> > --
> > It always takes longer than you expect, even when you take into account
> > Hofstadter's Law.
> >
> > -- Hofstadter's Law
>
>
>



