
Re: [Gluster-devel] excessive inode consumption in namespace?


From: Rhesa Rozendaal
Subject: Re: [Gluster-devel] excessive inode consumption in namespace?
Date: Mon, 02 Jul 2007 21:20:02 +0200
User-agent: Thunderbird 1.5.0.12 (X11/20070604)

Anand Avati wrote:


>> I'm having a bit of a puzzle here.
>>
>> I've set up a one-server, one-client test. The server exports 4 bricks
>> and a namespace volume, and the client unifies those (see specs at the
>> bottom).
>>
>> The namespace directory actually sits on the first partition, so I
>> would expect that partition to show twice the number of consumed
>> inodes compared to the other partitions. But what I'm seeing instead
>> is a threefold consumption:

> That math is a bit off.
>
> Number of inodes in the NS = number of directories in any one child +
> number of files across all children. It is not directly proportional
> to any one child.

You're right. I did the math, and it works out exactly. I was just surprised that my ns+brick partition used almost exactly three times as many inodes as the other bricks, but that turns out to be a coincidence, due to my directory/file ratio.
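
To see why it came out at exactly 3x, here's a rough sketch of the arithmetic with made-up numbers (assuming a 4-brick unify where every directory exists on each brick and on the namespace, while each file lives on one brick plus a zero-byte entry on the namespace):

    # Hypothetical numbers for a 4-brick unify setup. Directories are
    # replicated on every brick and on the namespace; each file lives
    # on one brick, plus a zero-byte placeholder on the namespace.
    bricks = 4
    dirs = 500_000      # directories (present on every brick)
    files = 1_000_000   # files (spread evenly over the bricks)

    plain_brick = dirs + files // bricks   # 750,000 inodes
    namespace = dirs + files               # 1,500,000 inodes (the formula above)
    ns_brick = plain_brick + namespace     # 2,250,000 inodes

    print(ns_brick / plain_brick)  # 3.0 -- exactly 3x because dirs == files / 2

In general the ns+brick holds (dirs + files/bricks) + (dirs + files) inodes, so the ratio is 3 exactly when dirs == files / 2, which is apparently close to my mix.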

> The namespace should be put on a partition where you can create a LOT
> of small files. Generally XFS or reiserfs will do a good job. If you
> are using ext3, format it with a very small block size.

Yes, I will be moving the ns volume to a dedicated disk once I get the space for it. For now, the 488M inodes should be sufficient.
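
On the ext3 side, here's a back-of-the-envelope sketch of how I might pick the block size and bytes-per-inode ratio (the partition size, file count, and device name are made up; -b and -i are the standard mkfs.ext3/mke2fs options for block size and bytes-per-inode):

    # Hypothetical sizing for an ext3 namespace partition. mke2fs
    # allocates one inode per '-i bytes-per-inode'; the default ratio
    # (several KB per inode) runs out of inodes long before disk space
    # when nearly every entry is a zero-byte namespace file.
    partition_bytes = 100 * 1024**3  # assume a 100 GB namespace partition
    expected_files = 80_000_000      # assumed file count, with headroom

    bytes_per_inode = partition_bytes // expected_files  # ~1342, must be >= block size
    print(f"mkfs.ext3 -b 1024 -i {bytes_per_inode} /dev/sdXN")

Small -b and -i values give far more inodes per gigabyte, which is the right trade-off when almost everything on the partition is metadata.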

I'm actually moving back from xfs to ext3, with smaller partitions. We had big problems with xfs after a severe power outage. Its repair requirements are unrealistic: for an 8TB partition, you'd need a box with something like 24GB of RAM, plus two weeks of downtime. We can't afford either, which is why we're switching to glusterfs on ext3.

Putting the namespace on xfs is still a viable option, I guess. It depends on how fast the content grows.

Anyway, thanks for making me look closer and understand what's actually going on!

Rhesa



