
Re: [Gluster-devel] quotas, unified name space, data migration


From: Brent A Nelson
Subject: Re: [Gluster-devel] quotas, unified name space, data migration
Date: Sun, 25 Feb 2007 21:51:00 -0500 (EST)

On Sun, 18 Feb 2007, Anand Babu Periasamy wrote:


,----
| I was wondering if the planned quota support would be
| filesystem-based or more flexible like OpenAFS, which has
| volume-level quotas? What about quota-support integration with NFS
| re-export and SAMBA shares ?
`----
Our plan was to do filesystem-level quotas (based on UIDs). Implementing
volume-level, file-level, or directory-level quotas is easy. If you tell
us exactly what you are looking for, we can plan around it.
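As a purely hypothetical sketch of what a directory/volume-level quota might look like in GlusterFS's volume-specification style: the translator name (features/quota) and both option names below are invented for illustration — no such translator existed at the time of this thread.

```
# HYPOTHETICAL sketch only -- translator and option names are invented
volume quota
  type features/quota            # not a real translator in this era
  subvolumes posix0              # the storage volume being limited
  option limit-total 500GB       # directory/volume-level cap (invented option)
  option limit-uid 1001:10GB     # per-UID limit within it (invented option)
end-volume
```

The point of the sketch is that a translator stack would let both kinds of limit be layered on the same storage, matching the flexibility Brent asks for below.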

Some combination of most or all of the above would be cool. I want to be able to designate a portion of the total storage for home directories, with user-level quotas defined within that space, but the whole space should have an adjustable limit (i.e., a directory-level or volume-level quota), so that I can flexibly adjust the amount of space allocated to the home-directory area vs. the mail-server area vs. the scratch space. That way, we can prevent something from consuming more space than it should (of course), yet still have the flexibility to give more space to someone or something when the need arises, without having to decide on fixed, inflexible "partitions" ahead of time that will become a nuisance later. We would always be able to utilize our storage to the fullest, rather than having too much space assigned to one thing and not enough to another.

Do the AFR file patterns apply to directories as well as files? That would be useful too, probably more so than for files. If so, perhaps the same method could be used to specify quotas, except that something matching a pattern could have a simple fixed limit OR consult a table of per-UID limits.


For NFS quota integration, the rquotad module should be loaded on the
server side. GlusterFS should simply appear to NFS as a regular filesystem
with quota support. The same goes for Samba: "smbd -b" should show
"WITH_QUOTAS". I am just curious why there is a need to re-export over NFS
or Samba when GlusterFS can be mounted directly. We don't support
proprietary OSes, though.
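For what a re-export might look like in practice: a minimal /etc/exports sketch for re-exporting a GlusterFS FUSE mount over the Linux kernel NFS server. The mount path is illustrative; the explicit fsid= is needed because FUSE filesystems do not provide a stable device number for NFS to use.

```
# /etc/exports -- re-export a GlusterFS client mount over kernel NFS
# (path is an example; fsid= is required for FUSE-backed exports)
/mnt/glusterfs  192.168.1.0/24(rw,fsid=14,sync,no_subtree_check)
```

After editing, `exportfs -ra` reloads the export table on typical Linux setups.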


For Samba, we would just use it to talk to Windows. Unless, of course, you have plans to get GlusterFS (just the client would be fine) working there, too. :-)

With regards to NFS:

1) I got somewhat used to Lustre, where there is a concern that a wayward client could affect the overall stability/availability of the filesystem, and a re-export could limit that impact. I'm guessing that isn't as much of a concern with GlusterFS?

2) We would probably need NFS at least for migration purposes. Our Solaris machines pre-date FUSE support, and we would probably upgrade them only after GlusterFS is in place. We also might still have a few Tru64 and old pre-FUSE Linux systems to worry about.

,----
| Also, are there any plans to implement some of the other cool
| aspects of OpenAFS, such as a unified name space, the ability to
| move data from one storage target to another while the filesytem
| remains available, and snapshots?
`----
We already have unified-namespace support (the cluster/unify translator
implements this). Moving files between bricks is on our roadmap. For
example, a defrag tool will be able to reorganize files across bricks
based on load statistics. You will also be able to move files between
bricks manually with the management tool.
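To make the unified-namespace idea concrete, here is a minimal sketch of a client-side volume-specification file in the style of the GlusterFS 1.x series, unifying two remote bricks under one namespace. Host names, volume names, and the scheduler choice are illustrative, and exact option names varied between early releases (1.3, for instance, also required a separate namespace volume for unify).

```
# Sketch of a GlusterFS 1.x client volfile -- names are examples,
# option details may differ by release.
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # example host
  option remote-subvolume brick       # exported volume on server1
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # example host
  option remote-subvolume brick
end-volume

volume unify0
  type cluster/unify                  # presents both bricks as one namespace
  subvolumes client1 client2
  option scheduler rr                 # round-robin file placement (one of several schedulers)
end-volume
```

Because placement is decided by a scheduler rather than a fixed mapping, files can in principle be moved between bricks without changing what clients see, which is what makes the defrag/migration plan above possible.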

Awesome. That's something that worried me about Lustre (although they also have online data migration on their roadmap as a future commercial option). It's important to be able to easily add and remove storage bricks and whole server nodes without disrupting the filesystem. That would let us handle both hardware failures AND storage-infrastructure upgrades without any downtime.

I will plan for on-the-fly snapshot support too. We are adding more developers in a month. Snapshots will be implemented as a translator, in a distributed manner: each snapshot translator will have its own private directory for storing changes. You will be able to keep a chronological sequence of multiple distributed snapshots, with tools to manage them.


Cool. Aside from the obvious usefulness for consistent tape/disk backups, this would give us instantaneous, non-space-consuming, user-accessible backups. I always liked that about OpenAFS (although OpenAFS only allows a single snapshot, not multiple).

One of our developers implemented a "trashcan" translator for fun. Maybe
that will help until we implement snapshot support.
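Since translators stack in the volume-specification file, the trashcan translator would presumably be loaded the same way as any other. The sketch below is an assumption based on that general pattern: the translator type (features/trash), the trash-dir option, and the subvolume name are all guesses, not taken from the actual implementation mentioned above.

```
# Speculative sketch -- type and option names are guesses,
# not confirmed against the actual trashcan translator.
volume trash
  type features/trash          # assumed translator name
  subvolumes posix0            # the storage volume it wraps
  option trash-dir /.trashcan  # assumed: where deleted files are parked
end-volume
```

The general idea would be that unlink operations pass through this layer and get redirected into the trash directory instead of being destroyed immediately.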


I will check it out once I have 1.3 up and running (right now I have no idea what it can do, beyond what I imagine a trashcan translator might do).

Thanks,

Brent



