[Gluster-devel] NFS reexport still a little glitchy


From: Brent A Nelson
Subject: [Gluster-devel] NFS reexport still a little glitchy
Date: Wed, 25 Jul 2007 16:16:51 -0400 (EDT)

I've been testing NFS reexport with cp -pr (from a Sun) of a /usr directory from GlusterFS to GlusterFS. So far, it's been close, but not quite right. The copies never turn out the same; there are always at least some missing files. The cp reports "cannot access" for a lot of items, and GlusterFS logs a ton of op_ret=-1 op_errno=61 errors such as the following:

2007-07-25 14:36:43 E [afr.c:1234:afr_selfheal_getxattr_cbk] mirror2: (path=/nfs2/share/zoneinfo/right/Pacific/Johnston child=share2-1) op_ret=-1 op_errno=61
2007-07-25 14:36:43 E [afr.c:1234:afr_selfheal_getxattr_cbk] ns0: (path=/nfs2/share/zoneinfo/right/Pacific/Johnston child=ns0-0) op_ret=-1 op_errno=61
2007-07-25 14:36:43 E [afr.c:1234:afr_selfheal_getxattr_cbk] ns0: (path=/nfs2/share/zoneinfo/right/Pacific/Johnston child=ns0-1) op_ret=-1 op_errno=61

In my tests from today's TLA repository, nfsd eventually even hangs (this wasn't happening in earlier patch releases). After that, I ran ls on the GlusterFS reexport machine in one of the areas the NFS client had just complained about and got a similar error, but from a different function:

2007-07-25 15:51:10 E [afr.c:563:afr_getxattr_cbk] mirror4: (path=/usr/share/dict child=share4-0) op_ret=-1 op_errno=61
2007-07-25 15:51:10 E [afr.c:563:afr_getxattr_cbk] mirror4: (path=/usr/share child=share4-0) op_ret=-1 op_errno=61

I'm hoping the errors provide some clue to the NFS glitches (and presumably pin the issue down to AFR), but perhaps they're harmless. Any ideas?
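For what it's worth, op_errno=61 on a Linux box decodes to ENODATA ("No data available"), which would fit a getxattr call not finding the extended attribute it asked for; that's just my reading of the number (assuming the servers are Linux; other platforms number errnos differently), not anything from the GlusterFS source. A trivial check on the reexport server:

    /* errno61.c -- print what errno 61 means on this platform */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux this prints "No data available" (ENODATA);
           other platforms map 61 to something else. */
        printf("errno 61: %s\n", strerror(61));
        return 0;
    }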

Thanks,

Brent

PS: An NFS reexport from an extremely simple GlusterFS volume (no protocol/*, no unify, no AFR; only storage/posix) last week seemed fast and trouble-free. If anyone needs NFS reexport and doesn't need AFR, it's definitely worth a shot, even though I haven't tested it (yet) beyond that simple setup! A rough spec sketch follows.
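For anyone wanting to try that simple setup, the spec file was along these lines; the volume name and directory below are made up for illustration, not my exact config (the volume is just mounted locally over fuse and the mount point then exported with the kernel NFS server):

    # minimal one-translator spec: only storage/posix,
    # no protocol/*, no unify, no AFR
    volume brick
      type storage/posix
      option directory /export/data    # hypothetical local export path
    end-volume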



