From: Niels de Vos
Subject: [Gluster-devel] Updated Wireshark packages for RHEL-6 and Fedora-17 available for testing
Date: Wed, 16 May 2012 21:56:04 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.4) Gecko/20120422 Thunderbird/10.0.4

Hi all,

Today I have merged the support for GlusterFS 3.2 and 3.3 into a single Wireshark 'dissector'. The packages with date 20120516 in their version support both the current stable 3.2.x release and the latest 3.3.0qa41. Older 3.3.0 versions will likely have issues due to some changes in the RPC-AUTH protocol that they use. Updating to the latest qa41 release (or newer) is recommended anyway; I do not expect that we'll add support for earlier 3.3.0 releases.

My repository with packages for RHEL-6 and Fedora-17 contains a .repo file for yum (save it in /etc/yum.repos.d):
- http://repos.fedorapeople.org/repos/devos/wireshark-gluster/
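For example, installing the repository and the updated packages could look like this (the exact .repo file name is an assumption; check the directory listing above for the real one):

  # download the .repo file so yum knows about the repository
  wget -O /etc/yum.repos.d/wireshark-gluster.repo \
      http://repos.fedorapeople.org/repos/devos/wireshark-gluster/wireshark-gluster.repo
  # install or update to the patched Wireshark packages
  yum install wireshark wireshark-gnome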

RPMs for other Fedora or RHEL versions can be provided on request. Let me know if you need another version (or architecture).

Individual patches for several different Wireshark versions are available from https://github.com/nixpanic/gluster-wireshark.
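As a rough sketch, applying one of those patches to a matching Wireshark source tree could look like this (the tarball and patch file names here are hypothetical; use the ones matching your Wireshark version):

  # unpack the matching upstream Wireshark sources
  tar xjf wireshark-1.4.12.tar.bz2
  cd wireshark-1.4.12
  # apply the gluster dissector patch from the repository above
  patch -p1 < ../gluster-wireshark.patch
  # rebuild Wireshark as usual
  ./configure && make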

A full history of commits can be found here:
- https://github.com/nixpanic/gluster-wireshark-1.4/commits/master/
  (Support for GlusterFS 3.3 was added by Akhila and Shree, thanks!)

Please test and report successes and problems, and file issues on GitHub (https://github.com/nixpanic/gluster-wireshark-1.4/issues). Some functionality is still missing, but in its current state the dissector should already be good enough for most analysis. The more issues get filed, the easier it becomes to track which items are important.
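For a quick test, a capture of GlusterFS traffic could be made along these lines (24007 is the standard glusterd management port; the brick port range and the 'gluster' display filter name are assumptions, so check the protocol list in your build):

  # capture management and brick traffic without truncating packets
  tcpdump -i eth0 -s 0 -w gluster.pcap 'port 24007 or portrange 24009-24029'
  # inspect the capture with the patched tshark (or open it in Wireshark)
  tshark -r gluster.pcap -R gluster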

Of course, you can also respond to this email and give feedback :-)

After some more cleanup of the code, this dissector will be passed on for review and inclusion in the upstream Wireshark project. More testing results are therefore much appreciated.

Thanks,
Niels


