
Re: [Gluster-devel] Segfault in glusterfsd


From: NovA
Subject: Re: [Gluster-devel] Segfault in glusterfsd
Date: Fri, 15 Jun 2007 18:48:14 +0400

Avati,

 Here is the server spec file you asked for:
----------
volume disk
 type storage/posix              # POSIX FS translator
 option directory /mnt/hd        # Export this directory
end-volume

volume locks
 type features/posix-locks
 subvolumes disk
end-volume

volume brick    #iothreads can give performance a boost
 type performance/io-threads
 option thread-count 8
 subvolumes locks
end-volume

### Add network serving capability to above brick
volume server
 type protocol/server
 option transport-type tcp/server     # For TCP/IP transport
# option bind-address 192.168.1.10     # Default is to listen on all interfaces
# option listen-port 6996              # Default is 6996
 option client-volume-filename /etc/glusterfs/client.vol
 subvolumes brick
 option auth.ip.brick.allow 10.1.0.*  # Allow access to "brick" volume
end-volume
---------
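
The server spec above points clients at /etc/glusterfs/client.vol via the client-volume-filename option. For context, a minimal matching client spec might look like the following sketch (hypothetical; the remote-host address is an assumption taken from the commented bind-address above, and option names follow the same spec-file conventions):

----------
volume client
 type protocol/client
 option transport-type tcp/client    # For TCP/IP transport
 option remote-host 192.168.1.10     # server address (assumed)
 option remote-port 6996             # default server port
 option remote-subvolume brick       # must match the exported volume name
end-volume
----------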

As for the core dump, gdb says:
---
Core was generated by `[glusterfsd]'.
Program terminated with signal 11, Segmentation fault.
#0  dict_destroy (this=0x2aaab0000f70) at dict.c:251
251     dict.c: No such file or directory.
       in dict.c
(gdb) p *prev
Cannot access memory at address 0x4449475f52454c
---

WBR,
 Andrey

2007/6/15, Anand Avati <address@hidden>:
Andrey,
  Can you also send along the server-side spec file? If you still have the
core, is it possible to get the output of 'p *prev' from gdb?



