
Re: [Gluster-devel] glusterfs--mainline--3.0 patch-556


From: Steve
Subject: Re: [Gluster-devel] glusterfs--mainline--3.0 patch-556
Date: Thu, 06 Nov 2008 16:20:00 +0100

It's reproducible. As soon as I try to read the GlusterFS mounted share I get
the error.

GCC information (this is from the host where I run both the client and the
server; on the other server I use gcc 3.4.6):
--------------------
nemesis ~ # gcc -v
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: /var/tmp/portage/sys-devel/gcc-4.3.2/work/gcc-4.3.2/configure 
--prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc-bin/4.3.2 
--includedir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.2/include 
--datadir=/usr/share/gcc-data/i686-pc-linux-gnu/4.3.2 
--mandir=/usr/share/gcc-data/i686-pc-linux-gnu/4.3.2/man 
--infodir=/usr/share/gcc-data/i686-pc-linux-gnu/4.3.2/info 
--with-gxx-include-dir=/usr/lib/gcc/i686-pc-linux-gnu/4.3.2/include/g++-v4 
--host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --disable-altivec 
--enable-nls --without-included-gettext --with-system-zlib --disable-checking 
--disable-werror --enable-secureplt --disable-multilib --enable-libmudflap 
--disable-libssp --enable-cld --disable-libgcj --with-arch=i686 
--enable-languages=c,c++,treelang --enable-shared --enable-threads=posix 
--enable-__cxa_atexit --enable-clocale=gnu 
--with-bugurl=http://bugs.gentoo.org/ --with-pkgversion='Gentoo 4.3.2 p1.0'
Thread model: posix
gcc version 4.3.2 (Gentoo 4.3.2 p1.0)
nemesis ~ #
--------------------


Starting SERVER:
--------------------
2008-11-06 15:38:58 D [glusterfs.c:291:_get_specfp] glusterfs: loading volume 
specfile /etc/glusterfs/glusterfs-server.vol

Version      : glusterfs 1.4.0pre7 built on Nov  6 2008 14:23:21
TLA Revision : glusterfs--mainline--3.0--patch-561
Starting Time: 2008-11-06 15:38:58
Command line : /usr/sbin/glusterfsd -N -l/dev/stdout -L DEBUG -f 
/etc/glusterfs/glusterfs-server.vol
given volume specfile
+-----
  1: ##############################################
  2: ###  GlusterFS Server Volume Specification  ##
  3: ###                NEMESIS                  ##
  4: ##############################################
  5:
  6: # dataspace on local
  7: volume gfs-ds
  8:   type storage/posix                               # POSIX FS translator
  9:   option directory /local/gfs-brick001             # Export this directory
 10: end-volume
 11:
 12: # posix locks on local
 13: volume gfs-ds-locks
 14:   type features/posix-locks
 15:   subvolumes gfs-ds
 16:   option mandatory on                              # Enables mandatory 
locking on all files
 17: end-volume
 18:
 19: # dataspace on remote
 20: volume gfs-remote-ds
 21:   type protocol/client
 22:   option transport-type tcp/client         # For TCP/IP transport
 23:   option remote-host 192.168.0.115         # IP address of the remote 
storage
 24:   option remote-subvolume gfs-ds-locks
 25:   option transport-timeout 10                      # Value in seconds; it 
should be set relatively low
 26: end-volume
 27:
 28: # automatic file replication translator for dataspace
 29: #volume gfs-ds-afr
 30: volume gfs
 31:   type cluster/afr
 32:   subvolumes gfs-ds-locks gfs-remote-ds            # Local and remote 
dataspaces
 33: end-volume
 34:
 35: # the actual exported volume
 36: #volume gfs
 37: #  type performance/io-threads
 38: #  option thread-count 8                           # Default is 1
 39: #  option cache-size 64MB                  # Default is 64MB
 40: #  subvolumes gfs-ds-afr
 41: #end-volume
 42:
 43: # server declaration
 44: volume server
 45:   type protocol/server
 46:   option transport-type tcp/server         # For TCP/IP transport
 47:   subvolumes gfs
 48:   # storage network access only
 49:   option auth.addr.gfs-ds-locks.allow 192.168.0.*,127.0.0.1
 50:   option auth.addr.gfs.allow 192.168.0.*
 51: end-volume
+-----
2008-11-06 15:38:58 D [spec.y:178:new_section] parser: New node for 'gfs-ds'
2008-11-06 15:38:58 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/storage/posix.so
2008-11-06 15:38:58 D [spec.y:202:section_type] parser: 
Type:gfs-ds:storage/posix
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-ds:directory:/local/gfs-brick001
2008-11-06 15:38:58 D [spec.y:350:section_end] parser: end:gfs-ds
2008-11-06 15:38:58 D [spec.y:178:new_section] parser: New node for 
'gfs-ds-locks'
2008-11-06 15:38:58 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/features/posix-locks.so
2008-11-06 15:38:58 D [xlator.c:407:xlator_set_type] xlator: dlsym(notify) on 
/usr/lib/glusterfs/1.4.0pre7/xlator/features/posix-locks.so: undefined symbol: 
notify -- neglecting
2008-11-06 15:38:58 D [spec.y:202:section_type] parser: 
Type:gfs-ds-locks:features/posix-locks
2008-11-06 15:38:58 D [spec.y:335:section_sub] parser: 
child:gfs-ds-locks->gfs-ds
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-ds-locks:mandatory:on
2008-11-06 15:38:58 D [spec.y:350:section_end] parser: end:gfs-ds-locks
2008-11-06 15:38:58 D [spec.y:178:new_section] parser: New node for 
'gfs-remote-ds'
2008-11-06 15:38:58 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/protocol/client.so
2008-11-06 15:38:58 D [spec.y:202:section_type] parser: 
Type:gfs-remote-ds:protocol/client
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-remote-ds:transport-type:tcp/client
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-remote-ds:remote-host:192.168.0.115
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-remote-ds:remote-subvolume:gfs-ds-locks
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:gfs-remote-ds:transport-timeout:10
2008-11-06 15:38:58 D [spec.y:350:section_end] parser: end:gfs-remote-ds
2008-11-06 15:38:58 D [spec.y:178:new_section] parser: New node for 'gfs'
2008-11-06 15:38:58 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/cluster/afr.so
2008-11-06 15:38:58 D [spec.y:202:section_type] parser: Type:gfs:cluster/afr
2008-11-06 15:38:58 D [spec.y:335:section_sub] parser: child:gfs->gfs-ds-locks
2008-11-06 15:38:58 D [spec.y:335:section_sub] parser: child:gfs->gfs-remote-ds
2008-11-06 15:38:58 D [spec.y:350:section_end] parser: end:gfs
2008-11-06 15:38:58 D [spec.y:178:new_section] parser: New node for 'server'
2008-11-06 15:38:58 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/protocol/server.so
2008-11-06 15:38:58 D [spec.y:202:section_type] parser: 
Type:server:protocol/server
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:server:transport-type:tcp/server
2008-11-06 15:38:58 D [spec.y:335:section_sub] parser: child:server->gfs
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:server:auth.addr.gfs-ds-locks.allow:192.168.0.*,127.0.0.1
2008-11-06 15:38:58 D [spec.y:268:section_option] parser: 
Option:server:auth.addr.gfs.allow:192.168.0.*
2008-11-06 15:38:58 D [spec.y:350:section_end] parser: end:server
2008-11-06 15:38:58 D [glusterfs.c:804:main] glusterfs: running in pid 24152

2008-11-06 15:38:58 D [transport.c:104:transport_load] transport: attempt to 
load file /usr/lib/glusterfs/1.4.0pre7/transport/socket.so
2008-11-06 15:38:58 D [server-protocol.c:7130:init] server: defaulting 
limits.transaction-size to 4194304
2008-11-06 15:38:58 D [xlator.c:491:xlator_init_rec] gfs-ds: Initialization done
2008-11-06 15:38:58 D [xlator.c:491:xlator_init_rec] gfs-ds-locks: 
Initialization done
2008-11-06 15:38:58 D [client-protocol.c:4918:init] gfs-remote-ds: setting 
transport-timeout to 10
2008-11-06 15:38:58 D [transport.c:104:transport_load] transport: attempt to 
load file /usr/lib/glusterfs/1.4.0pre7/transport/socket.so
2008-11-06 15:38:58 D [client-protocol.c:4962:init] gfs-remote-ds: defaulting 
limits.transaction-size to 268435456
2008-11-06 15:38:58 D [xlator.c:491:xlator_init_rec] gfs-remote-ds: 
Initialization done
2008-11-06 15:38:58 D [client-protocol.c:5194:notify] gfs-remote-ds: got 
GF_EVENT_PARENT_UP, attempting connect on transport
2008-11-06 15:38:58 D [client-protocol.c:5194:notify] gfs-remote-ds: got 
GF_EVENT_PARENT_UP, attempting connect on transport
2008-11-06 15:38:58 D [client-protocol.c:4610:client_protocol_reconnect] 
gfs-remote-ds: attempting reconnect
2008-11-06 15:38:58 D [name.c:182:af_inet_client_get_remote_sockaddr] 
gfs-remote-ds: option remote-port missing in volume gfs-remote-ds. Defaulting 
to 6996
2008-11-06 15:38:58 D [common-utils.c:213:gf_resolve_ip6] resolver: DNS cache 
not present, freshly probing hostname: 192.168.0.115
2008-11-06 15:38:58 D [common-utils.c:250:gf_resolve_ip6] resolver: returning 
ip-192.168.0.115 (port-6996) for hostname: 192.168.0.115 and port: 6996
2008-11-06 15:38:58 D [client-protocol.c:5231:notify] gfs-remote-ds: got 
GF_EVENT_CHILD_UP
2008-11-06 15:38:58 D [socket.c:924:socket_connect] gfs-remote-ds: connect () 
called on transport already connected
2008-11-06 15:38:58 D [client-protocol.c:4549:client_setvolume_cbk] 
gfs-remote-ds: SETVOLUME on remote-host succeeded
2008-11-06 15:38:59 D [client-protocol.c:4616:client_protocol_reconnect] 
gfs-remote-ds: breaking reconnect chain
2008-11-06 15:39:09 D [addr.c:166:gf_auth] gfs: allowed = "192.168.0.*", 
received addr = "192.168.0.145"
2008-11-06 15:39:09 D [server-protocol.c:6406:mop_setvolume] server: accepted 
client from 192.168.0.145:1022
2008-11-06 15:39:09 D [server-protocol.c:6440:mop_setvolume] server: creating 
inode table with lru_limit=1024, xlator=gfs
2008-11-06 15:39:09 D [inode.c:934:inode_table_new] gfs: creating new inode 
table with lru_limit=1024
2008-11-06 15:39:09 D [inode.c:443:__inode_create] gfs/inode: create inode(0)
2008-11-06 15:39:19 D [inode.c:268:__inode_activate] gfs/inode: activating 
inode(1), lru=0/1024 active=1 purge=0
2008-11-06 15:39:19 D [afr.c:335:afr_lookup_cbk] gfs: scaling inode 1 to 3
*** glibc detected *** /usr/sbin/glusterfsd: free(): invalid pointer: 
0x08059f4e ***
--------------------


Starting client:
--------------------
2008-11-06 15:39:09 D [glusterfs.c:291:_get_specfp] glusterfs: loading volume 
specfile /etc/glusterfs/glusterfs-client.vol

Version      : glusterfs 1.4.0pre7 built on Nov  6 2008 14:23:21
TLA Revision : glusterfs--mainline--3.0--patch-561
Starting Time: 2008-11-06 15:39:09
Command line : /usr/sbin/glusterfs -N -l/dev/stdout -L DEBUG -f 
/etc/glusterfs/glusterfs-client.vol /home/vmail/
given volume specfile
+-----
  1: #############################################
  2: ##  GlusterFS Client Volume Specification  ##
  3: #############################################
  4:
  5: # the exported volume to mount
  6: volume cluster
  7:   type protocol/client
  8:   option transport-type tcp/client
  9:   option remote-host gfs-vmail001.vunet.local      # RRDNS
 10:   option remote-subvolume gfs                      # Exported volume
 11:   option transport-timeout 10                      # Value in seconds, 
should be relatively low
 12: end-volume
 13:
 14: # performance block for cluster (Write Behind Translator)
 15: #volume writeback
 16: #  type performance/write-behind
 17: #  option aggregate-size 1MB                       # Default is 0bytes
 18: #  option window-size 3MB                  # Default is 0bytes
 19: #  option flush-behind on                  # Default is 'off'
 20: #  subvolumes cluster
 21: #end-volume
 22:
 23: # performance block for cluster (Read Ahead Translator)
 24: #volume readahead
 25: #  type performance/read-ahead
 26: #  option page-size 65KB                           # 256KB is the default 
option
 27: #  option page-count 16                            # 2 is default option
 28: #  option force-atime-update off                   # Default is off
 29: #  subvolumes writeback
 30: #end-volume
+-----
2008-11-06 15:39:09 D [spec.y:178:new_section] parser: New node for 'cluster'
2008-11-06 15:39:09 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/protocol/client.so
2008-11-06 15:39:09 D [spec.y:202:section_type] parser: 
Type:cluster:protocol/client
2008-11-06 15:39:09 D [spec.y:268:section_option] parser: 
Option:cluster:transport-type:tcp/client
2008-11-06 15:39:09 D [spec.y:268:section_option] parser: 
Option:cluster:remote-host:gfs-vmail001.vunet.local
2008-11-06 15:39:09 D [spec.y:268:section_option] parser: 
Option:cluster:remote-subvolume:gfs
2008-11-06 15:39:09 D [spec.y:268:section_option] parser: 
Option:cluster:transport-timeout:10
2008-11-06 15:39:09 D [spec.y:350:section_end] parser: end:cluster
2008-11-06 15:39:09 D [xlator.c:367:xlator_set_type] xlator: attempt to load 
file /usr/lib/glusterfs/1.4.0pre7/xlator/mount/fuse.so
2008-11-06 15:39:09 D [glusterfs.c:804:main] glusterfs: running in pid 24155

2008-11-06 15:39:09 D [fuse-options.c:149:fuse_options_validate] fuse-options: 
using mount-point = /home/vmail/
2008-11-06 15:39:09 D [fuse-options.c:156:fuse_options_validate] fuse-options: 
using attr-timeout = 1
2008-11-06 15:39:09 D [fuse-options.c:168:fuse_options_validate] fuse-options: 
using entry-timeout = 1
2008-11-06 15:39:09 D [fuse-options.c:180:fuse_options_validate] fuse-options: 
using direct-io-mode = 1
2008-11-06 15:39:09 D [client-protocol.c:4918:init] cluster: setting 
transport-timeout to 10
2008-11-06 15:39:09 D [transport.c:104:transport_load] transport: attempt to 
load file /usr/lib/glusterfs/1.4.0pre7/transport/socket.so
2008-11-06 15:39:09 D [client-protocol.c:4962:init] cluster: defaulting 
limits.transaction-size to 268435456
2008-11-06 15:39:09 D [client-protocol.c:5194:notify] cluster: got 
GF_EVENT_PARENT_UP, attempting connect on transport
2008-11-06 15:39:09 D [inode.c:934:inode_table_new] fuse: creating new inode 
table with lru_limit=0
2008-11-06 15:39:09 D [inode.c:443:__inode_create] fuse/inode: create inode(0)
2008-11-06 15:39:09 D [client-protocol.c:5194:notify] cluster: got 
GF_EVENT_PARENT_UP, attempting connect on transport
2008-11-06 15:39:09 D [client-protocol.c:4610:client_protocol_reconnect] 
cluster: attempting reconnect
2008-11-06 15:39:09 D [name.c:182:af_inet_client_get_remote_sockaddr] cluster: 
option remote-port missing in volume cluster. Defaulting to 6996
2008-11-06 15:39:09 D [common-utils.c:213:gf_resolve_ip6] resolver: DNS cache 
not present, freshly probing hostname: gfs-vmail001.vunet.local
2008-11-06 15:39:09 D [common-utils.c:250:gf_resolve_ip6] resolver: returning 
ip-192.168.0.145 (port-6996) for hostname: gfs-vmail001.vunet.local and port: 
6996
2008-11-06 15:39:09 D [common-utils.c:270:gf_resolve_ip6] resolver: next DNS 
query will return: ip-192.168.0.115 port-6996
2008-11-06 15:39:09 D [client-protocol.c:5231:notify] cluster: got 
GF_EVENT_CHILD_UP
2008-11-06 15:39:09 D [socket.c:924:socket_connect] cluster: connect () called 
on transport already connected
2008-11-06 15:39:09 D [client-protocol.c:4549:client_setvolume_cbk] cluster: 
SETVOLUME on remote-host succeeded
2008-11-06 15:39:10 D [client-protocol.c:4616:client_protocol_reconnect] 
cluster: breaking reconnect chain
2008-11-06 15:39:19 D [inode.c:268:__inode_activate] fuse/inode: activating 
inode(1), lru=0/0 active=1 purge=0
2008-11-06 15:39:19 D [fuse-bridge.c:334:fuse_entry_cbk] glusterfs-fuse: 2: 
LOOKUP() / => 1
2008-11-06 15:39:19 D [fuse-bridge.c:2137:fuse_getxattr] glusterfs-fuse: 3: 
GETXATTR //1 (system.posix_acl_access)
2008-11-06 15:39:19 D [fuse-bridge.c:1987:fuse_xattr_cbk] glusterfs-fuse: 3: 
GETXATTR() / => 50
2008-11-06 15:39:19 D [fuse-bridge.c:2137:fuse_getxattr] glusterfs-fuse: 4: 
GETXATTR //1 (system.posix_acl_default)
2008-11-06 15:39:39 E [client-protocol.c:240:call_bail] cluster: activating 
bail-out. pending frames = 1. last sent = 2008-11-06 15:39:19. last received = 
2008-11-06 15:39:19. transport-timeout = 10
2008-11-06 15:39:39 C [client-protocol.c:247:call_bail] cluster: bailing 
transport
2008-11-06 15:39:39 D [socket.c:183:__socket_disconnect] cluster: shutdown() 
returned 0. setting connection state to -1
2008-11-06 15:39:39 D [socket.c:93:__socket_rwv] cluster: EOF from peer 
192.168.0.145:6996
2008-11-06 15:39:39 D [socket.c:568:socket_proto_state_machine] cluster: socket 
read failed (Transport endpoint is not connected) in state 1 
(192.168.0.145:6996)
2008-11-06 15:39:39 D [client-protocol.c:4636:protocol_client_cleanup] cluster: 
cleaning up state in transport object 0x8056218
2008-11-06 15:39:39 E [client-protocol.c:4691:protocol_client_cleanup] cluster: 
forced unwinding frame type(1) op(GETXATTR) address@hidden
2008-11-06 15:39:39 E [fuse-bridge.c:2093:fuse_xattr_cbk] glusterfs-fuse: 4: 
GETXATTR() / => -1 (Transport endpoint is not connected)
2008-11-06 15:39:39 E [socket.c:1187:socket_submit] cluster: transport not 
connected to submit (priv->connected = 255)
2008-11-06 15:39:39 D [inode.c:443:__inode_create] fuse/inode: create inode(0)
2008-11-06 15:39:39 D [inode.c:268:__inode_activate] fuse/inode: activating 
inode(0), lru=0/0 active=2 purge=0
2008-11-06 15:39:39 E [fuse-bridge.c:364:fuse_entry_cbk] glusterfs-fuse: 5: 
LOOKUP() / => -1 (Transport endpoint is not connected)
2008-11-06 15:39:39 D [inode.c:311:__inode_retire] fuse/inode: retiring 
inode(0) lru=0/0 active=1 purge=1
2008-11-06 15:39:39 D [client-protocol.c:4610:client_protocol_reconnect] 
cluster: attempting reconnect
2008-11-06 15:39:39 D [name.c:182:af_inet_client_get_remote_sockaddr] cluster: 
option remote-port missing in volume cluster. Defaulting to 6996
2008-11-06 15:39:39 D [common-utils.c:250:gf_resolve_ip6] resolver: returning 
ip-192.168.0.115 (port-6996) for hostname: gfs-vmail001.vunet.local and port: 
6996
2008-11-06 15:39:39 D [client-protocol.c:5231:notify] cluster: got 
GF_EVENT_CHILD_UP
2008-11-06 15:39:39 D [socket.c:924:socket_connect] cluster: connect () called 
on transport already connected
2008-11-06 15:39:39 D [client-protocol.c:4549:client_setvolume_cbk] cluster: 
SETVOLUME on remote-host succeeded
2008-11-06 15:39:40 D [client-protocol.c:4616:client_protocol_reconnect] 
cluster: breaking reconnect chain
2008-11-06 15:40:42 W [glusterfs.c:548:cleanup_and_exit] glusterfs: shutting 
down
2008-11-06 15:40:42 W [fuse-bridge.c:2685:fini] fuse: unmounting /home/vmail/

2008-11-06 15:40:42 W [glusterfs.c:548:cleanup_and_exit] glusterfs: shutting 
down
2008-11-06 15:40:42 D [glusterfs.c:569:cleanup_and_exit] glusterfs: no graph 
present
2008-11-06 15:40:42 D [dict.c:353:dict_destroy] dict: @this=(nil)
2008-11-06 15:40:42 D [dict.c:353:dict_destroy] dict: @this=(nil)
--------------------

I was using FUSE 2.8.0_pre1 and switched back to FUSE 2.7.4, but that did not
help. I get the same error even after recompiling GlusterFS against the freshly
installed FUSE 2.7.4.

I did not have these issues (the crashes) with GlusterFS 1.3.x. I had other
issues there (with file locking, etc.), which is why I started looking at the
TLA version in the first place: I had read in a bug report that a certain patch
level fixes that particular locking issue. Well... now I have new issues, and
my old issue is still not fixed.
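In case a backtrace helps, this is roughly how I would capture one from the
glusterfsd crash (a sketch; gdb being installed and the core file landing in
the working directory are assumptions, since the actual location depends on
kernel.core_pattern):

```shell
# Allow core dumps in the shell that will start the daemon
ulimit -c unlimited
echo "core size limit: $(ulimit -c)"

# Then re-run the server in the foreground until it crashes, e.g.:
#   /usr/sbin/glusterfsd -N -l/dev/stdout -L DEBUG -f /etc/glusterfs/glusterfs-server.vol
#
# and open the resulting core file with gdb to get a full backtrace:
#   gdb /usr/sbin/glusterfsd core
#   (gdb) bt full
```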

Let me know if you need more info from me.

// Steve


-------- Original Message --------
> Date: Thu, 6 Nov 2008 19:22:44 +0530
> From: "Vikas Gorur" <address@hidden>
> To: Steve <address@hidden>
> CC: address@hidden
> Subject: Re: [Gluster-devel] glusterfs--mainline--3.0 patch-556

> 2008/11/6 Steve <address@hidden>:
> > Hello Raghavendra
> >
> > Now I get this when I try to read a GlusterFS mounted filesystem:
> > ----
> > 2008-11-06 14:13:31 D [inode.c:268:__inode_activate] gfs/inode:
> activating inode(1), lru=0/1024 active=1 purge=0
> > 2008-11-06 14:13:31 D [afr.c:335:afr_lookup_cbk] gfs: scaling inode 1 to
> 3
> > *** glibc detected *** /usr/sbin/glusterfsd: free(): invalid pointer:
> 0x08091e5e ***
> > ----
> >
> > Do you need more information? What information?
> 
> Hi Steve,
> 
> Is it easy to reproduce? If so, can you give us your spec files and steps
> to reproduce it?
> 
> If it is not reproducible, it would help us if you had a core file and
> could get a backtrace.
> 
> Vikas
> -- 
> Engineer - Z Research
> http://gluster.com/
