Re: [Gluster-devel] Potential Memory Leak?


From: Karl Bernard
Subject: Re: [Gluster-devel] Potential Memory Leak?
Date: Tue, 23 Oct 2007 08:03:36 -0400
User-agent: Thunderbird 2.0.0.6 (Windows/20070728)

Hello Krishna,

I have 5 servers running the client and 4 servers running the brick server. In the config I was testing, only 3 of the brick servers are used.

I have scripts running on the 5 servers that open images of 5 KB to 20 KB and create thumbnails of about 4 KB from them. All files are written in a hash directory structure.
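A minimal sketch of such a hash-directory layout (the actual scripts were not posted; the two-level MD5 split and the mount path here are assumptions for illustration):

```python
import hashlib
import os

def hashed_path(root, filename):
    """Map a filename into a two-level hash directory, e.g. root/ab/cd/filename.
    The two-level MD5 split is an assumption, not the poster's actual scheme."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], filename)

# Example: where a thumbnail would land on a hypothetical GlusterFS mount
print(hashed_path("/mnt/glusterfs", "image0001-thumb.jpg"))
```

Spreading a million small files across 256 x 256 buckets like this keeps individual directories small, which matters for the unify/namespace lookups described in the configs below.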

After reading and creating a lot of files (1 million, for example), I can see that the memory usage of glusterfsd has grown substantially.

Software versions:
glusterfs-1.3.4
fuse-2.7.0-glfs4

<<-- glusterfs-server.vol -->>
volume brick-posix
       type storage/posix
       option directory /data/glusterfs/dataspace
end-volume

volume brick-ns
       type storage/posix
       option directory /data/glusterfs/namespace
end-volume

volume brick
 type performance/io-threads
 option thread-count 2
 option cache-size 32MB
 subvolumes brick-posix
end-volume

volume server
       type protocol/server
       option transport-type tcp/server
       subvolumes brick brick-ns
       option auth.ip.brick.allow 172.16.93.*
       option auth.ip.brick-ns.allow 172.16.93.*
end-volume
<<-- end of glusterfs-server.vol -->>

<<-- start client.sharedbig.vol -->>
volume sxx01-ns
type protocol/client
option transport-type tcp/client
option remote-host sxx01b
option remote-subvolume brick-ns
end-volume

volume sxx02-ns
type protocol/client
option transport-type tcp/client
option remote-host sxx02b
option remote-subvolume brick-ns
end-volume

volume sxx03-ns
type protocol/client
option transport-type tcp/client
option remote-host sxx03b
option remote-subvolume brick-ns
end-volume

volume sxx04-ns
type protocol/client
option transport-type tcp/client
option remote-host sxx04b
option remote-subvolume brick-ns
end-volume

volume sxx01
type protocol/client
option transport-type tcp/client
option remote-host sxx01b
option remote-subvolume brick
end-volume

volume sxx02
type protocol/client
option transport-type tcp/client
option remote-host sxx02b
option remote-subvolume brick
end-volume

volume sxx03
type protocol/client
option transport-type tcp/client
option remote-host sxx03b
option remote-subvolume brick
end-volume

volume sxx04
type protocol/client
option transport-type tcp/client
option remote-host sxx04b
option remote-subvolume brick
end-volume

volume afr3-4
 type cluster/afr
 subvolumes sxx03 sxx04
 option replicate *:2
end-volume

volume afr2-4ns
 type cluster/afr
 subvolumes sxx02-ns sxx04-ns
 option replicate *:2
end-volume

volume unify
 type cluster/unify
 subvolumes afr3-4
 option namespace afr2-4ns
 option scheduler rr
end-volume

## Add writebehind feature
volume writebehind
 type performance/write-behind
 option aggregate-size 128kB
 subvolumes unify
end-volume

## Add readahead feature
volume readahead
 type performance/read-ahead
 option page-size 256kB
 option page-count 16       # cache per file = (page-count x page-size)
 subvolumes writebehind
end-volume
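With those options, the per-file read-ahead cache works out as page-count x page-size:

```python
page_size_kb = 256   # option page-size 256kB
page_count = 16      # option page-count 16

# Per the comment in the spec file: cache per file = page-count x page-size
cache_per_file_kb = page_count * page_size_kb
print(cache_per_file_kb)  # 4096 kB, i.e. 4 MB of read-ahead cache per open file
```

For a workload of 5-20 KB images this is a generous read-ahead window per file, so many simultaneously open files would add up quickly in the client's memory (though note the growth reported here is in glusterfsd, the server side).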

<<-- end of client.sharedbig.vol -->>


Krishna Srinivas wrote:
Hi Karl,

Do you see the memory usage go up every time you run the script?

Can you give us the config details, spec files and script? That will
help us to find out where the leak might be happening.

Thanks
Krishna

On 10/21/07, Karl Bernard <address@hidden> wrote:
Hello,

I've been testing GlusterFS in a development environment with 4 servers
acting as both clients and servers.

In the last 2 days, I've run maintenance scripts that perform a lot of
read and write operations.

I've noticed that glusterfsd was using 57 MB of memory when initially
started, and its memory usage grew to 85% of total memory on one
of the hosts:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
23219 root      17   0 1027m 861m  756 D    2 85.3 315:29.43 glusterfsd

I also saw the performance slowly degrade, and saw a huge jump in speed
after I restarted the daemon on all the bricks.

I'm unsure what to monitor to help debug the potential problem. Is there
something I can do to help improve gluster and fix that problem?
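One low-effort thing to monitor is the daemon's resident set size over time; steady growth under a repeated, bounded workload points at a leak. A minimal sketch, reading VmRSS from /proc (Linux-specific; looking up the glusterfsd PID, e.g. via pidof, is left as an assumption):

```python
import os

def rss_kb(pid):
    """Return a process's resident set size in kB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # VmRSS is reported in kB
    return 0

# Example: sample our own process; for glusterfsd, substitute its PID
# (e.g. from `pidof glusterfsd`) and log the value after each script run.
print(rss_kb(os.getpid()))
```

Logging this between runs of the maintenance script would show whether memory grows proportionally to files processed or plateaus.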

Karl


_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel





