
Re: [Gluster-devel] libgfapi threads


From: Kelly Burkhart
Subject: Re: [Gluster-devel] libgfapi threads
Date: Fri, 31 Jan 2014 09:07:23 -0600

Thanks Anand,

I notice three different kinds of threads: gf_timer_proc and syncenv_processor in libglusterfs, and glfs_poller in the API.  Right off the bat, two syncenv threads are created, plus one each of the other two.  In my limited testing, it doesn't seem to take much for more threads to be created.

The reason I'm concerned is that we intend to run our gluster client on a machine with all but one core dedicated to latency-critical apps.  The remaining core will handle everything else.  In this scenario, creating scads of threads seems likely to be a pessimization compared to having a single thread with an epoll loop handling everything.  Would any of you familiar with the guts of gluster predict a problem with pegging a gfapi client and all of its child threads to a single core?
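For concreteness, here is a minimal sketch of one way to do that from inside the client itself, assuming sched_setaffinity() is acceptable; the core number is a placeholder, and the point is that pinning the main thread before glfs_init() means every thread gfapi spawns afterwards inherits the restricted mask:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Pin the calling thread to a single core.  Threads created afterwards
 * (including the ones glfs_init() spawns) inherit this affinity mask. */
static void pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);

    /* pid 0 == the calling thread */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    pin_to_core(3);   /* placeholder core number */
    /* ... glfs_new() / glfs_set_volfile_server() / glfs_init() and the
     * rest of the client go here ... */
    return 0;
}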

BTW, attached is a simple patch to help me track which threads are created.  It's Linux-specific, but I think it's useful.  It adds an identifier and instance count to each kind of child thread, so I see this in top:

top - 08:35:47 up 48 min,  3 users,  load average: 0.12, 0.07, 0.05
Tasks:   9 total,   0 running,   9 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.1%sy,  0.0%ni, 98.9%id,  0.0%wa,  0.0%hi,  0.7%si,  0.0%st
Mem:     16007M total,     1372M used,    14634M free,       96M buffers
Swap:     2067M total,        0M used,     2067M free,      683M cached

  PID USER      PR  NI  VIRT  RES  SHR S   %CPU %MEM    TIME+  COMMAND
22979 kelly     20   0  971m 133m  16m S      0  0.8   0:00.06 tst
22987 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/sp:0
22988 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/sp:1
22989 kelly     20   0  971m 133m  16m S      0  0.8   0:00.03 tst/gp:0
22990 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/tm:0
22991 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/sp:2
22992 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/sp:3
22993 kelly     20   0  971m 133m  16m S      0  0.8   0:01.98 tst/gp:1
22994 kelly     20   0  971m 133m  16m S      0  0.8   0:00.00 tst/tm:1
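The actual change is in the attached gluster_pname.patch; purely as an illustration of the kind of Linux-specific naming involved (whether via pthread_setname_np() or prctl(PR_SET_NAME)), a hypothetical helper might look like:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

/* Hypothetical helper: tag a thread as "<tag>:<index>", e.g. "sp:0".
 * Linux caps thread names at 15 characters plus the terminating NUL. */
static void name_thread(pthread_t tid, const char *tag, int index)
{
    char name[16];

    snprintf(name, sizeof(name), "%s:%d", tag, index);
    pthread_setname_np(tid, name);
}

Presumably the patch also folds the program name into the prefix, which is where the "tst/" in the listing above comes from.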

Thanks,

-K



On Thu, Jan 30, 2014 at 4:38 PM, Anand Avati <address@hidden> wrote:
Thread count is independent of the number of servers.  The number of sockets/connections is a function of the number of servers/bricks.  There is a minimum set of threads (the timer thread, syncop exec threads, io-threads, the epoll thread, and, depending on the interconnect, RDMA event-reaping threads), and the counts of some of them (syncop and io-threads) depend on the workload.  All communication with servers is completely asynchronous, and we do not spawn a new thread per server.

HTH
Avati
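
For reference, a minimal gfapi client along the following lines (the host "server1" and volume "testvol" are placeholders, and it links with -lgfapi) is enough to bring up the baseline thread set Avati describes and watch it under top -H:

#include <stdio.h>
#include <unistd.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");            /* placeholder volume name */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "server1", 24007);   /* placeholder host */

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* By the time glfs_init() returns, the timer, syncenv and poller
     * threads discussed above are running; block so they can be inspected. */
    pause();

    glfs_fini(fs);
    return 0;
}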



On Thu, Jan 30, 2014 at 1:17 PM, James <address@hidden> wrote:
On Thu, Jan 30, 2014 at 4:15 PM, Paul Cuzner <address@hidden> wrote:
> Wouldn't the thread count relate to the number of bricks in the volume,
> rather than peers in the cluster?


My naive understanding is:

1) Yes, you should expect to see one connection to each brick.

2) Some of the "scaling gluster to 1000 nodes" work might address the
issue, so as to avoid 1000 * (brick count per server) connections.

But yeah, Kelly: I think you're seeing the right number of threads,
though this is outside of my expertise.

James

_______________________________________________
Gluster-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Attachment: gluster_pname.patch
Description: Text Data

