
Re: Parallel make across multiple connected systems.


From: Henrik Carlqvist
Subject: Re: Parallel make across multiple connected systems.
Date: Thu, 28 Nov 2024 07:48:32 +0100

On Wed, 27 Nov 2024 17:00:27 -0500
Sean Godsell <sgodsell@gmail.com> wrote:
> I was wondering if anyone has any plans to make the actual 'make' command
> work across multiple connected PC systems via networking of some kind.  It
> could be wireless networking, Ethernet, or even networking through
> Thunderbolt, USB 4, or even fiber.  All that matters is that each networked
> PC has access to the same files.
> 
> For example, if you want multiple PCs compiling the Linux kernel source
> code, then each PC needs to see the same kernel files and directory
> structure.  The main build server PC that has all of the kernel source code
> would also need to have something like an NFS server configured and running
> on it.  That way each connected PC will be able to help out with compiling
> the source code, as long as each PC has access to the exact same files via
> an NFS client, which needs to be set up as well.  To speed things up even
> more, you could make sure all of the build programs are installed on each
> client PC as well, like gcc, g++, as, ar, ld, ...

I have written some Makefiles which work more or less that way. The building
blocks of such a solution are listed below, with a rough setup sketch after
the list:

* NFS so all machines see the same project directory
* ssh keys so ssh logins can be done automatically without entering a password
* Some load balancing system like https://balance.inlab.net/overview/
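
For reference, a minimal sketch of what that setup can look like (hostnames
and paths here are just made-up examples, not anything from my real setup):

# On the NFS server, export the project directory (line in /etc/exports):
/srv/project  *.my.net(rw,sync,no_subtree_check)

# On each build client, mount it at the same path:
mount -t nfs nfsserver.my.net:/srv/project /srv/project

# Create an ssh key once and install it on every build host behind the
# balancer, so ssh logins work without a password prompt:
ssh-keygen -t ed25519
ssh-copy-id buildhost1.my.net    # repeat for each build host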

Each heavy task in the Makefile is prefixed with ssh, something like this:

ssh -x balancer.my.net "cd $WORKINGDIR; heavy_task arguments"
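
Inside a Makefile that can look roughly like the rule below. This is only a
sketch with made-up target and tool names; $(CURDIR) is used so that the
remote shell ends up in the same NFS-mounted directory, and the recipe line
must of course start with a tab:

# heavy_task, heavy_input and heavy_result are placeholders.
heavy_result: heavy_input
        ssh -x balancer.my.net "cd $(CURDIR); heavy_task $< -o $@"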

The NFS server should have significantly more network bandwidth than the
clients. If the clients have 1 Gb/s, the NFS server should probably have 10 Gb/s.

The extra time needed for each ssh login might make this solution less
suitable for small tasks like compiling a single small file; instead, bigger
chunks of work should be distributed by the load balancer.
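
One way to get bigger chunks is to let each ssh call build a whole
subdirectory instead of a single file. A sketch of that idea, with made-up
directory names (recipe lines start with a tab):

SUBDIRS = drivers fs net

all: $(SUBDIRS)

# One remote job per subdirectory; running make -j3 locally lets the three
# ssh calls go out in parallel through the balancer.
$(SUBDIRS):
        ssh -x balancer.my.net "cd $(CURDIR)/$@; $(MAKE)"

.PHONY: all $(SUBDIRS)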

When one piece of work depends on another, you might need to add calls to
"sync -f", and maybe also some sleep, to avoid trouble from NFS caching.

For really big builds, you might want to be able to vary and adjust the load
on the machines in the load balancer. This can be done by removing and
re-adding nodes in the balancer, and by adjusting the number of parallel
jobs in make. I have applied a patch which allows adjusting the number of
parallel jobs at https://github.com/henca/Henriks-make

regards Henrik


