help-make

Re: C++ Modules


From: Paul Smith
Subject: Re: C++ Modules
Date: Tue, 13 Jun 2023 16:39:26 -0400
User-agent: Evolution 3.48.2 (by Flathub.org)

On Fri, 2023-06-09 at 18:30 -0700, Kaz Kylheku wrote:
> As it incrementally compiles, the module system determines what
> else needs to be compiled. The transitive closure of the dependency
> graph is interleaved with the compiling.

Well, I've read a bit more (I still need to think about this), and it's
not QUITE that simple, IMO.

First, there's the problem that people would like to be able to take
advantage of modules without having to first convert their entire
existing codebase, AND all the third-party libraries they use, to
modules.  If any part of your code uses "old style" straightforward
#includes, which it certainly will for a very long time, you need
something external to the module system to manage that.

Second, of course, there is incremental building.  You don't want to
have to compile everything every time, which means you need a way to
determine which modules need to be recompiled and which do not.  This
could also be embedded into the "module management" component of the
compiler, but my suspicion is that compiler developers are not very
excited about folding that extra capability into their already-complex
systems.

If it's feasible, it seems more likely that everyone will opt for the
traditional UNIX philosophy of keeping the build tool separate from the
compiler and providing a way for them to communicate.

It looks like there are (at least) two competing ideas (based on my
quick reading): Nathan's solution is to follow the LSP model and create
a service that runs in the background, that build tools can connect to
and query for module dependency information dynamically as the build
proceeds.

The solution being pushed by the CMake folks is to define a file format
that either the compiler or some other tool would generate; you'd run
that tool over the codebase first to generate the information, then the
build tool would be able to parse it and get the information it needs
(kind of like the old makedepend tool).  This is attractive to a
project like CMake because they already have a "pre-process the project
to set up for builds" step.  I'm not exactly sure what would be
required during incremental builds; I assume you'd need to first re-run
the tool on any changed source, then reread the new information.
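To make the idea concrete, here's a sketch of what such a "parse the
dependency file, get makefile rules out" tool might look like.  The
input layout (rules / primary-output / provides / requires /
logical-name) is my loose reading of the format being proposed, and the
CMI naming convention is made up; treat all the field names as
assumptions, not a spec:

```python
"""Sketch: convert a module-dependency JSON file into makefile rules.

The JSON field names and the cmi/ path convention are assumptions for
illustration, not the actual proposed format.
"""
import json


def cmi_path(logical_name):
    # Assumed convention: the CMI for module "foo" lives at "cmi/foo.cmi".
    return "cmi/%s.cmi" % logical_name


def rules_from_deps(deps):
    """Turn one parsed dependency file into makefile rule lines."""
    lines = []
    for rule in deps.get("rules", []):
        obj = rule["primary-output"]
        # A provided module's CMI appears when its object file is built.
        for prov in rule.get("provides", []):
            lines.append("%s: %s" % (cmi_path(prov["logical-name"]), obj))
        # The object depends on the CMI of every module it imports.
        reqs = [cmi_path(r["logical-name"]) for r in rule.get("requires", [])]
        if reqs:
            lines.append("%s: %s" % (obj, " ".join(reqs)))
    return lines


if __name__ == "__main__":
    sample = json.loads("""
    {"rules": [{"primary-output": "a.o",
                "provides": [{"logical-name": "a"}],
                "requires": [{"logical-name": "b"}]}]}
    """)
    print("\n".join(rules_from_deps(sample)))
```

Run on the sample input above, this would emit the two rules tying
a.o to the CMIs it produces and consumes.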

My suspicion is that GNU Make could already work with the second
method, due to its ability to recreate included makefiles and re-exec
itself, if it had some way of parsing the JSON files and/or some tool
that could convert them into makefile rules.
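The re-exec machinery I mean is just the standard included-makefile
dance.  A minimal sketch, assuming a hypothetical json2mk converter and
a made-up compiler flag that emits the dependency JSON:

```make
SRCS := a.cpp b.cpp

# If a .dep.mk is missing or out of date, make rebuilds it via the rule
# below and then re-execs itself to read the fresh dependency info.
include $(SRCS:.cpp=.dep.mk)

%.dep.mk: %.cpp
	$(CXX) --emit-dep-json $< > $*.dep.json   # flag name is hypothetical
	json2mk $*.dep.json > $@                  # hypothetical converter
```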

I guess we'll have to see which method gets more widely adopted.


