QuadriLiteral

joined 1 year ago
[–] [email protected] 2 points 1 month ago (1 children)

Huh, odd. I guess it depends quite heavily on the system? Just to check, I cleaned my build folder and am building now: ~700 files that take around 5 minutes to compile. I don't notice a thing, even with the CPU (Ryzen 7 7700X) fully maxed out. I do notice it on my laptop, but there reducing from 16 to 12 or even 14 jobs is enough. Having to go down to 4 is very different from what I experience. Currently on Manjaro; the laptop has Ubuntu.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (3 children)

If you don't want compilation to take all cores, use one or two fewer cores for the compile. I frequently compile C++ code, and almost always I just let it max out at 100%; I haven't really been bothered by the lag. When I'm in a Teams meeting, for instance, it can cause noticeable lag, so then I do ninja -j 8 or ninja -j 12 and the problem is solved.
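In case the flags help anyone, a minimal sketch of what I mean (the job counts are just examples, adjust to your core count):

    # default: ninja uses all available cores
    ninja
    # cap the build at 12 parallel jobs, e.g. while in a call
    ninja -j 12
    # same idea with make-based builds
    make -j 12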

[–] [email protected] 1 points 1 month ago

Cross-platform and performant: are there options besides C++ and Rust?

[–] [email protected] 11 points 2 months ago (1 children)

I was very surprised yesterday to find out that Unreal Engine now offers native Linux builds as well as Linux targets. Works flawlessly too. So despite all the hate Linux seems to be getting from them, judging by the occasional blog post, they must have devs working solely on this support.

[–] [email protected] 2 points 2 months ago

Turns out you were the hacker all along

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

Note that the scope of "New Circle" is much bigger than "just" memory safety: choice types, pattern matching, ...

[–] [email protected] 1 points 2 months ago (1 children)

Which problems did you experience?

The ccache folder size started becoming huge. And it just didn't speed up the project builds; I don't remember the details of why.

This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.

Right, that might have been the reason.
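For anyone hitting the same thing, the additional tweak usually meant here is ccache's sloppiness setting; a rough sketch from memory, so check the ccache docs for your version:

    # tell ccache to tolerate pch-related differences
    export CCACHE_SLOPPINESS=pch_defines,time_macros
    # with GCC, the pch also needs to be visible to the preprocessor
    export CXXFLAGS="$CXXFLAGS -fpch-preprocess"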

To each their own, but with C++ projects the only way not to run into lengthy build times is to work only on trivial projects. Incremental builds help blunt the pain, but that only goes so far.

When I tried it I was working on a C++ project with 100+ devs and 3-4M LOC, about as big as they come. Compiling everything from scratch took an hour towards the end. Switching to lld was a huge win, as was going from 12 to 24 compilation threads. The code-base was structured in a way that you don't need to build everything to work on a specific part, using dynamically loaded libraries to inject functionality into the main app.

I was a Linux dev there; the pch's worked, though not as well as with MSVC, where they made a HUGE difference. On the other hand, lld blows the Microsoft linker out of the water: clean builds were faster with MSVC, incremental builds faster on Linux.
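In case anyone wants to try the lld switch, it's a one-flag change with Clang; a minimal sketch (assuming lld is installed; the CMake variables are the standard ones, adapt to your build setup):

    # plain compiler driver invocation
    clang++ -fuse-ld=lld main.o util.o -o app
    # or for a CMake project
    cmake -DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=lld -DCMAKE_SHARED_LINKER_FLAGS=-fuse-ld=lld ..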

[–] [email protected] 1 points 3 months ago (3 children)

I've had mixed results with ccache myself, ending up not using it. Compilation times are much less of a problem for me than they used to be, because of the increases in processor power and number of threads. That, together with pchs, judiciously forward declaring, and including only what you use.
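To illustrate the forward-declaration point, a small made-up sketch (Widget and Renderer are placeholder names): when a header only uses a type by reference or pointer, a declaration is enough and the heavy include can move to the .cpp file.

    // renderer.h
    class Widget;                    // forward declaration, no #include "widget.h" needed here

    class Renderer {
    public:
        void draw(const Widget& w);  // reference only, so the declaration suffices
    };

    // renderer.cpp
    #include "renderer.h"
    #include "widget.h"              // the full definition is only pulled in here

    void Renderer::draw(const Widget& w) { /* ... */ }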

[–] [email protected] 1 points 3 months ago (1 children)

From the times Circle has surfaced in discussions, I think I remember reading that it's the fact that it isn't open source that is holding back adoption? Not sure; anyway, as a C++ dev I'd love to see one of the different approaches to fundamentally improving C++ take widespread hold.

[–] [email protected] 1 points 3 months ago

I guess this is Go, and I don't know what the scoping rules are there. In C++ I also suggest putting as much in the if as possible, because it limits the scope of the variables.
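For the C++ side, what I mean is the if-with-initializer form from C++17; a small sketch with a made-up map lookup:

    #include <map>
    #include <string>

    void bump(std::map<std::string, int>& counts, const std::string& key) {
        // 'it' is scoped to the if/else, so it can't leak into the rest of the function
        if (auto it = counts.find(key); it != counts.end()) {
            ++it->second;            // found: reuse the iterator
        } else {
            counts.emplace(key, 1);  // not found
        }
    }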

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (6 children)

Such gains from limiting included headers are surprising to me, as it's the first thing anyone would suggest doing. Clang-tidy hints in Qt Creator show warnings for includes that are not used; for me this works pretty well to keep header-related build times under control. I wonder, if reducing the number of included headers already yields such significant gains, what other gains can be had, and what LOC we're talking about. I've seen dramatic improvements from using pch, for instance, or from isolating Boost usage.
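For the pch side, a minimal sketch of how that looks with CMake 3.16+ (my_target and the header list are made-up; the win usually comes from the heavy third-party headers):

    # CMakeLists.txt
    target_precompile_headers(my_target PRIVATE
        <vector>
        <string>
        <boost/asio.hpp>
    )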

[–] [email protected] 2 points 4 months ago

I found basic functioning of worktrees to fail with submodules. The worktree doesn't know about submodules, and again and again messes up the links to them. Basic pulling, switching branches, and so on frequently fail to work because the link to the submodule is broken. I ended up creating the submodules as worktrees of a separate checkout of the submodule repo, and recreating these submodule worktrees over and over. I pretty much stopped using worktrees at that point.

Have you tried the global git config to enable recursion over submodules by default?

Nope, fingers crossed it helps for you ;) Unrelated to worktrees, but: in the end I like submodules in theory yet found them to be absolutely terrible in practice, and that's without even factoring in the worktrees. So we went back to a monorepo.
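For reference, the config being suggested above is presumably along these lines; hedged, since I never ended up trying it myself:

    # have checkout, pull, etc. recurse into submodules by default
    git config --global submodule.recurse true
    # fetch submodule commits on demand when pulling
    git config --global fetch.recurseSubmodules on-demand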
