
If the code is modularized and put in libraries, how often do you need to rebuild all 2 million lines of code after making a change? What am I missing?


He does say:

>Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn't need to — with most Rust projects being statically linked, the machine code isn't needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.

It was my first thought as well, but I don't know how Rust linking works, so I can't judge how feasible it would be to cache the per-module compilation units.
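For what it's worth, part of that split already exists on the Cargo side: `cargo check` runs the analysis passes (type checking, borrow checking) but skips machine-code generation, which is why it's usually much faster than a full build. A minimal illustration:

```sh
# Analysis only: no machine code is generated and nothing is linked
cargo check

# Full pipeline: analysis, per-crate code generation, then the final link
cargo build
```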


Maybe a slow linker is the problem


The system linker is pretty slow; many folks report massive speedups by switching to LLD.
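A quick way to try LLD on a Rust project, as a sketch: this assumes Linux and that the default `cc` linker driver (Clang, or a reasonably recent GCC) understands `-fuse-ld=lld`:

```sh
# Ask the linker driver to invoke lld instead of the default BFD ld for this build
RUSTFLAGS="-C link-arg=-fuse-ld=lld" cargo build --release
```

The same flag can be made permanent via `rustflags` in `.cargo/config.toml`.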


Replacing BFD with Gold (ELF-only) or LLD can be a huge win.

On the last major production codebase I did this for, linking a single artifact went from ~60 seconds (BFD) to ~10 (Gold). Multiply by nearly 100 artifacts (several dozen libraries, several dozen tools, per-library unit tests, benchmarks, applications, etc., all statically linking some common code), and some basic incremental "I touched a single .cpp file" builds went from 30+ minutes to maybe 5 minutes for a single platform x config combination.
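If anyone wants to try this without touching the build system first, GCC and Clang both accept `-fuse-ld` on the link line; the object and library names below are made up for illustration:

```sh
# Link with gold instead of the default BFD ld
g++ -o mytool main.o -L. -lcommon -fuse-ld=gold

# Or with LLD (Clang and reasonably recent GCC accept this too)
g++ -o mytool main.o -L. -lcommon -fuse-ld=lld
```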


This depends on where you are in the stack. The perspective of someone working on a low-level, widely used library is different from someone working on a top-level application.



