It's similar to the defense-in-depth strategy used in cybersecurity. First, you need to start with a real-time OS (RTOS). We used Microware OS-9. You need to ensure you have processes or tasks guaranteed to run at set intervals. Then you need to use a language that doesn't have garbage collection. Again, you need to know that the code you write will run in the time you measured - you can't have garbage collection interfering with those timings. We used Objective-C (this was the early '90s - and remember, Objective-C was not a language invented by Apple). The team wanted the object-oriented nature of Objective-C with the runtime predictability of C. They felt C++ was too difficult to analyze.
Which is a good segue into the code. Yes, we used static analysis tools and performance analysis tools. We needed to know exactly how long each section of code took to execute on our target processor. We also did a lot of unit testing and had extensive integration testing. We had a dedicated build manager who ran the build daily, ran the unit tests, ran the integration tests, and ran the performance tests. They sent out a daily report. This was a full decade before anyone had created what we now refer to as CI/CD tools. All told, I would say there was a 4:1 testing-to-coding ratio. Our customers never experienced any issues with our system not working properly - and these were industrial users in complex manufacturing environments.
Feature management is a serial process. You add one feature at a time and thoroughly vet it before adding the next one. The marketing department prioritized features - usually based on what customers needed for their upcoming projects. We released on a set schedule so customers could plan their updates.