
I just cannot face the thought of calling any of them "best".

Every one had good and bad features, though. One or two were OS-sized, and I think keeping a codebase that compiles and links to 85GB of output for 20+ devices from being a total disaster inside is harder than writing a neat small Python module or whatever.

GOOD FEATURES:

Maintenance of the build and test: I worked on tools that helped builds go faster, so I saw a lot of codebases where people were not maintaining the build, partly because nobody had that as a responsibility. There was bad management of dependencies leading to build failures, poor performance, and incorrect build output. Android would be a counterexample to that - I don't know if people like developing in it, but it was always hard to accelerate because the maintainers fixed performance problems regularly, leaving our tools with little to improve.

Using appropriate languages. Writing everything in C++ was a fad at one time. All projects work better, port better, have faster build times, and are easier to test if they use memory-safe "build once" languages (e.g. Java) to a maximum and unsafe ones (e.g. C/C++, which have to be rebuilt and tested for each device/OS) to a minimum. IMO Android beat Symbian, among other reasons, because it wasn't all C/C++, which meant a lot of code didn't have to be rebuilt for different devices. This made builds faster, and fast builds lead to better quality because of a short dev-test cycle.

Use of faster compilers over "better" compilers. Ultimate code performance and quality depend more (IMO) on a fast development cycle than on having the most optimizing compiler - GCC versus the older ARM compilers, for example. Now the ARM compiler is based on LLVM, and I know that happened indirectly, from a suggestion I made to someone who then made it to ARM, who then did it.

The setup and build of one codebase I worked on was as easy as one could expect: the build errored out if you tried to use the wrong tools, so you never ended up debugging weird failures because of an incorrect tool in your PATH somewhere. I made this feature happen :-D With big codebases the tools could be included in the version control system, so you knew you had the right compiler, right object dumper, etc. This is another strength of Android - and yet I was on a project for Symbian to do the opposite, because of some utter bonehead who never touched a build in his life and was trying to make a name for himself with his slimy bosses as a "doer" and "reformer."
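
As a minimal sketch of the idea (not the actual implementation - the path and version banner here are made up), a pre-build check can refuse to continue unless the pinned, in-repo compiler is the one being used:

    // toolcheck.go - sketch of a pre-build toolchain check.
    // pinnedCC and wantPrefix are hypothetical; point them at whatever
    // your repo actually vendors.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    const pinnedCC = "./tools/bin/gcc"               // compiler vendored in the repo
    const wantPrefix = "gcc (pinned-toolchain) 12.3" // expected --version banner

    func main() {
        out, err := exec.Command(pinnedCC, "--version").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "build aborted: pinned compiler not found:", err)
            os.Exit(1)
        }
        if !strings.HasPrefix(string(out), wantPrefix) {
            fmt.Fprintf(os.Stderr, "build aborted: unexpected compiler: %q\n", out)
            os.Exit(1)
        }
        fmt.Println("toolchain OK")
    }

The point is that the failure is loud and immediate, instead of a weird miscompile three hours later.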

Codebases (especially big ones) benefit a lot from some kind of codesearch/index where you can find out where a function/class/variable is defined and which version of the source base it was introduced in.

BAD FEATURES:

Exclusively owned code - we need to know who understands code best and who is best placed to review it, but I don't think anyone should have totally exclusive control. It was a nightmare for me at one job - trying to get another team to make some needed change (like fixing their stupid makefiles to work properly in parallel). We (the build team) should have been able to do it ourselves, perhaps including them on the PR. Sometimes ownership is entirely theoretical: nobody who wrote the code is still employed, nobody among the notional owners understands it, and none of them want to go within 100 metres of it in case it blows up and becomes their problem. I simply had to touch such code - no choice - but I kept having to send diffs to people who couldn't be bothered to look at them. It was a case of pushing wet spaghetti, and very simple changes took forever.

Insufficient tests that run infrequently. What else is there to say?

Complicated code with no "why" or "what this is for" comments. The kind of thing you trawl through for weeks without being able to make head or tail of what is going on overall.

Code with so much dependency injection and general SOLID that you have to bounce all over the place to understand a very simple action.

Code where writing tests is an enormous ballache. In one Go codebase the reason was that someone decided the standard Go practice of running an array of test data through a kind of "test engine" was the only way anyone should be allowed to write tests, so you had to do lots of weird things to turn your test cases into data. Generally we use a kind of "religious" approach to get consistency out of a group of people, but then take it much too far.
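
For anyone who hasn't seen it, the pattern in question looks roughly like this (the function and names here are hypothetical). It works nicely for pure input/output functions and gets painful the moment a case needs its own setup, teardown, or control flow:

    // classify_test.go - the "array of test data through a test engine" style.
    // Classify would normally live in its own file; it's inlined here to keep
    // the sketch self-contained.
    package example

    import "testing"

    func Classify(n int) string {
        if n < 0 {
            return "negative"
        }
        return "non-negative"
    }

    func TestClassify(t *testing.T) {
        cases := []struct {
            name string
            in   int
            want string
        }{
            {"negative", -1, "negative"},
            {"zero", 0, "non-negative"},
            {"positive", 7, "non-negative"},
        }
        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                if got := Classify(c.in); got != c.want {
                    t.Errorf("Classify(%d) = %q, want %q", c.in, got, c.want)
                }
            })
        }
    }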

Codebases without automated reformatting - so everyone wastes time arguing about line spacing or camel-case names or whatever in their PRs.



> Code with so much dependency injection and general SOLID that you have to bounce all over the place to understand a very simple action.

I find that happens when people get religious about patterns and methodology without understanding the "why", the language, and how a computer works.

Case in point: I once worked on a C# project that used a port of Spring for dependency injection. Ultimately, it was near impossible to know when something was constructed and what was calling what. There were classes that couldn't call their own methods through "this" because of certain weird dependency injection features in use.

Later, I decided to use dependency injection as a design pattern: instead of a complicated DI framework, there were just a few files of code. It was very easy for newcomers to understand, easy to swap in mock objects, and easy to swap dependencies based on the target platform. It was also easy to see when a dependency was constructed, because nothing was hidden behind a giant framework.
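
Roughly this shape - sketched in Go rather than the original C# just to keep it short, and with made-up names. Dependencies are plain interfaces, and the whole "container" is a few explicit constructor calls:

    package main

    import (
        "fmt"
        "time"
    )

    // Clock is the kind of dependency you might swap per platform or per test.
    type Clock interface {
        Now() time.Time
    }

    // systemClock is the production implementation.
    type systemClock struct{}

    func (systemClock) Now() time.Time { return time.Now() }

    // Report takes its dependencies through its constructor - no framework,
    // so it's obvious exactly when and where everything gets built.
    type Report struct {
        clock Clock
    }

    func NewReport(c Clock) *Report { return &Report{clock: c} }

    func (r *Report) Header() string {
        return "generated at " + r.clock.Now().Format(time.RFC3339)
    }

    func main() {
        // The entire "DI framework" is this handful of explicit lines.
        report := NewReport(systemClock{})
        fmt.Println(report.Header())
    }

Swapping in a fake for a test is just NewReport(fakeClock{}) - no registration, no reflection, nothing to configure.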



