
+1

I can not understate how much I agree with parent comment.

The opposite of move fast, build a shitty prototype and iterate is a deliberate problem solving approach undertaken by the highest caliber of engineers. The actual challenges to be addressed are effectively addressed right at the design stage.

The result is a thing of immense beauty and elegance.

I will forever be grateful for the opportunity I had to see this magnificent piece of engineering in action.



I dunno man, I work at google and this comment feels like circlejerk. It's not held to THAT high a standard. We absolutely commit janky MVPs and iterate.


He is referring to the codebase itself, not individual projects or CLs.


He is referring to the codebase not the code?

What does that mean?


I think they are referring to the developer experience of working in the codebase, rather than the quality of the code.


As a Google outsider, that's not at all the impression I got from the paragraphs.

This:

> It's not held to THAT high a standard. We absolutely commit janky MVPs and iterate.

seems to very directly address this:

> The opposite of move fast, build a shitty prototype and iterate is a deliberate problem solving approach undertaken by the highest caliber of engineers. The actual challenges to be addressed are effectively addressed right at the design stage.

If your claim is that "Well, if you look at the codebase AS A WHOLE, there's absolutely no iteration and shitty prototypes, it's all designed right from the start."... well, I can't see any way that codebase came into existence without being built up by individual projects and changesets.

So, yanno, when folks on the ground report that individual projects and changesets/PRs/whatever ARE using the "Commit something barely serviceable, test it out, and iterate." process, statements like "We always get the design right before we write even a single line of code!" definitely come off as a circlejerk.

Google's a very large software house, it has a very exclusive hiring process, and it (reportedly) has internal tooling that's very well-adapted for the problems a company of its size faces. But Google is still hiring from the same pool of programmers as everyone else, and the odds that zero of those that it hires will work best with the "Get something out there to field-test, and use the test results to make it more suited for field use." method of development are absolutely zero. Given Google's size, the odds that zero of those will never be able to negotiate to work in the way they work best are ALSO zero.

(And, frankly, I expect that this method of development gets used a lot in the company. You can burn an assload of time on simulators (and what is the "What if?" game, but an in-brain simulator?), but in the realm of software, it's not-infrequently the case that the simulator with the best ROI is real-world deployment.)


While Google is a monorepo, that doesn't mean that every part of the codebase is equally critical.

People don't yolo submit CLs to core library components for example.

OTOH submitting somewhat hacky code to a new project while you iterate on it doesn't harm the rest of the code base that much since very little will depend on it.


Okay, sure.

How does this comment address any parts of my statement?

I'll even quote what might be the most important part for you:

> If your claim is that "Well, if you look at the codebase AS A WHOLE, there's absolutely no iteration and shitty prototypes, it's all designed right from the start."... well, I can't see any way that codebase came into existence without being built up by individual projects and changesets.


I, like many others here, echo this sentiment. While I disliked working in ads, the experience of working with that repo+tooling is unmatched by anything else in my career.


If the tooling is so far beyond anything publicly available, why don't you guys make something like that and make millions?


I strongly suspect it’s not that useful for a lot of businesses.

So many have their code split between a two dozen clouds, BA tools (… does Google put that in the monorepo too? Or is that, which is a lot of the code at most businesses, not “really” code at Google?), vendor platforms, low-code tools you all hate and are trying to deprecate but god damned if you aren’t still spending dev hours on new features in it, et c…

I bet achieving anywhere near the full benefits would first require retooling one’s entire business, including processes, and bringing a whole shitload of farmed-out SaaS stuff in-house at enormous expense, most places.


It’s not just tooling; processes, infrastructure and dedicated teams for centralized infrastructure are what makes Google’s monorepo what it is. FWIW, most of the tools are publicly available or have good publicly available counterparts. What’s likely missing elsewhere is funding for infrastructure teams.


I’d love to see something like that applied to a project like Debian. The tools already exist. The cost of switching is too great, and you’d need everyone to learn your new system before you’d see the benefit.

I wrote a bit about this here: https://blog.williammanley.net/2020/05/25/unlock-software-fr...

I understand that nix have made some progress in this direction, but I don’t know any more than that.


Perhaps they make more than that just by using the tooling internally and creating/maintaining other stuff with it — that’s a major competitive advantage


Millions sounds like a lot, but it's only like 3 employees' worth.

To quote broccoli man, "I've forgotten how to count that low" [1].

[1]: https://www.youtube.com/watch?v=3t6L-FlfeaI


I think skill is also an issue here, in both directions. I have worked at a company that followed the opposite of "move fast", and it just turned into a "who is the most correct" and "what is the most elegant code" competition. We hadn't pushed out a single new feature in the three years I'd worked there at that point. There was so much focus on the code that we gave almost no time to business requirements.

The wider implication of this was that the number of tickets we got dropped dramatically, because users knew they'd never be resolved anyway.

Balance is key.


Cannot overstate.


(Aside.) To expand slightly, what robertsdionne is highlighting is the changing usage of this expression. In its original sense, it means an issue is so important that it is impossible to overstate its importance. It is now increasingly used the other way around.

Old me would have said it’s used wrongly, but this happens all the time with language. Especially things being used in the opposite of their original sense, e.g. inflammable for flammable.


In my mind, "cannot overstate" always meant "impossible to overstate", but I think some people interpret/intend "cannot understate" to mean something like "must not understate". I don't know if that's really what they're thinking, but it is how I make sense of it. I have come to just avoid such constructions.

Edit: reminds me of an ancient SNL skit with Ed Asner in which he's a retiring nuclear engineer and as he heads out the door he says to his incompetent co-workers "Just remember, you can't put too much water in a nuclear reactor".


> opposite of their original sense, e.g. inflammable for flammable

Inflammable was never the opposite of flammable. Those words have always been synonyms. The opposite was always non-flammable.



