
I'd argue that they're not completely wrong in doing those things.

Many of those things you list really don't take too much time to do, like writing systemd units or using an ORM. But they really help when anyone needs to take a look at things in the future or someone else wants to contribute as well later on. Besides, they're easier to do when things are still fresh in the mind, and these kinds of chores rarely get done later on when a project has grown.

This being a hobby project may also be a reason why the other programmers want to do things right; they may get satisfaction and learn new things by doing it this way!


> This being a hobby project may also be a reason why the other programmers want to do things right; they may get satisfaction and learn new things by doing it this way!

Exactly my thought. Hobby projects are great opportunities to practice those skills; if you don't practice them, they never develop.


This is a project which builds firmware for multiple devices and processor architectures, supporting developers running various operating systems locally, and includes support for documentation generation and firmware localization. It doesn't sound too strange to me that such a project includes a decent amount of tooling to ensure that compiling things is accessible for a layperson, and ensure a healthy influx of community and corporate contributions which don't diminish software quality. None of the dependencies mentioned in the Dockerfile [1] really seem out of place, to me, and I don't think the documentation generation or checkstyle packages are critical to compiling the firmware.

Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future as - as far as I can tell - none of the software relies on it being run inside a Docker container.

[1] https://github.com/Ralim/IronOS/blob/80c4b58976268849b6d1c8d...


> This is a project which builds firmware for multiple devices and processor architectures, supporting developers running various operating systems locally, and includes support for documentation generation and firmware localization.

Exactly the source of the issue. The scope of the project is preposterous for what it is. I'm not sure what the ratio of boilerplate to actual useful functionality is, but from the little that I saw it is outrageous.

> Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future as

I heavily disagree with this assumption and the rest of the assumptions related to the stability of the dependencies.


The scope of the project is for the project to decide, and 168 contributors and thousands of users seem to disagree with you here.

I'm not really sure what fewer dependencies you are used to, other than a compiler, make, a scripting environment to orchestrate things (bash), and some other scripting environment to cook assets (Python3). I suppose that last bit is something you're not used to in a more embedded world, but in the world of user-facing tools with UIs, it's really not uncommon at all to depend on a font library or an internationalization library so that you can generate an image or display some text. The latter is presumably fairly important, given that the users and hardware manufacturers this project supports aren't based in locations where English is the native language. I'm not sure localization can be pulled out of scope, because of that.

> > Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future as

> I heavily disagree with this assumption and the rest of the assumptions related to the stability of the dependencies.

Docker images quite literally contain an entire (userspace) root filesystem. As long as you have an existing Linux installation on an x86 processor, a kernel without breaking changes relative to the one that was current when the image was built, and some way of extracting a tar archive, you can take the image that you previously built 12 years ago, extract its contents, and run all of the tools (gcc, Python3, make, bash) embedded within outside of Docker, without any dependency issues, because all of the dependent libraries are already inside the image (if they weren't, the project's CI builds would not work at all!).

You can verify this quite easily: install Docker, then run `docker pull ubuntu:latest; docker save --output test.tar ubuntu:latest`.

I'll agree with you that a user who stumbles upon this project 12 years from now (assuming development ceased today) will likely face some challenges: they'll have to source the dependencies from somewhere, and the repository URLs used today may no longer be available by then (most projects probably suffer from this). But if today someone builds the IronOS development image from the Dockerfile and saves it, I really don't know what would have to happen for it to become impossible to get the compiler and other tools contained within to run on supported hardware in 12 years.

EDIT: Imagine what the project owners would have to do to achieve the same things they're doing now (building documentation, cooking required assets) without relying on third-party tools or programming languages other than C. They'd have to spend time writing font parsers, documentation generators, build scripting tooling, and much more! In a sibling comment you mentioned that "it seems to me that at some point this industry stopped trying to solve real issues", but I'd argue that pulling the construction of those tools into scope just to avoid dependency issues is exactly that: solving issues that are not within their scope or merit to solve.


I'm asking what you think about it, not what the devs think about it. Let me phrase it differently: there are a total of 16 languages and 276,969 lines of code in the repo for a "soldering iron firmware".


I'm not sure where you get that language statistic from. I'm going to assume whatever tool you're using thinks that "JSON" and "YAML" are languages, to which I respond that they're not, or at least not as significantly so as a programming language with its own paradigms, libraries, and tools. The repo in question is mostly C/C++, with a relatively small amount of other stuff providing tooling support, and the languages used for that are really not all that problematic, difficult to understand, or unsupported.

As far as LOC goes, I know well enough that it's a meaningless statistic that has very little practical use. I've written 34 lines of JavaScript that were as meaningful as 25k lines of C, but those lines of JS were obviously interpreted on an engine that's millions of LOC.


DOSEMU or FreeDOS in a VM (e.g. VirtualBox) typically gives the best performance with decent ease of use. Otherwise, you'll want to run it natively on Windows XP or earlier, or on FreeDOS. For running in a VM, you'll need an x86 processor with hardware-assisted virtualization enabled. DOSBox might work and is worth trying due to its ease of setup, but you really need a very powerful machine.


Most of the implementation details here don't really matter until you need to modify these advanced types directly. That Ensure type definition line in that example is a low level detail that you put in a library somewhere, import throughout your codebase, and then mostly forget about.

In practice you'd have someone that understands this set it up once, and then document its usage for others, maybe document the implementation to make it easier to modify later.

The TS compiler is surprisingly good at giving you readable error messages when your code violates these advanced types; the errors tell you what you specified and what is supported, and they don't display the low-level type logic as part of the error users see. This means there's very little need for anyone to really understand how these type definitions work.
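To make that concrete, here's a minimal sketch of what such a library-level helper type could look like. The `RequireKeys` name and the field names are invented for illustration; the article's actual `Ensure` definition isn't reproduced here.

```typescript
// Library code: a mapped type that makes the chosen keys required.
// Written once, imported everywhere, then mostly forgotten about.
type RequireKeys<T, K extends keyof T> = T & { [P in K]-?: T[P] };

interface BaseForm {
  foo: number;
  bar?: number;
  baz?: number;
}

// Application code only ever sees this one-line alias:
type B = RequireKeys<BaseForm, "baz">;

const ok: B = { foo: 1, baz: 2 }; // compiles fine
// const bad: B = { foo: 1 };     // error: Property 'baz' is missing
```

Application code touches only the `type B = ...` alias and the resulting error message; the mapped-type machinery stays in the library module.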

EDIT: clarifications and spelling.


Until it's the root cause of code not working as expected, debugged by another developer far removed from the initial implementation. Non-obvious code is harder to maintain. Code is written for people, not machines; the harder it is for people to maintain, the less useful it actually is.


Valid point. On the other hand, these kinds of advanced types can prevent lots of bugs and maintenance work, and may therefore be worth the day of debugging when it breaks after two or three years of usage.

I've used types like this in a pretty advanced TypeScript UI project consuming lots of services to enforce compile time errors. We were using generated TS clients for all of the APIs we consumed, and the compiler would automatically throw readable errors wherever we were missing form fields or types became incompatible. I committed the advanced type once, documented its usage, and I don't think anyone has had to deal with it since, whilst the types have steadily prevented errors.
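As a hedged sketch of that pattern (the request type and field names here are invented, standing in for what a generated API client would actually emit):

```typescript
// Pretend this interface was emitted by an API client generator:
interface RegistrationRequest {
  name: string;
  firstname: string;
  lastname: string;
  "e-mail": string;
  birthdate: string;
  password: string;
}

// A tiny accessor that only accepts keys the generated type knows about,
// so a typo'd field name becomes a compile-time error:
function getField<K extends keyof RegistrationRequest>(
  form: RegistrationRequest,
  key: K
): RegistrationRequest[K] {
  return form[key];
}

const form: RegistrationRequest = {
  name: "Ada", firstname: "Ada", lastname: "Lovelace",
  "e-mail": "ada@example.com", birthdate: "1815-12-10", password: "secret",
};

const mail = getField(form, "e-mail"); // compiles
// getField(form, "email");            // error: '"email"' is not assignable
//                                     // to '"name" | "firstname" | ...'
```

When the backend contract changes and the client is regenerated, every now-invalid field access fails to compile, which is where the steady bug prevention comes from.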

And even then: it's just type definitions. If it really becomes a maintenance burden, or someone has no clue what it does, you can simply replace the type with "any" or something similar; all of your problems are gone and TypeScript won't complain anymore (at the expense of less type error checking).

EDIT: improved wording


Complicated types (like any complicated code) need their own tests demonstrating that they do what the author thinks (and fails to think about, as it's changed).


> In practice you'd have someone that understands this set it up once, and then document its usage for others, maybe document the implementation to make it easier to modify later.

In practice, that someone then leaves the company, leaving this nightmare underfoot.

> The TS compiler is surprisingly good at giving you good readable error messages as well when your code violates these advanced types

Only if you're that original person who understands it! I would still have no idea what is happening, no matter how clear.


The sample type code mentioned above will give the following error on the TypeScript Playground (https://www.typescriptlang.org/play):

  "Type '{ foo: number; }' is not assignable to type 'B'.
    Property 'baz' is missing in type '{ foo: number; }' but required in type '{ foo: number; baz: number; }'."
I've had the compiler emit errors much like the following for way more complicated types that combined several of these kinds of structures together to form much more bespoke type checks (reproduced from memory, so I'm not 100% certain on the error or use case):

  const e = form.email
  ^
  ERROR: "email" is not in '"name" | "firstname" | "lastname" | "e-mail" | "birthdate" | "password"'
The thing to note here is that it often doesn't expose the details of the implementing type and underlying (admittedly complicated) type system primitives at all to users. That said, I'll have to be honest and say that I have seen it throw much more difficult to understand nested errors referring to the underlying type implementation when I was working on the type system itself to create stricter type checks for functionality that was previously unchecked (i.e. treated as "any" by the compiler).

The other thing to note is that these things are really only doing type checking. If it becomes troublesome and does start to spit out type errors incorrectly, throw unreadable errors, or otherwise become a maintenance burden, these types are not particularly difficult to remove, and removing them won't break your code. Consider that the equivalent of removing a linting rule or no longer requesting a review from a colleague. Though it's probably a good idea to document how to remove these advanced checks, for when people find them annoying after their author leaves ;)

Incorrect type checking implementation is probably the biggest problem with these things getting complex, though. If your type check is incorrectly throwing errors for implementations that don't contain any errors at all, that's going to set you back a lot!


Isn't this true for any abstraction, though? If the person who wrote it is inaccessible, you have to understand it by reading the source.


Sure. But this is easier to understand, albeit at the cost of repeating code:

    class B {
        foo: number;
        bar?: number;
        baz: number;
    }
I know these are toy examples, and I'm sure there are reasons why in the real world you'll be modeling things where this is not a good option. But I'd really need to be convinced that the CRUD webapps a lot of us write actually benefit from having this in our source code.


A blog post requires an account with a blogging provider or a hosting provider though, so I wouldn't call it a lower barrier if you don't have it.

Besides that, the context matters here. This post is riding off of the attention that another Skyrim WTF bug got recently, which was posted on Twitter. Continuing the conversation there seems like the lowest barrier to entry to me, considering that the author already has a Twitter account and a following there, especially if you actually want to reach people that might find it interesting.


On Twitter you sometimes need an account just to read posts as well. That's the only reason I have one.

> Continuing the conversation there seems like the lowest barrier to entry to me

You can also link it.


Linking no longer works: for people who are logged out, Twitter has blocked clicking on anything (i.e. you can't click through to the next message, thread, post, etc.)

https://news.ycombinator.com/item?id=28231129

There's a lot of gaslighting of people because Twitter keeps changing their UI (i.e. "oh no? you can't see it? must be something weird, it works for me". But in fact, Twitter is changing the UI for different classes of users, so nothing really works consistently from one user to the next.)


Counterpoint: build steps allow for the creation of batteries-included systems that magically do things right, instead of requiring engineers to do those things manually and know about them. This allows for horizontal scaling of workforces (just add more people and train them just enough to be proficient), and keeps the knowledge requirements for new engineers low (e.g. they don't need to know about image minification and compression, because the build step takes care of that).

I do greatly enjoy no-buildstep projects for the same reasons as you've mentioned, but when working with others I've also seen them fail because of knowledge gaps in more junior engineers. Provisioning a server and CI/CD pipeline to build your projects is a lot easier than ensuring that everyone working on a project has all of the required knowledge to keep it nice and performant at runtime.


You actually don't even need the HTML scaffolding for that, and can author a js-sequence-diagrams diagram straight into a text file, append a simple script to render the document, and save as .html! Example: https://unpkg.com/browse/js-sequence-diagrams-autorenderer@1... - click on "view raw" to see it in action.

Looks a lot cleaner, and the .html itself is a valid diagram as the script tag that bootstraps the renderer is prefixed with a comment hash.

EDIT: I used to have this in a gist that I'd load via rawgit.com, but since that's no longer active, I figured I'd update my script and make it publicly available through unpkg :)


Howdy, that's some black magic! Thanks a lot, that's a really neat idea.


The third slide of Ryan Dahl's 2009 JSConf.eu presentation (https://www.youtube.com/watch?v=ztspvPYybIY) covers most of the reasons for why it was made: "I/O needs to be done differently". Evented I/O via event loops wasn't really that much of a thing back then, and many server-side web frameworks were simply sitting idle whilst waiting on I/O. Node changed this, and thereby enabled a kind of concurrency that was easy to achieve and there by default, often without the programmer really realizing it, because they didn't have to do anything too special to get it done other than write JavaScript with callbacks.

I don't think JavaScript was really the point of it.

(EDIT: but JavaScript having functions as a first-class citizen, and closures, makes it a very good candidate for something that leverages event loops for this kind of thing)


The NodeJS community isn't averse to shell commands as you described; far from it I'd say. But a main differentiating factor between the Node and Python community is that Node folks like to compose things from lower level functionality a bit more, and are much more into functional paradigms and patterns. It helps that in Node, everything has essentially consolidated to using Express and Connect-style middleware, so interop between frameworks and libraries is high, and it's easy to go from a lower level basic framework to one that has batteries included (to put it in Python analogies: going from NodeJS "Flask" to NodeJS "Django" is easy because in Node "Django" runs on top of "Flask" and has support for the same middleware pattern, so all "Flask" middleware works fine if you switch to "Django"). The higher level batteries included frameworks almost always ship with their own CLI and tooling that closely matches what you'd find in something like Django; take a look at Nest.js for an example of such a framework.
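To illustrate the middleware contract in question, here's a self-contained sketch. The types are simplified stand-ins for Node's real request/response objects, and the dispatcher is a toy version of what a framework provides; no actual Express import is used.

```typescript
// The Connect-style contract: (req, res, next) functions composed in a chain.
type Req = { url: string; user?: string };
type Res = { body?: string };
type Next = () => void;
type Middleware = (req: Req, res: Res, next: Next) => void;

const logger: Middleware = (req, _res, next) => {
  console.log(`-> ${req.url}`);
  next(); // hand off to the next middleware in the chain
};

const auth: Middleware = (req, _res, next) => {
  req.user = "alice"; // pretend we authenticated someone
  next();
};

const handler: Middleware = (req, res, _next) => {
  res.body = `hello ${req.user}`; // terminal middleware: don't call next()
};

// A minimal dispatcher, as a framework would provide:
function run(middleware: Middleware[], req: Req, res: Res): void {
  const step = (i: number): void => {
    if (i < middleware.length) middleware[i](req, res, () => step(i + 1));
  };
  step(0);
}

const res: Res = {};
run([logger, auth, handler], { url: "/greet" }, res);
// res.body is now "hello alice"
```

Because everything agrees on this one shape, a middleware written for a bare-bones framework keeps working unchanged when you move to a batteries-included one built on top of it.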


I think this is one of the main reasons why TypeScript got so popular, the other being the excellent support for it in Visual Studio Code. Before adopting TypeScript, I'd have to read documentation in a wide variety of documentation styles and standards, and then manually ensure that I was calling the right functions with the right arguments (or, alternatively, if I was lazy, I'd just write some shim code and attach a debugger to figure out the call signatures of callback functions). With TypeScript and type hints installed for the libraries I'm using, I instead just let my editor hand out typing information and autocomplete hints, and let the TypeScript compiler do type checking.

If anything, TypeScript sometimes feels like a nice middle-ground between C# and JavaScript (and Java?), and though it's not perfect, I do feel that it's pleasurable once you get the hang of it and the quirks of the ecosystem.

