
It is nice and innovative, no doubt. But reinventing the syntax from scratch instead of introducing just the minimal amount of changes to the original C is a barrier that will deter 90% of possible adopters.

The reason C# took off is that it's as close to C/C++ as possible. If there is a difference, it's due to a fundamental semantic change. E.g. it moved from painstakingly reinterpreting hundreds of small header files for each compiled source file to loading efficiently serialized hierarchies of class definitions. Hence #include got replaced with using. It changed the semantics of pointers vs. references, hence 'ref' instead of '*', and so on. But there is no "fn square(x: u32)" instead of "uint32_t square(uint32_t x)" just for the sake of it.
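For concreteness, the two signature styles being contrasted look like this (a minimal sketch in Rust; Zig's version is nearly identical, just without the `->`):

```rust
// C:    uint32_t square(uint32_t x) { return x * x; }
// fn-style (Rust shown; Zig: `fn square(x: u32) u32`):
fn square(x: u32) -> u32 {
    x * x
}

fn main() {
    assert_eq!(square(7), 49);
}
```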

And that's the main reason why many people will never consider even looking into the advantages Zig offers. Keeping another syntax in your head is just not worth it.



If you just keep C ideas you might almost just as well give up altogether.

C# actually only resembles C rather superficially; idiomatically there's a large gap, and of course you should write idiomatic code. For example, it's true you can write a C-style for loop in C#, but you almost never should: C# has a (not great, but it's something) for-each loop that's idiomatic.

C#'s primitive types look superficially like C types, but behave more like the modern sized types from a language like Rust. They're technically structures, albeit with a more convenient alias keyword. For example "long" is a signed 64-bit integer type, like i64, not some arbitrarily "maybe bigger than int" type as it is in C.

123.ToString() is a reasonable thing to write in C#; it means "call the ToString method on this integer 123", much like Rust's 123.to_string() -- you can't do anything similar in C or even in C++.
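The Rust side of that comparison is ordinary code, since primitive integers have inherent methods (a small sketch; the literal defaults to `i32`):

```rust
fn main() {
    // Call a method directly on an integer literal, as in C#.
    let s = 123.to_string();
    assert_eq!(s, "123");

    // Methods chain on expressions the same way.
    assert_eq!((120 + 3).to_string(), "123");
}
```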


>for example it's true you can write a C-style for loop in C#, but you almost never should, C# has a (not great, but it's something) for-each loop that's idiomatic.

This makes the learning curve way easier. You can start meaningfully using C# while using C-style for loops, and eventually switch to 'foreach' once you realize it's better.
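The same gradual path exists in most fn-style languages too; e.g. in Rust (used here purely as an illustration) you can start with index-based loops and switch to iteration later:

```rust
fn main() {
    let v = [10, 20, 30];

    // C-style thinking: loop over indices.
    let mut sum = 0;
    for i in 0..v.len() {
        sum += v[i];
    }
    assert_eq!(sum, 60);

    // Idiomatic: iterate over the elements directly.
    let sum2: i32 = v.iter().sum();
    assert_eq!(sum2, 60);
}
```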

If I wanted to give Zig a try today by using it for some small low-priority task, I would keep stumbling on these minor syntax differences all the time, and would eventually give up and do it in C/C++ because the overhead outweighs my curiosity.

Most pragmatists don't have infinite time to learn a new programming language for fun. They have a very limited amount of attention and tight time constraints, and will move to the next pragmatic solution if it starts looking like the current one is not cutting it.


> I would keep stumbling on these minor syntax differences all the time, and would eventually give up

I can't necessarily fault Zig for the state of this so early in its lifetime, but that shouldn't be a big problem: handling the transition is a job for the diagnostics.

Suppose I write this in Rust: printf("%d", count);

Rust says it can't find a function named printf, but it suggests perhaps I want the print! macro instead?

OK, let's try again: print!("%d", count);

No, says Rust, % style format strings aren't a thing in Rust, use {curly brackets}

Sure enough: print!("{count}"); // compiles and works.
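Put together, the version the compiler steers you toward (captured-identifier format strings have been stable since Rust 1.58):

```rust
fn main() {
    let count = 42;

    // `{count}` captures the local variable by name;
    // the older positional form `println!("{}", count)` also works.
    println!("{count}");
    assert_eq!(format!("{count}"), "42");
}
```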


> And that's the main reason why many people will never consider even looking into the advantages Zig offers.

I think you got it upside down. Syntax is the least of its problems. Any experienced developer has already learned several languages and can pick up new syntax quickly. Zig syntax is straightforward, similar to other languages, and can be learned in a couple of hours. Easy stuff. The problems with Zig are more related to uncertainty around long term support, the network effect, and so on.


I disagree; I'd argue that you've got it backwards.

C has a fair amount of kludge in its syntax. C cannot change its syntax because it would massively break backwards compatibility, which is bad.

New languages do not have this problem - they don't have to worry about backwards compatibility. Because they have that freedom, they should always opt for what they believe is the best possible syntax. I'd say they're obligated to do so. Otherwise, we're stuck with another 10-20+ years of dealing with bad syntax, for no good reason!

If the opportunity for improvement is there, and it's nearly close to free to do so, it should absolutely be taken.


Counterpoint: C itself invented a lot of syntax compared to the contemporary ALGOLs and PL/I. Yet it became immensely popular.


I would dare say it's a different mix of early adopters and pragmatists in your target audience. C/C++ these days is mostly legacy stuff, so if you want to cater to that audience, you deliver small incremental improvements.

Many people that could be bothered to learn Zig syntax, jumped ship to Python/Java/whatever already.


Zig and Python/Java are completely different beasts. One is a low-level systems language on the order of Rust or C. The other two are much higher level, easier to work with languages more attuned to desktop, mobile applications and enterprise work.

I don't think anyone is seriously "jumping ship" from Zig to those.


Low-level programming isn't the holy grail. If a person gets fed up with the limitations of C, they might as well move to a higher-level domain area as well. Especially given the higher pay there.

Those who haven't will have a much lower tolerance for changes. Survivor bias of a kind.


It's not really a choice of low-level or high-level. Neither is better than the other.

It's about choosing the right language for the task at hand. Some work will really be suited (or only be feasible) with a low-level language and vice-versa.

If I'm writing a command-line tool on the order of ripgrep or working on a microcontroller embedded in a dishwasher, I'm not going to go to Java. That would be weird and awkward. And if I'm writing a new 3D AAA-level game, I'm going to jump to something maybe even higher level like UE5 - trying to do all that in Rust would be a PITA.


Legacy stuff like the compilers and runtimes used by the new hip languages; or the webservers, browsers, databases, and libraries needed to run those programs; or the OS, hypervisor, and device drivers needed by the machines those programs run on.


Thankfully most of those are C++ and not C, and some hip languages are bootstrapped.


IIRC even K&R Second Edition acknowledges that the C type syntax is awkward for non-trivial cases.

The "new" style used by Go, Rust, Zig, Typescript etc... is a lot easier to read because it always 'resolves' from left to right.


In practice it only takes a weekend or two to pick up the syntax changes, and the benefits easily outweigh that investment.

Heck, the simple fact that type signatures can be read literally from left to right (instead of using the spiral rule) is enough for me to switch. https://zig.news/toxi/typepointer-cheatsheet-3ne2


Minor differences in syntax are not what makes programming (in general, and when it comes to learning a new language) hard.

As an aside, I really like that I can grep source files for a keyword such as `fn ` and get a good idea of how many functions I am defining, and where they are. This gets especially powerful with multi-cursor editing.


What do you think about C3 then? https://c3-lang.org



