Hacker News | moefh's comments

Shouldn't that be 0xc8c70ff0, since we're talking about a little-endian CPU? (according to this[1] the bytes in memory are F0 0F C7 C8).

On the other hand, I probably wouldn't have recognized the F00F bug mention if you had actually written 0xc8c70ff0.

[1] https://en.wikipedia.org/wiki/Pentium_F00F_bug


It was a popular meme in computer-security-focused groups for a while after it was discovered, since it was an unprivileged DoS. I only remember seeing it talked about with the f00f representation: some people even called it "getting f00f'd" if you managed to trick someone into executing the instruction.

Being "not very inaccurate" is very different from publishing outright fabricated quotes, which is what Ars Technica did and later admitted to: https://arstechnica.com/staff/2026/02/editors-note-retractio...


Had no idea they even did anything. Was waiting for this. Nice to see some consequences and something resembling an attempt at integrity.

Now they just need to do something about all of the other writers! With the exception of the science lady, the security guy, and the British car guy, it's indistinguishable from the kind of PR-copy-paste blogspam 'coverage' you'd see from a place that will never have the reputation Ars used to.


Great stuff.

It wouldn't be surprising if the RP2350 gets officially certified to run at something above the max supported clock at launch (150MHz), though obviously nothing close to 800MHz. That happened to the RP2040[1], which at launch nominally supported 133MHz but now it's up to 200MHz (the SDK still defaults to 125MHz for compatibility, but getting 200MHz is as simple as toggling a config flag[2]).

[1] https://www.tomshardware.com/raspberry-pi/the-raspberry-pi-p...

[2] https://github.com/raspberrypi/pico-sdk/releases/tag/2.1.1


The 300MHz, 400MHz, and 500MHz points requiring only 1.1V, 1.3V, and 1.5V, with only the last one getting slightly above body temperature even with no cooling, seem like something that maybe shouldn't be "officially" supported, but could at least be mentioned in an official blog post or the docs. Getting 3x+ the performance with some config changes is noteworthy. It would be interesting to run an experiment to see whether there's any measurable degradation of stability or increased likelihood of failure at those settings, compared to a stock unit running the same workload for the same time.


All of their reliability testing and validation happens at the lower voltages and speeds. I doubt they'd include anything in the official docs lest they be accused of officially endorsing something that might later turn out to reduce longevity.


Yes. A number is transcendental if it's not the root of any nonzero polynomial with integer coefficients; that's completely independent of how you represent it. For example, sqrt(2) is algebraic because it's a root of x^2 - 2, whether you write it in decimal, binary, or any other base.


We don't know that. We don't even know if there's selection bias.

The article says the research was "focusing on 246 deceased drivers who were tested for THC", and that the test usually happens when autopsies are performed. It doesn't say whether autopsies are performed for all driver deaths, and it also doesn't say what exactly "usually" means.

If (for example) autopsy only happens when the driver is suspected of drug use, then there's a clear selection bias.

Note that this doesn't mean the study is useless: they were able to see that the legalization of cannabis didn't have an impact on recreational use.


> The fact that the correct type signature, a pointer to fixed-size array, exists and that you can create a struct containing a fixed-size array member and pass that in by value completely invalidates any possible argument for having special semantics for fixed-size array parameters.

That's not entirely accurate: "fixed-size" array parameters (unlike pointers to arrays or arrays in structs) actually say that the array must be at least that size, not exactly that size, which makes them way more flexible (e.g. you don't need a buffer of an exact size, it can be larger). The examples from the article are neat but fairly specific because cryptographic functions always work with pre-defined array sizes, unlike most algorithms.

Incidentally, that was one of the main complaints about Pascal back in the day (see section 2.1 of [1]): it originally had only fixed-size arrays and strings, with no way for a function to accept a "generic array" or a "generic string" with size unknown at compile time.

[1] https://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pas...


It was always considered bad not (just) because it's ugly, but because it hides potential problems and adds no safety at all: a `[static N]` parameter tells the compiler that the parameter will never be NULL, but the function can still be called with a NULL pointer anyway.

That's the current state of both gcc and clang: they will both happily, without warnings, pass a NULL pointer to a function with a `[static N]` parameter, and then REMOVE ANY NULL CHECK from the function, because the argument can't possibly be NULL according to the function signature, so the check is obviously redundant.

See the example in [1]: note that in the assembly of `f1` the NULL check is removed, while it's present in the "unsafe" `f2`, making it actually safer.

Also note that gcc will at least tell you that the check in `f1()` is "useless" (yet no warning about `g()` calling it with a pointer that could be NULL), while clang sees nothing wrong at all.

[1] https://godbolt.org/z/ba6rxc8W5


Interesting, I wasn't aware of that and thought the compiler would at least throw up a warning if it had seen that function prototype.


It's not intuitive, although it arguably conforms to the general C philosophy of not getting in the way unless the code has no chance of being right.

For example, both compilers do complain if you try to pass a literal NULL to `f1` (because that can't possibly be right), the same way they warn about division by a literal zero but give no warnings about dividing by a number that is not known to be nonzero.


Right, so if the value is known at compile time it will flag the error but if it only appears at runtime it will happily consume the null and wreak whatever havoc that will lead to further down the line. Ok, thank you for pointing this out, I must have held that misconception for a really long time.


Note that the point of [static N] and [N] is to enforce type safety for "internal code". Any external ABI facing code should not use it and arguably there should be a lint/warning for its usage across an untrusted interface.

Inside of a project that's all compiled together however it tends to work as expected. It's just that you must make sure your nullable pointers are being checked (which of course one can enforce with annotations in C).

TLDR: Explicit non-null pointers work just fine but you shouldn't be using them on external interfaces and if you are using them in general you should be annotating and/or explicitly checking your nullable pointers as soon as they cross your external interfaces.


Wow, that’s crazy. Does anyone have any context on why they didn’t fix this by either disallowing NULL, or not treating the pointer as non-nullable? I’m assuming there is code that was expecting this not to error, but the combination really seems like a bug not just a sharp edge.


Treating the pointer as non-nullable is precisely the point of the feature, though. By letting the compiler know that there are at least N elements there, it can do things like move that read around and even prefetch if that makes the most sense.


Indeed, at a minimum you should be able to enforce that check using a compiler flag.


You can add that check using -fsanitize=null (and you may want to turn the diagnostic into a run-time trap)


> It probably shouldn't do that if you create a dynamic library that needs a symbol table but for an ELF binary it could, no?

It can't do that because the program might load a dynamic library that depends on the function (it's perfectly OK for a `.so` to depend on a function from the main executable, for example).

That's one of the reasons why a very cheap optimization is to always use `static` for functions when you can. You're telling the compiler that the function doesn't need to be visible outside the current compilation unit, so the compiler is free to even inline it completely and never produce an actual callable function, if appropriate.


Sadly most C++ projects are organized in a way that hampers static functions. To achieve incremental builds, stuff is split into separate source files that are compiled and optimized separately, and only at the final step linked, which requires symbols of course.

I get it though, because carefully structuring your #includes to get a single translation unit is messy, and compile times get too long.


That’s where link-time optimization enters the picture. It’s expensive but tolerable for production builds of small projects and feasible for mid-sized ones.


That's one major reason why I don't like C++. I think the concept of header and implementation files is fine, but idiomatic C++ code basically makes it broken. Surely a class should go into the implementation file? (Internal) types belong in the implementation; what belongs in headers are interfaces and function signatures. A class is a type, so it doesn't belong in a header file.


[[gnu::visibility(hidden)]] (or the equivalent for your compiler), might help.


> It can't do that because the program might load a dynamic library that depends on the function

That makes perfect sense, thank you!

And I just realized why I was mistaken. I am using fasm with `format ELF64 executable` to create an ELF file. Looking at it with a hex editor, it has no sections or symbol table because it creates a completely stripped binary.

Learned something :)


> Special Relativity (non-accelerating frames of reference, i.e. moving at a constant speed)

Sorry, but this is a pet peeve of mine: special relativity works perfectly well in accelerating frames of reference, as long as the spacetime remains flat (a Minkowski space[1]), for example when any curvature caused by gravity is small enough that you can ignore it.

[1] https://en.wikipedia.org/wiki/Minkowski_space
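The textbook example is hyperbolic motion: an observer with constant proper acceleration a in flat spacetime has the worldline

```latex
t(\tau) = \frac{c}{a}\,\sinh\!\left(\frac{a\tau}{c}\right), \qquad
x(\tau) = \frac{c^{2}}{a}\,\cosh\!\left(\frac{a\tau}{c}\right),
```

derived entirely within special relativity, no general relativity required.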


That's not great context: China and India have huge populations, so it's expected that they'd be at the top.

Better context can be found here[1] (countries by emissions per capita). It's still not great because it shows a lot of small countries at the top. For example: Palau is first, but it has a population of a few thousand people, so its emissions are a rounding error compared to other countries.

[1] https://en.wikipedia.org/wiki/List_of_countries_by_carbon_di...


Per capita isn't the useful metric in this regard for the reason Palau illustrates. The climate cares about volume.

Per capita emissions is a way to assign relative sin by those who feel guilty about living large.

Bill Gates today, "This is a chance to refocus on the metric that should count even more than emissions and temperature change: improving lives. Our chief goal should be to prevent suffering, particularly for those in the toughest conditions who live in the world’s poorest countries. The biggest problems are poverty and disease, just as they always have been. Understanding this will let us focus our limited resources on interventions that will have the greatest impact for the most vulnerable people.”


Why? I would expect China to be at the top since it's the #1 manufacturing country. But India is behind Germany, at #5.

How about GDP per unit of emissions? That would make China way higher than the US.

https://ourworldindata.org/grapher/co2-intensity

