Hacker Newsnew | past | comments | ask | show | jobs | submit | corysama's commentslogin

Even in code where performance is a serious concern, you don't need to feel guilty about using a data structure that is an array of pointers to 4 KB chunks, or a tree of such chunks. A 4 KB chunk is contiguous enough that a completely flat array probably won't be significantly faster.
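In case it helps to picture it, here's a minimal sketch of such a chunked array in C. All the names (`chunked_t` and friends) are made up for illustration:

```c
#include <stdlib.h>

/* Illustrative sketch: a growable array stored as an array of pointers
 * to 4 KB chunks. Appends never move existing elements, so pointers
 * into the structure stay valid; each lookup costs one extra hop. */
enum { CHUNK_BYTES = 4096 };
#define CHUNK_ELEMS (CHUNK_BYTES / sizeof(int))

typedef struct {
    int  **chunks;   /* array of pointers to 4 KB blocks */
    size_t nchunks;
    size_t len;      /* number of ints stored */
} chunked_t;

static int *chunked_at(chunked_t *a, size_t i) {
    return &a->chunks[i / CHUNK_ELEMS][i % CHUNK_ELEMS];
}

static void chunked_push(chunked_t *a, int v) {
    if (a->len == a->nchunks * CHUNK_ELEMS) {          /* last chunk full? */
        a->chunks = realloc(a->chunks,
                            (a->nchunks + 1) * sizeof *a->chunks);
        a->chunks[a->nchunks++] = malloc(CHUNK_BYTES); /* add a 4 KB chunk */
    }
    *chunked_at(a, a->len++) = v;
}
```

Because each index computation is a divide/modulo by a power-of-two chunk size, the compiler turns it into a shift and a mask, which is the "linear enough" part in practice.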

I’d bet the DS is the most advanced game console where it is still possible for a person to productively program it entirely via the bare metal memory map. As in: using an “SDK” that’s just a C header full of struct and array definitions at magic fixed addresses and no functions at all. Set values and the hardware does stuff.
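For flavor, here's a hedged sketch of what such a header-only "SDK" looks like. The addresses shown are the GBA's well-documented ones (per GBATEK); `vu16` and `obj_attr` are illustrative names:

```c
#include <stdint.h>

/* Sketch of a "header-only SDK": no functions, just names for magic
 * fixed addresses. Addresses are the GBA's well-known register map
 * (per GBATEK); writing to them makes the hardware do things. */
typedef volatile uint16_t vu16;

#define REG_DISPCNT  (*(vu16 *)0x04000000)  /* display control */
#define REG_KEYINPUT (*(vu16 *)0x04000130)  /* button state    */
#define VRAM         ((vu16 *)0x06000000)   /* video memory    */

typedef struct {
    uint16_t attr0, attr1, attr2, pad;      /* one OAM sprite entry */
} obj_attr;

#define OAM ((volatile obj_attr *)0x07000000)
```

Set `REG_DISPCNT`, poke pixels into `VRAM`, read `REG_KEYINPUT`, and the hardware does stuff; no function calls anywhere.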

I'd say the GBA is the sweet spot for this.

The DS has you dealing with two cores that each need firmware and have to communicate to do anything useful, a cartridge protocol to fetch any extra code or assets that won't all fit into RAM at runtime, instruction and data caches, an MMU, ... And that's without mentioning some of the more complex peripherals like the touch screen and wifi.

All official games used the same firmware for one of the cores, a copy of which is embedded into every single cartridge ROM. There are some homebrew firmwares included in the respective SDKs, but they aren't well documented for standalone use.

Granted, none of the above is completely impossible. But consider how much code you'd need for a simple demo (button input, a sprite moving across the screen): the DS requires a nontrivial amount of code and knowledge to get started without an SDK, especially for a beginner. Meanwhile, you can do something similar in less than 100 lines of ASM/C for the GBA.
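To make the GBA side concrete, here's a hedged sketch of the per-frame logic of such a demo. On hardware, `fb` would point at VRAM (`0x06000000` in mode 3, 240x160 with one 16-bit pixel per entry) and `keyinput` would be read from REG_KEYINPUT; they're parameters here so the sketch is self-contained, and `sprite`/`step` are made-up names:

```c
#include <stdint.h>

/* Illustrative per-frame logic of a "sprite moving across the screen"
 * GBA demo. GBA buttons are active-low: a cleared bit means pressed. */
enum { W = 240, H = 160 };                       /* mode 3 resolution */
enum { KEY_RIGHT = 1 << 4, KEY_LEFT = 1 << 5 };  /* GBA key bits      */

typedef struct { int x, y; } sprite;

static void step(sprite *s, uint16_t keyinput, uint16_t *fb) {
    if (!(keyinput & KEY_RIGHT) && s->x < W - 1) s->x++;
    if (!(keyinput & KEY_LEFT)  && s->x > 0)     s->x--;
    fb[s->y * W + s->x] = 0x7FFF;   /* plot one white pixel (BGR555) */
}
```

The real demo adds a vblank wait and one write to the display-control register up front, but that's essentially the whole loop.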


Agreed. I spent a lot of time programming the GBA in the early 2000s (back when the state of the art devkit was a flash cartridge writer with parallel cable...) and I consider it the last "grounded" console that Nintendo made, where you immediately and directly get to touch hardware right off the bat, without any gyrations. After having worked with the SNES in the 90s the GBA was a very familiar and pleasant platform to experience, in many ways similar to and built upon the SNES' foundation.

I've never coded for SNES, but the GBA having access to a mainline, modern C compiler is a massive buff. Also, emulators for it have always been available on practically any computer, console, and mobile phone, and there are many so-called "emulation handhelds" that bring its form factor (and similar ones) to market. If you really want an upgraded version of the OG experience, many upgrade kits for the handheld exist as well.

None of this fixes the audio, but it sure gets damn close.


Just curious what you mean by "fixing the audio"? In GBA emulation or on the hardware?

I'm aware that if you need/want PCM audio, there's going to be mixing, probably with a software library, and significant CPU use for it. Is emulated GBA audio buggy?

One of my first gigs was Game Boy and Game Gear programming. I know the GBA allows DMG audio compatibility and, with all its constraints, well it sure does keep things simple. And emulation is reliable AFAIK.


I see what happened: when I wrote that, I was replying to a different comment, one that did mention the GBA audio, but I somehow ended up replying to this one.

This comment explains it better than I could: https://news.ycombinator.com/item?id=47708201


The DS, or more specifically its ARM946E-S, has an MPU, not an MMU (you're confusing it with the 3DS's ARM11). Not that it makes much of a difference anyway; you configure either one once or twice and then leave it be.

Honestly, I think the GBA is more popular than the DS for that kind of thing because it only has one screen (much less awkward to emulate), has high-quality emulators that are mostly free of bugs (mGBA most notably), and has a better aspect ratio than the DS anyway (3:2 upscales really well on 16:10 devices). That is to say, it's much easier to emulate GBA software on a phone or a Steam Deck than it is to emulate DS software.


gah, you're right, I was thinking of memory protection (as in, marking the relevant regions as read-write and read-execute) when I wrote MMU.

It's of course optional, and you can ignore it for trivial examples, but most games and SDKs will tweak it all the time when loading additional code modules from the cartridge.

It's just another way in which the DS is more complex to use properly without an SDK to handle it for you; there's just more to think about. Compare that to the GBA, which lacks all of this and keeps the entire cartridge mapped into memory at all times.


I agree, the GBA is a pleasure to work with. It's just a shame that the poor quality of the (stock) screens, low resolution, and lousy sound hardware make it feel like such a downgrade from the otherwise gnarlier and technically inferior SNES.

There's a pretty big renaissance of GBA clones out there right now that bring better screens and speakers to the platform. And of course, with emulators you can get all the modern hardware affordances for the platform.

The screen can be improved, but the resolution and sound system can't be.

The issue with the sound isn't just the speakers - you could always use headphones, after all. The GBA only has the original GB's primitive PSG (two square waves, a noise channel, and a short programmable 4-bit waveform) plus two 8-bit PCM channels. 8-bit PCM samples are unavoidably noisy with lots of aliasing, and all sound mixing, sequencing, envelopes, etc. for those channels needs to be done in software, which tends to introduce performance and battery life constraints on quality, channel count, effects, and sample rate.
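A hedged sketch of what that software mixing loop looks like (illustrative names; a real mixer also handles volume, pitch resampling, envelopes, and so on, which only makes the CPU cost worse):

```c
#include <stdint.h>

/* Illustrative software mixer: the CPU sums every active channel for
 * every output sample, clips, and quantizes back down to the 8 bits
 * the GBA's PCM FIFOs accept. The SNES DSP does the equivalent work
 * in hardware. */
static void mix(const int8_t *const *chans, int nchans,
                int8_t *out, int nsamples) {
    for (int i = 0; i < nsamples; i++) {
        int acc = 0;
        for (int c = 0; c < nchans; c++)
            acc += chans[c][i];      /* per-sample, per-channel work */
        if (acc > 127)  acc = 127;   /* clip */
        if (acc < -128) acc = -128;
        out[i] = (int8_t)acc;        /* back to noisy 8-bit */
    }
}
```

Every extra channel or bump in sample rate scales that inner loop linearly, which is exactly the CPU/battery trade-off against quality described above.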

The SNES, by comparison, uses high-quality 16-bit 32 kHz samples, and everywhere the GBA forces devs to cut corners, the SNES does the work in hardware: eight separate channels, no software mixing needed, built-in envelopes and delay.

Compare the SNES FFVI soundtrack to the GBA version; the difference is dramatic. Frankly, using high quality speakers or headphones just makes the quality difference more obvious.


There are also drop-in replacements for the unlit screens of genuine units.

In addition to the screen and the sound, don't forget having just 2 face buttons after 4 buttons had become standard and almost mandatory. Many ports suffer mightily in the control department.

Probably? Everything else onward relies on libraries...

Though there were some fits and starts there. The N64 for example is, from what I've heard, heavily library dependent and absolutely brutal to program bare metal (GPU "microcode" that was almost like programmable shaders v0.1); even the GameCube is a significant improvement for that kind of thing.


I think the 3DS is also reasonably in the sweet spot.

Check out this project, fully written in bare-metal C:

https://github.com/profi200/open_agb_firm


It will always be the Leeloo Dallas Memory Palace to me.

Agreed. I’ve done trivial obfuscation for games. In my observation, if you make it trivial to hack your game, huge numbers of people will trivially hack it. If you make it even slightly non-trivial, those numbers decrease exponentially. The more you waste their time and put up hurdles, the lower the number of successful hackers goes.

The goal is not perfect security in all situations for all products. The goal is to make the effort required for your particular product excessive compared to the payoff.
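As one example of "slightly non-trivial": a classic cheap trick is keeping gameplay values XOR-masked in memory, so a naive memory-scan for the number shown on screen finds nothing. A hedged sketch (all names made up; trivial to defeat if you know the trick, which is the point — it filters out the casual majority):

```c
#include <stdint.h>

/* Illustrative trivial obfuscation: store a value XOR'd with a mask
 * so the plain number never sits in memory for a scanner to find. */
typedef struct { uint32_t masked, mask; } obf_u32;

static void obf_set(obf_u32 *v, uint32_t plain, uint32_t mask) {
    v->mask   = mask;
    v->masked = plain ^ mask;   /* plain value never stored directly */
}

static uint32_t obf_get(const obf_u32 *v) {
    return v->masked ^ v->mask;
}
```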


At the bottom of that page is a list titled "Here are some awesome things people have built using Ohm:"

Dunno if the link was changed or something but I had to go to the main page to see the list at the bottom https://ohmjs.org/. Hope that saves someone some searching!

You can also check the examples folder: https://github.com/ohmjs/ohm/tree/main/examples

Some ten years ago I used an earlier version of https://unity.com/how-to/analyze-memory-usage-memory-profili... to accidentally discover a memory leak that was due to some 3rd party code with a lambda that captured an ancient, archived version of Microsoft's C# vector, which had a bug. It would have been impossible for me to find that through inspection, at multiple layers. But, with a functional tool, it was obvious.

Ten years before that I worked on a bespoke commercial game engine that had its own memory tracker. First thing we did with it was fire up a demo program, attach the memory analyzer to it, then attach a second instance of the memory analyzer to the first one and found a memory error in the memory analyzer.
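The core of such a tracker can be sketched in a few lines. This is an illustrative toy (real engine trackers also record call sites, sizes, and tags per allocation, which is what makes leaks obvious):

```c
#include <stdlib.h>

/* Illustrative toy memory tracker: route allocations through wrappers
 * that keep a live-allocation count, then check it's zero at shutdown.
 * A nonzero count at exit means something leaked. */
static long g_live_allocs = 0;

static void *tracked_malloc(size_t n) {
    void *p = malloc(n);
    if (p) g_live_allocs++;
    return p;
}

static void tracked_free(void *p) {
    if (p) { g_live_allocs--; free(p); }
}
```

In a real engine these wrappers hide behind the engine's allocator interface, so every system's allocations flow through them automatically.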

Now that I'm out of gamedev, I feel like I'm working completely blind. People barely acknowledge the existence of debuggers. I don't know how y'all get anything to work.

A quick google for open-source C++ solutions turns up https://github.com/RudjiGames/MTuner which happens to have been updated today. From a game developer, of course XD



According to that chart 2021 was anomalously low and it has been linearly returning to normal for the past four years.

AFAICT, the general populace is anxious about AI. So, the news knows they can get clicks with “You are right to be afraid. AI bad.” Meanwhile, CEOs know they can get stock boosts by saying “We are so AI we don’t need expenses. Infinite ROI!”

Put together we’re getting a ton of scary reporting on what looks like a quite normal business cycle (at least as far as layoffs go). And, everyone being afraid to hire is the only thing actually making it self-fulfilling.


I wouldn’t call the massive levels of investment by both private equity and municipal/state governments “business as usual.” The sums being thrown down and/or promised are staggering. People/groups that lose are going to lose big.


I give it a month before someone launches a TUI-TUI.


I believe what they are bragging about is not the translated proofs, but the process of doing the translation.

> produced by frontier AI with ~2 person-days of human effort versus an estimated ~2.75 person-years manually (a 350x speed-up). We achieve this through task-level specification generators...

