Looking at the complexity of the programs that can be implemented in shaders (e.g. https://dev.epicgames.com/documentation/en-us/unreal-engine/...), I feel it's unreasonable, bordering on disingenuous, to suggest that the GPU pipeline isn't capable enough to handle those workloads, or to produce pixel-perfect output.
Be really specific.
What exactly is it that you can't do in a shader, that you can do in a CPU based sandbox, better and faster?
(There are things, sure, like IO, networking, shared memory but I'm struggling to see why you would want any of them in this context)
I'll accept the answer 'well, maybe you want to render fonts on a toaster with no GPU'; sure. But 'having a GPU isn't good enough for you'? Yeah... nah, I'm not buying that.
Vector graphics are really hard to do efficiently on a GPU. Because the data is stored as individual curve segments, the coverage problem is difficult to parallelize; it's equivalent to a global parse. The best approaches all do some form of parsing of the curve data on the CPU, then either rasterize fully on the GPU or put the data into a structure the GPU can chew on.
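To make the "structure the GPU can chew on" idea concrete, here's a minimal sketch of the CPU-side preprocessing step (my own simplified illustration, not any particular renderer's code): segments get binned into horizontal tiles so that a GPU thread working on one tile only has to walk that tile's segment list instead of the entire path.

```python
# Simplified illustration (assumed structure, not Slug's actual format):
# bin each line segment into the horizontal bands (tiles) it overlaps.
# A real implementation would bin flattened or analytic curve segments.
def bin_segments(segments, tile_h, n_tiles):
    tiles = [[] for _ in range(n_tiles)]
    for seg in segments:
        (x0, y0), (x1, y1) = seg
        # Range of tile rows this segment's y-extent touches
        lo = max(0, int(min(y0, y1) // tile_h))
        hi = min(n_tiles - 1, int(max(y0, y1) // tile_h))
        for t in range(lo, hi + 1):
            tiles[t].append(seg)
    return tiles
```

The point of the structure is to turn the global problem into many small local ones the GPU is good at.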
But again, this has nothing to do with HarfBuzz or wasm.
What exactly do you mean by 'global parse'? It's very usual, I think, when operating on data stored in files, to parse it into in-memory structures before operating on it. But it feels like you're talking about something specific to vector rendering.
Slug builds its acceleration structures ahead of time. The structures are overfit to the algorithm in a way that a general format like TTF shouldn't be, but which is economical for video games. That doesn't seem like an interesting concern, and nothing about it is specific to the GPU.
I'm referring to needing to traverse all of the path's segments to determine the winding number for an individual pixel. You can't solve this problem locally; you need global knowledge. The easiest way to get it is to build an acceleration structure that contains the global knowledge (what Slug does), but you can also propagate the global knowledge across the image (Hoppe's RAVG does this).
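Here's a minimal sketch of what I mean (my own illustration, not Slug's or RAVG's actual code): the winding number at a point depends on every segment of the path, so a per-pixel shader with no acceleration structure would have to loop over the whole path for each pixel.

```python
# Winding number of point (px, py) against a closed path given as line
# segments ((x0, y0), (x1, y1)); curves would be flattened in practice.
# Note the loop over *all* segments -- coverage is not a local decision.
def winding_number(px, py, segments):
    winding = 0
    for (x0, y0), (x1, y1) in segments:
        if (y0 <= py) != (y1 <= py):  # segment crosses the scanline at py
            # x coordinate where the segment crosses y = py
            t = (py - y0) / (y1 - y0)
            x_cross = x0 + t * (x1 - x0)
            if x_cross > px:  # count crossings to the right of the point
                winding += 1 if y1 > y0 else -1
    return winding  # nonzero => inside, under the nonzero fill rule
```

Acceleration structures don't change the math, they just pre-sort the segments so each pixel only has to consider the handful that can affect it.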
It's more about the nature of the problem, not that you can't do it in shaders. After all, I think you can do pretty much anything in shaders if you try hard enough.
Even if you already have a GPU renderer for glyphs and other vector data, you still need to know where to position the glyphs. Since that depends heavily on the text itself and on your application state (which lives on the CPU), it would be pretty difficult to do directly on the GPU. The shader you'd want would have to emit positions, and the code that computes them won't port easily to the GPU. Working with text is not really what shaders are meant for.
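To show why this step is CPU-shaped, here's a hypothetical, heavily simplified "shaping" loop. Real shaping (what HarfBuzz does) also handles kerning, ligatures, bidi, and so on; the names and default advance here are made up for illustration. The point is that each glyph's position depends on the pen position left by all the glyphs before it: a sequential, data-dependent computation, not a per-pixel one.

```python
# Hypothetical sketch, not HarfBuzz's API: accumulate a pen position
# across the string to place each glyph. The result (glyph + x offset)
# is what you'd upload to the GPU as per-instance data.
def position_glyphs(text, advances):
    pen_x = 0.0
    positions = []
    for ch in text:
        positions.append((ch, pen_x))
        pen_x += advances.get(ch, 8.0)  # assumed fallback advance width
    return positions
```

The GPU then just stamps quads at those positions; the interesting decisions were all made on the CPU.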