In the level data, the start/goal tiles (approx 10x10 blocks of solid ground under the player and goal) and the slopes aren't represented by their tile offsets, whilst all other "ground" tiles are. Instead, for slopes you're only told the start and end coordinates, and they often overlap.
So to render the slopes correctly I had to work out all the rules for which tiles were allowed next to each other, and resolve some ambiguities -- I figured out that shallow slopes take precedence over steep ones. Eventually I cracked it, but it took a week or so of iteration.
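To make the precedence rule concrete, it boils down to something like the sketch below (the data shapes and names are illustrative, not the actual level format):

```ts
// Illustrative shapes -- the real level format differs; the point is the
// precedence rule: when two slopes claim the same tile column, the
// shallower gradient wins.
type Slope = { startX: number; startY: number; endX: number; endY: number };

// Gradient as rise over run; a "shallow" slope has a smaller absolute gradient.
const gradient = (s: Slope): number =>
  Math.abs((s.endY - s.startY) / (s.endX - s.startX));

// Resolve overlaps: for each tile column, keep the shallowest slope covering it.
function resolveSlopes(slopes: Slope[]): Map<number, Slope> {
  const owner = new Map<number, Slope>();
  for (const s of slopes) {
    const from = Math.min(s.startX, s.endX);
    const to = Math.max(s.startX, s.endX);
    for (let x = from; x <= to; x++) {
      const current = owner.get(x);
      if (!current || gradient(s) < gradient(current)) {
        owner.set(x, s);
      }
    }
  }
  return owner;
}
```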
I have a few parallel AI-authored side projects on the go that have quite different shapes, and I feel quite differently about each:
1. A survival horde game (like Vampire Survivors and Brotato). At the moment it's very primitive, very derivative (no new ideas) and not much fun. I have no sense of pride over it, but it is much further along than it would be if I'd been writing it from scratch. I expect once I invest in the fun side (gameplay innovations, graphics) I'll feel a greater sense of attachment, and I plan to do all the art assets myself.
2. A macOS web app for managing dev env processes. It works, but it's ugly. I don't have confidence in AI making a remotely presentable UI, so I'll be doing that part myself.
3. A useful little utility library. The kind of thing that pre-LLM would've been too far out of my expertise to be motivated to try making. I'm steering the design of it quite heavily, but haven't written any code. It seems like it's already capable of doing very useful things, and I oddly feel quite proud of it. But I have a weird sense of unease in that I _think_ it's good, but I don't _know_ it's good.
I think the main thing I'm learning is to make sure there's always something of yourself in whatever you produce with the help of AI, especially if you want to feel a sense of accomplishment. And make sure you have a good testing philosophy if you're planning to be hands-off with the code itself.
I did a slightly less ambitious prototype a few weeks ago where I added lazy loading of GCS files into the just-bash file system, along with lots of other on-demand files. It was a lot of fun.
just-bash comes with Python installed, so in a way that's what this has done. I've used this for some prototypes with AI tools (via bash-tool); I can't really productionise it in our current setup, but it worked very well and was undeniably pretty cool.
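The rough shape of the lazy-loading part, with an entirely made-up interface -- just-bash's real virtual-FS hooks look different, and registerLazyFile/mountGcsObject are hypothetical names:

```ts
// Hypothetical virtual-FS hook -- not just-bash's actual API.
interface VirtualFS {
  registerLazyFile(path: string, load: () => Promise<Uint8Array>): void;
}

// Register a GCS object so it's only fetched the first time something reads it.
// Uses the public JSON API media endpoint; real code would attach auth.
function mountGcsObject(fs: VirtualFS, bucket: string, object: string, path: string): void {
  let cached: Uint8Array | undefined;
  fs.registerLazyFile(path, async () => {
    if (!cached) {
      const url = `https://storage.googleapis.com/storage/v1/b/${bucket}/o/${encodeURIComponent(object)}?alt=media`;
      const res = await fetch(url);
      cached = new Uint8Array(await res.arrayBuffer());
    }
    return cached;
  });
}
```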
Yeah, whilst Git was more popular than Mercurial, I still think Mercurial would have won if Bitbucket had a better UI.
It's interesting to me that the only thing that made me vastly prefer GitHub over Bitbucket is that GitHub prioritised showing the README over showing the source tree. Such a little thing, but it made all the difference.
Ambiguity increasingly feels like the crux of estimation. By that I mean the extent to which you have a clear idea of what needs to be done before you start the work.
I do a lot of fussy UI finesse work, which on the surface looks like small changes, so people are tempted to give it small estimates. But it often takes a while, because you're really learning what needs to be done as you're doing it.
On the other end of the spectrum, I've seen tickets that are very large in terms of the magnitude of the change, but very well specified and understood — so they don't actually take that long (the biggest bottleneck seems to be the need to break the work down into reviewable units).
In the LLM age, I think the ambiguity angle is going to be much more apparent, as the raw size of the change becomes even less of an input into how long it takes.
I mean, the use of GraphQL for third-party APIs has always been questionable wisdom. I'm about as big a GraphQL fan as it gets, but I've always come down on the side of being very skeptical that it's suitable for anything beyond its primary use case — serving the needs of first-party UI clients.
It is still a major problem, yes. Interestingly, if you go back to the talks that introduced GraphQL, much of the motivation wasn't about solving overfetching (they kinda assumed you were already doing that, since it was the peak of the mobile app wave), but about solving the organisational and technical issues with existing solutions.
As someone who’s used GraphQL since mid-2015, if you haven’t used GraphQL with Relay you probably haven’t experienced GraphQL in a way that truly exploits its strengths.
I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.
I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.
The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.
For me it’s really about the component-level experience.
* Relatively fine-grained re-rendering out of the box, because you don't pass the entire query response down the tree. useFragment is akin to a Redux selector (see the sketch after this list)
* Plays nicely with Suspense and the @defer directive; deferring a component subtree is very intuitive
* Mutation updaters defined inline rather than in centralised config. This ended up being more important than I expected: having lived the reality of global cache config with our existing urql setup at my current job, I'm convinced the Relay approach is better.
* Useful helpers for pagination, refetchable fragments, etc
* No massive up-front representation of the entire schema needed to make the cache work properly. Each query/fragment has its own codegenned file that contains all the information needed to write to the cache efficiently. But because they’re distributed across the codebase, it plays well with bundle size for individual screens.
* Guardrails against reuse of fragments thanks to the eslint plugin. Fragments are written to define the data contract for individual components or functions, so there's no need to share them around. Our existing urql codebase has a lot of "god fragments" which are incredibly painful to work with.
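To make the useFragment point concrete, here's roughly what component-level fragment colocation looks like (the component and schema names are made up):

```tsx
import { graphql, useFragment } from 'react-relay';
// Relay's compiler generates this type from the fragment below.
import type { UserCard_user$key } from './__generated__/UserCard_user.graphql';

// The fragment is the data contract for this one component.
const UserCardFragment = graphql`
  fragment UserCard_user on User {
    name
    avatarUrl
  }
`;

// The parent passes an opaque fragment ref rather than raw query data,
// so this component only re-renders when name/avatarUrl actually change.
export function UserCard({ user }: { user: UserCard_user$key }) {
  const data = useFragment(UserCardFragment, user);
  return (
    <div>
      <img src={data.avatarUrl ?? undefined} alt="" />
      <span>{data.name}</span>
    </div>
  );
}
```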
Recent versions of Apollo have some of these things, but only Relay has the full suite. It’s really about trying to get the exact data a component needs with as little performance overhead as possible. It’s not perfect — it has some quite esoteric advanced parts and the documentation still sucks, but I haven’t yet found anything better.
I did. I really wanted to like it. I think it broke due to something I was doing with fragments or splitting up code in my monorepo. I may give it a shot again, from first principles it is a better approach.