Repo author here, let me promote jq a bit – it's much more than the simple command-line JSON processor you know it as:
* It is a generator-based language, which means you operate on streams of values rather than single values. It takes some time to get used to, but you'd never want to go back to the traditional model, at least for data processing (see the first example after this list). See the 'Generators and iterators' section of the man page: https://github.com/stedolan/jq/blob/cff5336ec71b6fee396a95bb...
* Designed for the CLI, it makes it easy – and even pushes you – to express your program as a single pipeline. You rarely need variables, functions or any control structures. And pipelines are great to build iteratively, debug and compose (see https://jqplay.org, https://jqterm.com)
* The core language is small but powerful, with features like slicing, destructuring, complex assignments and error handling (see the second example after this list). And you are already an expert in its (immutable) data structures – it is just JSON
* Batteries included in the stdlib – regex, path operations, math functions from the C library, dates, algorithms. Modules are supported, but I did not need any dependencies to solve Advent of Code.
* jq is ubiquitous. Often pre-installed, a tiny binary, no dependencies, basically a single version (ok, awk is better on this front – anything else?)
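To give a taste of the generator model, here is a toy one-liner (the numbers are arbitrary): `range(3)` emits a stream of three values, and the filter after the pipe runs once per value:

$ jq -n 'range(3) | . * 2'
0
2
4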
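And a couple of illustrative one-liners for the core-language features (the values are made up): array slicing, and destructuring an array into variables:

$ jq -nc '[10, 20, 30, 40][1:3]'
[20,30]
$ jq -nc '{"point": [3, 4]} | .point as [$x, $y] | {$x, $y}'
{"x":3,"y":4}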
Being the go-to tool for JSON is sort of a double-edged sword – people just don't look past that. But nowadays JSON is the format for data; everything is convertible to it. And you can feed plain text into jq using the --raw-input flag, e.g.:
jq -nR '[inputs]' /etc/hosts
The main limitation is that there is no I/O whatsoever – you can't read from a file/network/pipe/the system from within the program. You need to provide all inputs upfront.
Hope that explains why I prefer jq – for any programming really, not just AoC.
The one thing I dislike about jq is that it treats many-things and one-thing the same way. This PL design choice is IMO fraught with peril. It's the reason why jq can't have a "slurp" filter. Treating many-things the same as one-thing means there is no first-class representation of a JSON stream, which in turn means there are no functions that can operate on a JSON stream as a whole.
If jq had first-class streams, most code would look quite pedestrian. Everything that operates on many values would need some kind of `map`, there would also be `reduce` as a normal filter, as well as slurp – and the overall weirdness of the programs would decrease significantly: the code would look exactly like normal functional-programming pipelines.
It's a 'zero-class' representation – everything is a JSON stream. You can slurp any stream into an array by wrapping it in the `[]` operator:
$ jq -n '[range(3)]'
[
  0,
  1,
  2
]
Do you mean that you can't 'slurp' a stream from inside the pipeline? It makes sense since parts of the pipeline just process individual values; they are not supposed to have the context of the whole stream. So to slurp you need to 'wrap around'.
I agree this is unconventional, but it is what enables many of the advantages, like the pipeline structure of the program and conciseness (rare need for parameters/variables/functions).
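A toy example of that 'wrap around' (the filter is arbitrary): the inner pipeline still processes one value at a time, and the surrounding `[]` collects its entire output so the rest of the pipeline can see it as a single array:

$ jq -n '[range(10) | select(. % 3 == 0)] | length'
4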
> It makes sense since parts of the pipeline just process individual values; they are not supposed to have the context of the whole stream.
> I agree this is unconventional, but it is what enables many of the advantages, like the pipeline structure of the program and conciseness
I'm not sure I agree with this. There are no operators that work on streams other than `select` since most other operators are ambiguous. This means streams are not first-class values (they are not at the same level as all other values).
It doesn't have to be designed this way: if we differentiated between stream and non-stream values, you could have
'.someattr[] | smap(.otherattr) | sreduce(+)'
where `smap` and `sreduce` are stream equivalents of `map` and `reduce`.
The language wouldn't be as compact, but it would gain significantly in readability and consistency, and its pipelining would continue to work just as well.
It's like designing a language where everything is an array and where every operation implicitly maps over the array. In that case you can't have a "reduce" operation, because it's ambiguous whether you want to run reduce on every individual element or you want to reduce the entire array of elements.
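For what it's worth, jq sidesteps exactly this ambiguity by making reduction special syntax over a generator expression rather than an ordinary filter, e.g.:

$ jq -n 'reduce range(1; 5) as $x (0; . + $x)'
10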
I can kinda make out how it's parsing the input and converting it to a graph. I can't make heads or tails of the path finding, and I don't see anything that really looks like Dijkstra's algorithm.
You would hope so. I saw many people on r/adventofcode using Dijkstra or even A* when every path step costs just 1. I went with BFS because I'm pretty sure a priority queue or heuristic function would only slow those down (see the sketch below).
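For the curious, here is a minimal sketch of that idea in jq – a plain BFS over an adjacency-list object, counting steps with no priority queue. The graph, the node names and the `bfs` helper are all made up for illustration, not taken from the actual solution:

$ jq -n '
  def bfs($from; $to; $graph):
    # $frontier holds the nodes at the current distance, $seen the visited map
    def step($frontier; $seen; $dist):
      if any($frontier[]; . == $to) then $dist
      elif ($frontier | length) == 0 then null   # $to is unreachable
      else ([$frontier[] | $graph[.][]? | select($seen[.] | not)] | unique) as $next
        | step($next; $seen + ([$next[] | {(.): true}] | add // {}); $dist + 1)
      end;
    step([$from]; {($from): true}; 0);
  bfs("a"; "d"; {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []})'
2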
Was that really necessary? The title of the item, to which the above reply was written, was "Solving Advent of Code with jq". So yes, "doing the same thing" here fits fine.