Hacker News | lytigas's comments

I've read a few small overviews of jj. One thing that's off-putting as a git lover is that while git is truly append-only (except refs), jj seems quite "mutable" by comparison.

Say I'm messing around with the commit that introduced a bug, somewhere deep in the history. With git, it's basically impossible to mess up the repo state. Even if I commit, or commit --amend, my downstream refs still point to the old history. This kind of sucks for making stacked PRs (hello git rebase -i --autosquash --update-refs) but gives me a lot of confidence to mess around in a repo.

With jj, it seems like all I would have to do is forget to "jj new" before some mass find+replace, and now my repo is unfixable. How does jj deal with this scenario?


As others have pointed out there's `jj undo` and other tools, but they all rely on the fact that JJ is less mutable than it seems.

Internally, JJ is still backed by an append-only tree of commits. You don't normally get to see these commits directly, but they're there. A change (i.e. the thing you see in `jj log`) is always backed by one or more commits. You can see the latest commit in the log directly (the ID on the left is the change ID, the ID on the right is the commit ID), but you can also look back in history for a single change using `jj evolog`, and you can see all commits using `jj op log`.

This ensures that even if you were to exclusively use `jj edit`, never make a new commit, and keep all your work and history in a single change, you could still track the history of your project (or its "evolution", hence the name evolog). It would be kind of impractical, but it would work.

The only caveat here is that, by default, JJ only creates new snapshots/commits from the working directory whenever you run the CLI. So if you made a large change, didn't run any JJ command at all, then made a second large change, JJ would by default see that as a single change to the working directory. To catch these issues, you can use a file watcher to automatically run JJ whenever any file changes, which typically means that JJ makes much more frequent snapshots and therefore you're less likely to lose work (at the cost of having a file watcher running, and also potentially bloating your repository with lots of tiny snapshots).

Note also that the above is all local. When using the git backend, Jujutsu will only sync one commit for each change when pushing to a remote repository, so the people you're collaborating with will not see all these minor edits and changes, they'll only see the finished, edited history that you've built. But locally, all of those intermediate snapshots exist, which is why Jujutsu should never lose your data.
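To make this concrete, here is a schematic transcript (the change IDs, commit IDs, and descriptions are invented; real output will differ):

```shell
$ jj log            # one line per change: change ID on the left, commit ID on the right
@  qpvuntsm 1a2b3c4d fix off-by-one in parser
◆  kkmpptxz 5e6f7a8b main | earlier work

$ jj evolog -r qpvuntsm    # every snapshot this single change has gone through
@  qpvuntsm 1a2b3c4d fix off-by-one in parser
○  qpvuntsm 9c0d1e2f fix off-by-one in parser
○  qpvuntsm 3f4a5b6c (no description set)

$ jj op log         # every operation, i.e. every repo-wide state you can restore
@  b1c2d3e4 describe commit 1a2b3c4d
○  f5a6b7c8 snapshot working copy
○  d9e0f1a2 add workspace 'default'
```

Note how `jj evolog` shows the same change ID backed by a different commit ID at each snapshot: the change is mutable, but the commits behind it are append-only.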


> you can use a file watcher to automatically run JJ whenever any file changes, which typically means that JJ makes much more frequent snapshots and therefore you're less likely to lose work (at the cost of having a file watcher running, and also potentially bloating your repository with lots of tiny snapshots).

For example? Or should I create my own in C using inotify/kqueue? Is there a library for jj?


See the documentation here: https://jj-vcs.github.io/jj/latest/config/#filesystem-monito...

The default behaviour (i.e. `core.fsmonitor = "watchman"`) is to only use the file watcher as an optimisation: rather than scanning the entire folder every time JJ wants to make a snapshot, the watcher keeps a list of which files have changed, and then when creating a snapshot, JJ only needs to check those files.

However, you can also add `core.watchman.register_snapshot_trigger = true` to the configuration, and this will make it so that every time the watcher sees that a file has changed, it automatically makes the new snapshot.

That said, neither of these is active by default, and neither is necessary. But if you're the sort of person who uses VS Code's "Timeline" view to see exactly how each file you've worked with has changed over time, then you might also appreciate the automatic snapshotting feature.
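Putting the two settings described above together, the config (typically `~/.config/jj/config.toml`) would look something like this sketch, based on the linked docs:

```toml
[core]
# Use Watchman only as an optimisation: instead of rescanning the whole
# working copy before each snapshot, check just the files Watchman
# reports as changed.
fsmonitor = "watchman"

[core.watchman]
# Additionally, trigger a snapshot automatically whenever Watchman sees
# a file change (this is what produces the frequent tiny snapshots).
register_snapshot_trigger = true
```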


It’s a simple config flag.


The first time I tried to prepare a set of commits to push out, I found out the hard way that merges cannot be undone, and I'm not sure which of the commands was doing a merge.


Merges can be undone. Either you can manually remediate by `jj abandon`ing the merge commit so that it's not visible anymore, or you can restore the entire repo state to a previous point in time with `jj undo` or `jj op restore`, or you can do some remediation in between those two extremes.

Off of the top of my head, `jj new` and occasionally `jj rebase` can create merge commits; I don't recall any others.


You can always undo the most recent action using `jj undo`. To undo older actions, the easiest solution is to look for the state you want to get back to in the operations log `jj op log`, and then restore that state directly using `jj op restore <hash of state>`.

You really can undo every action in Jujutsu (and if you can't, that's a bug), but the `undo` mechanism can be a bit surprising - it doesn't behave like a stack, where undoing multiple times will take you further back in history. Instead, undo always undoes the most recent action, and if the most recent action is an undo, then it undoes the undo. This often catches people off-guard - future versions of JJ will show a warning when this happens, and further down the line there's a plan to make undo behave the way people expect.

But if you use `jj op log` and `jj op restore`, you can always get back to any previous state, including undoing merges and other complicated changes.
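A schematic example of that workflow (the operation IDs are invented; in a real repo you'd copy them from your own `jj op log` output):

```shell
$ jj op log
@  9f2e4a1b squash commits into 58c0ab12
○  d41c7e90 new empty commit
○  7b3f5c2d snapshot working copy

$ jj undo                  # undoes only the latest operation (the squash)
$ jj op restore 7b3f5c2d   # or: restore the entire repo to that earlier state
```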


Thanks, that makes sense. The cheat sheets either didn't cover the op log (the reflog analog) or I missed it.

For the record I’ve seen ‘cannot undo a merge’ after jj undo 3-4 times, can’t remember now. I was trying to squash a change into a single commit for a GitHub pr for the first time and couldn’t figure out how to map jj commits into something acceptable, then decided to undo the whole thing and actually managed to overwrite some of my changes in a way I couldn’t find them, fortunately only a few lines of boilerplate.


> With git, it's basically impossible to mess up the repo state.

I must be a wizard because I’ve lost count of the number of times I’ve messed up my repo’s state.

I jest. Kinda. I know that git’s state might technically not be messed up and someone skilled in the ways of git could get me out of my predicament. But the fact that I’m able to easily dig myself into a hole that I don’t know how to get out of is one of git’s biggest issues, in my opinion.


Totally.

That hole is very easily dug with git:

use ctrl-x/ctrl-v to move files around, commit, and boom: you've lost history for those files (file tracking only works in theory, not in real life); let's say you don't notice (very easy to not notice)

commit some more, merge

discover your mistake tons of commits back

good luck fixing that, without digging a bigger hole.

And that's one of 100's of examples in which git just is really really not fun or user friendly.


> use ctrl-x/ctrl-v to move files around, commit, and boom: you've lost history for those files (file tracking only works in theory, not in real life); let's say you don't notice (very easy to not notice)

This is one of the things git excels at: you didn't lose your history, because that's not how git handles renames. Git might be the only version control system that can actually handle your case (renaming files without using a special command) - it looks for file similarity between a deleted file and an added file in the same commit, with various flags to make it look in broader places.

"git mv file1 file2" is almost identical to "mv file1 file2" + "git add file1 file2" (it also handles unstaged changes instead of staging them)
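This is easy to see in a throwaway repo (requires git; the temp directory is just for the demo). The plain `mv` leaves git to infer the rename from content similarity when you ask for it with `--follow`:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
printf 'alpha\nbeta\ngamma\n' > old.txt
git add old.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add old.txt'

mv old.txt new.txt   # plain mv, no `git mv`
git add -A           # stages the deletion and the addition together
git -c user.name=demo -c user.email=demo@example.com commit -qm 'rename to new.txt'

# --follow runs rename detection across the file's history, so both
# commits show up even though no special rename command was used:
git log --follow --oneline -- new.txt
```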


I keep reading explanations similar to yours, yet it obviously doesn't work in my case.

I'm probably doing something dumb/wrong, some setting somewhere, wrong OS, whatever it is: it proves my point that git is harder than it should be.


If you want to “checkout” some previous commit, jj has your back in three ways

- first, that commit that’s been merged to main is marked as immutable and, unless you add a flag to say “I know this is immutable and I want to mutate it anyway”, you can’t mutate it

- second, as part of your regular workflow, you haven’t actually checked out that historical commit. You created a new, empty commit when you “checked it out” using “jj new old_commit”

- third, you can use jj undo. Or, you can use "jj evolog" to see how a change has evolved over time (read: undo your mass find+replace by reverting to a previous state)


jj is pretty much just safer than Git in terms of the core architecture.

There are several things Git can't undo, such as deleting a ref (in particular, for commits not observed by `HEAD`), or determining a global ordering between events from different reflogs: https://github.com/arxanas/git-branchless/wiki/Architecture#...

In contrast, jj snapshots the entire repo after each operation (including the assignment of refs to commits), so the above issues are naturally handled as part of the design. You can check the historical repo states with the operation log: https://jj-vcs.github.io/jj/latest/operation-log/ (That being said, there may be bugs in jj itself.)


“jj op log” shows you the operation history; you can then use “jj op restore” to point at the state you want to restore to :) (disclaimer: i'm still a jj newbie, but this has gotten me out of the snafus i've put myself into while learning)


Can I ask what your motivation was for trying jj?

I'm always keen to explore new things but I don't have many complaints about git. I'm wondering what this solves that made it attractive for you.


I was always very frustrated with git workflow for working on multiple features/bugfixes simultaneously (multiple branches [1]). Changing between them, or combining them for testing is tedious -- constant stashing, switching, cherry-picking. Conflicts fit very poorly into git's version control model - you can't just tell git to ignore some conflict and continue for the moment so you can take care of it later. You have to stop the thing you wanted to focus on and instead babysit git because it found a conflict. Etc.

These are less of an issue once you've molded yourself to fit into git's strange ways, but jj feels like a much nicer tool -- especially for beginners, but feels like it frees up cognitive space even for more experienced folks. You can focus less on the tool and focus more on what you actually want to do.

[1]: I've tried using multiple working trees, but that workflow never really "stuck" with me.


I've done the multiple trees thing too, and agree it didn't work very well.

jj solved the biggest problem for me, which is how much time you spend rebasing when you have 1 PR = 1 stack of commits on top of main. It's easy enough to work on multiple branches this way, but it's a lot of repeated pain when `main` diverges and your changes on top are still out for review. (I honestly just started squashing all of my commits before review, so I would only have to resolve conflicts once.) jj fixes all of this. I especially enjoy working on a 3rd pending change that refers to the previous 2 pending changes; `jj new june/feature-1 june/feature-2` and then you add feature 3 there. You can even `jj squash --into june/feature-1` if something makes more sense being in a prior commit. It's all very wonderful if you are working with other people and you can't immediately mutate `main` upon finishing some work.
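In command form, the workflow described above is roughly this (the bookmark names `june/feature-1` and `june/feature-2` are from the comment; it obviously only runs inside such a repo):

```shell
jj new june/feature-1 june/feature-2   # feature 3 starts on a merge of both pending changes
# ... work on feature 3 ...
jj squash --into june/feature-1        # move a piece that belongs in feature 1 back into it
```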


i’ve always been interested in improving the git workflow i have for my small team (usually 2-3 other programmers), i think particularly hopping branches/commits, merging changes, rebasing and reorganizing history, i think git does suffice for that stuff if you know the commands, but jj makes it feel so fluid and easy to do, and having enough depth to get as expressive as you need to be

a lot of other tools i've found were lacking for one reason or another, or might not have been git compatible. with jj you can hop between git and jj commands as you please, essentially full compatibility with git


I didn’t really have any complaints about git, but people I trust told me to check out jj anyway. Now I’m not going back. Something can still be nicer without another thing having to be bad, basically.


> With git, it's basically impossible to mess up the repo state

I'd like to introduce you to a couple of my former colleagues...


As other commenters have mentioned, there's `jj undo`, but you can also configure a set of immutable commits (by default it's the main branch and heads of untracked branches) and jj will stop you from changing those unintentionally (there's a flag you have to explicitly set if you want to force it).


Er, well, never type `jj edit` and this will never be an issue?

I exclusively move in a jj repo with `jj new` and `jj squash` or `jj squash --to <rev>` as appropriate. I've been using it 8+ hours daily for months and have never, ever even thought of having this issue.


I find it quite funny that this ("just don't use ...", "8+ hours daily ...", "never had issues ...") is exactly the kind of thing that people who stay with git say.

And now people are saying it for jj.

Endless cycle it seems, with every new tool.


`jj edit` is specifically and exclusively for mutating existing commits.

Thus, if you're worried about mutating existing commits, don't use it.

What exactly is so hard to understand here? You're not making the gotcha point you seem to think you are - it's not like it's some common command that is hyper-overloaded and has to be used specially.

Just another example of the usual HN skepticism that isn't even skepticism, it's just smug ignorance. It's so exhausting. But sure, the countless people that keep claiming it's the single biggest tool improvement in some time are just idiots? suckers? hype-beasts? making it up? or what?

Like, the irony of you assuming that it must be as convoluted and hard to use as git is just... awesome. I love the Git defenders that literally can't fathom that there is actually a better mental model or simpler tool, and can't even be arsed to try it and see.


With JJ you can even override the revset that defines which commits are considered immutable. Feel free to set it to all commits, and JJ will not allow you to mutate anything unless you pass the --ignore-immutable flag.

For example, I've configured it to make any commit by anyone else immutable, regardless of branch:

    [revset-aliases]
    # To always consider changes by others immutable:
    "immutable_heads()" = "builtin_immutable_heads() | (trunk().. & ~mine())"


I was under the impression Waymo was the leader. Who are they behind and how?


i guess it depends on how you measure it.

1. how many miles does waymo drive compared to competition?

2. how many people use waymo versus competition?

3. how many miles of road does waymo work on?

the answer to all three is that Waymo's numbers are thousands of times smaller than those of Tesla's FSD, which granted is not free of disengagements and is still supervised. But then you have to ask what consumers want:

1. self driving that needs intervention once every few hundred miles and works anywhere

2. self driving that works only in low volume and predictable traffic, in a small section of a single city, sometimes still gets stuck and blocks intersections, breaks as soon as the road conditions change, but is entirely hands off

i think most people want #1, and the usage stats agree with that


Sure, but the major difference is that Waymo already operates a real service accessible to absolutely anyone in certain cities that actually transports passengers from point A to point B with no driver present in the car. Tesla FSD is not at that point yet.

I am not even saying this as some Tesla FSD hater, I used it a ton over the years and am generally happy with it. But claiming that, in its current state, it has a lead over Waymo is a bit questionable.


waymo is using an approach that is inherently limited and not scalable. they went for short term gains at the expense of knee-capping themselves long term. it's a fundamentally less robust approach.

FSD is really, really close. The recent 12.5 software is a big step forward for them, and their rate of improvement is really, really fast these days.

i appreciate the level headed discussion


Fair point about recent software updates, I can definitely believe that (I moved to a place where a car is not a necessity at the beginning of this year, so I didn't get to test the recent updates myself).

However, I can still see someone claiming that Waymo might have an edge in some form as a fair take, given they indeed have something that FSD, at the moment, doesn’t (which is full operation with no driver). Whether that is a viable long-term approach compared to FSD (or whether it is hitting the ceiling that FSD doesn’t have) is a solid point though.


> A C programmer who doesn't check the validity of pointers passed to functions and subsequently causes a NULL dereference is not a C programmer I want on my team.

I disagree. Interfaces in C need to carefully document their expectations and do exactly that amount of checking, not more. Where C lacks a strong type system, documentation, not runtime checks, should fill the gap. Code filled with NULL checks and other defensive maneuvers is far less readable. You could argue for more defensive checking at a library boundary, and this is exactly what the article pushes for: push these checks up.

Security-critical code may be different, but in most cases an accidental NULL dereference is fine and will be caught by tests, sanitizers, or fuzzing.


I agree with that. If a function "can't" be called with a null pointer, but is, that's a very interesting bug that should expose itself as quickly as possible. It is likely hiding a different and more difficult to detect bug.

Checking for null in every function is a pattern you get into when the codebase violates so many internal invariants so regularly that it can't function without the null checks. But this is hiding careless design and implementation, which is going to be an even bigger problem to grapple with than random crashes as the codebase evolves.

Ultimately, if your problem today is that your program crashes, your problem tomorrow will be that it returns incorrect results. What's easier for your monitoring system to detect, a crashed program, or days of returning the wrong answer 1% of the time? The latter is really scary, depending on what the program is supposed to do. Charge the wrong credit card, grant access when something should be private, etc. Those have much worse consequences than downtime. (Of course, crashing on user data is a denial of service attack, so you can't really do either. To really win the programming game, you have to return correct results AND not crash all the time.)


Yeah, but not checking for null in C can cause undefined behavior. One possible outcome of undefined behavior is that your program doesn't even crash, but rather continues running in a weird state. So such a bug doesn't always "expose itself".

If we accept that bugs are inevitable, and that accidentally passing a null pointer to a function is a possible bug, then we also conclude that your code really should include non-null assertions that intentionally abort() the program. (Which run in debug/staging mode but can be disabled in release/production mode.)
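As a sketch of that convention (the names `count_spaces` and `word_count` are illustrative, not from any real library): a runtime check at the boundary, and a debug-only `assert` inside it that `abort()`s in debug builds and is compiled out under `-DNDEBUG` for release builds:

```c
#include <assert.h>
#include <stddef.h>

/* Internal helper. Documented contract: `s` must not be NULL. */
static size_t count_spaces(const char *s)
{
    assert(s != NULL); /* aborts in debug builds; gone under -DNDEBUG */
    size_t n = 0;
    for (; *s != '\0'; s++)
        if (*s == ' ')
            n++;
    return n;
}

/* Library boundary: validate arguments once, then trust them internally. */
int word_count(const char *s, size_t *out)
{
    if (s == NULL || out == NULL)
        return -1; /* runtime check at the boundary */
    *out = count_spaces(s) + 1;
    return 0;
}
```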


Indeed, Rust's own standard library uses this method. There are lots of public-facing unsafe functions that can result in undefined behavior if called incorrectly. But if the standard library is compiled in debug mode (which currently requires the unstable flag -Zbuild-std), then it will activate assertions on many of these unsafe functions, so that they will print a message and abort the program if they detect invalid input.


The Rust compiler has even recently started inserting extra checks on unsafe code in codegen, e.g. checking on a raw pointer dereference that the pointer is aligned.


That raises a more general point. When you can't or don't have compile-time checks, removing run-time checks in production amounts to wearing your seat belt only when driving around a parking lot and then unbuckling when you get on the highway. It's very much the Wrong Thing.


I wouldn't really characterize it that way. You (ideally) shouldn't be hitting code paths in production that you didn't ever hit in testing.

But, in any case, if you are fine with the slight performance hit (though many C/C++ projects are not), you can always just keep assertions enabled in production.


Very good point. For C, I like the idea of sticking an assertion in there.


assert_always(ptr != nullptr);

(custom assert_always macro, so it doesn't get compiled out in release builds)
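One plausible definition of such a macro (any real codebase will have its own variant):

```c
#include <stdio.h>
#include <stdlib.h>

/* Like assert(), but NOT disabled by NDEBUG, so the check still runs
 * in release builds and turns a stray NULL into an immediate,
 * diagnosable abort instead of undefined behavior. */
#define assert_always(cond)                                       \
    do {                                                          \
        if (!(cond)) {                                            \
            fprintf(stderr, "assert_always failed: %s (%s:%d)\n", \
                    #cond, __FILE__, __LINE__);                   \
            abort();                                              \
        }                                                         \
    } while (0)
```

The `do { ... } while (0)` wrapper makes the macro behave like a single statement, so it composes safely with `if`/`else` bodies.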


I used to ask this same question in interviews: should C code always check for NULL? My favorite answer was that the code should have a notion of boundaries, runtime checks should happen at the boundaries, and debug-only assertions are nice but not required inside the boundaries.


For a structured approach to ptrace/syscall rewriting, you could try FB's reverie. I worked on and used it during an internship a few years back; it's pretty amazing at what it does.

https://github.com/facebookexperimental/reverie


> During the early part of this event, we were unable to update the Service Health Dashboard because the tool we use to post these updates itself uses Cognito, which was impacted by this event.

Poetry.

Then, to be fair:

> We have a back-up means of updating the Service Health Dashboard that has minimal service dependencies. While this worked as expected, we encountered several delays during the earlier part of the event in posting to the Service Health Dashboard with this tool, as it is a more manual and less familiar tool for our support operators. To ensure customers were getting timely updates, the support team used the Personal Health Dashboard to notify impacted customers if they were impacted by the service issues.

I'm curious if anyone here actually got one of these.


The PHD is always updated first, long before the global status page is updated. Every single one of my clients that use AWS got updates on the PHD literally hours before the status page was even showing any issues, which is typical. It’s the entire point of the PHD.

Through reading Reddit and HN during this event I learned that most people apparently aren’t even aware of the existence of the PHD and rely solely on the global status page, despite the fact that there is a giant “View my PHD” button at the very top of the global status page, and additionally there is a notification icon on the header of every AWS console page that lights up and links you directly to the PHD whenever there is an issue.

The PHD is always where you should look first. It is, by design, updated long before the global status page is.


> despite the fact that there is a giant “View my PHD” button at the very top of the global status page

If you don’t know what the PHD is, a big button pointing to it won’t do anything. People ignore big boxes of irrelevant stuff all the time.

AWS user of ~8 years and I’ve never heard of the PHD nor this sequencing of updating it first.


I can't say for sure that the company I work for didn't, but it certainly didn't make its way to me, and there are only 8 of us.


My employer is a pretty big spender with AWS. I didn't hear anything about anybody getting status updates from a "Personal Health Dashboard" or anywhere else. I can't be 100% sure such an update would have made its way to me, but given the amount of buzzing, it's hard to believe that somebody had info like that and didn't share it.


Yes, we had some messages coming through in our PHD.


This wouldn't be the first time. The status page was hosted in S3. It is hilarious in hindsight, but understandable.


> but understandable

Is it really? I get the value of eating your own dogfood, it improves things a lot.

But your status page? It's such a high-importance, low-difficulty thing to build that dogfooding it gives you a small benefit in the good case (dogfood something bigger/more complex instead) and a large drawback when things go wrong (when your infrastructure goes down, so does your status page). So what's the point?


I can really imagine what happened: an engineer wants to host the dashboard at a different provider for resilience. A manager argues that they can't do this, since it would be embarrassing if anybody found out. And why choose another provider? AWS has multiple AZs and can't be down everywhere at the same moment. The engineer then says "fu it" and just builds it on a single solution.


Arrogance.


I can confirm we got the Personal Health Dashboard notifications.


It's possible the Chromium/Electron thing is this bug[1], which affects new proprietary nvidia drivers on recent Chromium versions. I also experienced that bug, though I don't think it affected Firefox for me.

It resolved itself for me recently in all apps except Slack. I have no idea why. I'm fairly sure I didn't update anything.

While it was still reproducing, I tested a purported fix in the upcoming Chromium 87 (or 88?) and it was resolved. So just wait a bit or try Canary. More specific info in the bug thread of course.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=111304...


Yep. Only Slack ever had that problem for me. I reopen it again after every resume, when I have to work with my customers. I don't use Slack for anything else.


I can't stand the electron client of Slack, so I use ripcord instead.


According to [1], Crystal Macros "receive AST nodes at compile-time and produce code that is pasted into a program." This is basically the same as Rust procedural macros[2]. You're probably thinking of "Macros by example"[3]

[1] https://crystal-lang.org/reference/syntax_and_semantics/macr...

[2] https://doc.rust-lang.org/reference/procedural-macros.html

[3] https://doc.rust-lang.org/reference/macros-by-example.html


Being able to receive the AST in Crystal is all upside. But have a look here [1]: 90% of your macros are covered by those helpers. In Rust, doing the same things is either more difficult or impossible. For instance, I wanted Rust to inject allow/warn kinds of derives based on an env var. Nope.

[1] https://crystal-lang.org/reference/syntax_and_semantics/macr...


Rust wraps in release mode and traps in debug mode[0][1]. All large projects that I know of ship in release mode. Though I'd be interested to see a source for it not pessimizing badly.

[0] https://play.rust-lang.org/?version=stable&mode=release&edit...

[1] https://github.com/rust-lang/rfcs/pull/560
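A tiny illustration of those semantics with `u8` (so the wrap at 2^8 is easy to see); the named methods pin down the behavior regardless of build profile:

```rust
fn main() {
    let x: u8 = 255;
    // Always wraps, mod 2^8, in any build profile:
    assert_eq!(x.wrapping_add(1), 0);
    // Always detects overflow instead of wrapping or trapping:
    assert_eq!(x.checked_add(1), None);
    assert_eq!(x.overflowing_add(1), (0, true));
    // Plain `x + 1` would panic in a debug build and wrap in a release
    // build (tunable via the `overflow-checks` profile setting).
}
```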


See my reply to masklin in this thread for why this is incorrect. In release mode, the behavior of integer overflow in Rust is not UB/cannot happen; it is modulo 2^N arithmetic.

These two are not the same thing.


Huh? Parent wrote:

>> Rust wraps in release mode and traps in debug mode

You wrote:

> In release mode, the behavior of integer overflow in Rust is not UB/cannot happen; it is modulo 2^N arithmetic.

The parent didn't claim or imply that overflow is UB. The parent wrote that it "wraps", which is equivalent to your "modulo 2^N arithmetic". You are not in disagreement, so why are you disagreeing?


It is quite believable that trapping does not slow down Firefox. GCC is part of SPECint, and trapping is known not to slow down SPECint GCC. (It does slow down other SPECint benchmarks.)


The field of view of the camera is only 1.5x1.5 degrees[1], roughly 3x the size of the moon viewed from the ground.

[1] "NAR" in the upper left, and https://forums.vrsimulations.com/support/index.php/A/A_Forwa...


Some here are saying "Why not just use Rust/D/Zig/Nim/Crystal if you're going to break backwards compatibility?" I believe the proposal is closer to "We've removed this old feature, run this tool to replace it with the new one." C++ will keep looking like C++, not like Rust. Here's Google's "Codebase Cultivator" explaining the idea for their Abseil library:

https://youtu.be/tISy7EJQPzI?t=2209

I remember watching another talk where they propose something similar for the language itself, but I can't find it at the moment.


This.

Per Hyrum's law, someone will rely on any given behavior. So, if you choose backwards compatibility, it becomes harder for the committee to evolve the language over time.

The path of breaking changes can be made less painful, and does not necessarily invalidate all the code that is already written.

With that being said, the problem right now is not the decision of keeping backwards compatibility or not, but the fact that the standard should be explicit about it, so people know what to expect.


Yeah, there's a lot of discussion here as if the proposal is some radical breaking thing, when in reality it's stupid stuff like "stop pretending a byte isn't always 8 bits" or "if you're a platform that's still shipping a 10 year old compiler, you're not supported anymore in the latest language version." Problems that you'd never even dream of in most other languages.


Because C++ is not a single implementation language, rather driven by a standard with multiple implementations.

Such a solution is only possible if driven by the standard, otherwise it will never be fully available, just like the static analysis tooling varies by vendor.


That sounds worryingly close to the JavaScript model.


Unless you manage to integrate this directly into the compiler stack and have it work 100% of the time with no "ifs" or "buts" I don't think it'll work.

Maybe you could do what Rust does with its "epoch" system, which lets code written against different standards interoperate so you can migrate progressively without breaking backward compatibility. I suspect that it would be a lot harder to make it work for C++ however, mainly due to its extreme reliance on #includes (especially for anything using templates) and more common use of macros.

I'm not saying it's impossible, but I suspect that it would fragment the ecosystem quite a bit. Removing "old features" tends to have massive side effects in a language like C++, with metaprogramming, overloading, multiple inheritance, unlimited macro usage and complex symbol resolution rules.

So I think "why not just use Rust/D/Zig/Nim/Crystal" is warranted feedback for these proposals (and you could probably add Go, C#, Java and a few others).


Funny enough, Rust doesn't call them "epochs" anymore, we switched to "edition" before the release.

However, there's an active (and in my understanding, decently received) proposal based on it for C++ that is called "epochs".

