It’s one of those tools that’s well designed: you don’t need a thousand-line configuration file to be productive. The modal editing style is also an improvement over vim/neovim because you do selection -> action rather than action -> selection. Lastly, it’s written in Rust, which is a plus not because of the language per se, but because it’s a lot easier to contribute changes to, or tweak, a modern Rust codebase.
Hard disagree. It is not an improvement - it is a deterioration.
Repeat (the dot command, `.`) is far more powerful and general thanks to the operator -> motion model.
Much better composability with counts, registers, marks, and operators under operator -> motion.
Operator-pending mode has some unique advantages for sweeping edits that are painful in a select-first paradigm, e.g. `d/foo<CR>` (delete from the cursor up to the next match of "foo"). There's no equally fast equivalent in Helix.
Also, if you really want, you can select first in Vim too and do your job that way: use visual mode with `v`.
I work as a software engineer, so I was familiar with the stack I told the AI to build the forum with. Also, the forum was made specifically to see what vibecoding was like, so I didn't look for alternatives. But I'd say it's working quite well code-wise.
Yes, the only issue I had to debug was the password hashing iteration count being too high for the limited resources of the VPS I was using. That made logins and registration take about 2 seconds; other than that, smooth as butter. You can check the code at
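For context on what that kind of tuning looks like (a minimal sketch, not the forum's actual code; the KDF choice and iteration counts here are assumptions), the cost of a password KDF like PBKDF2 scales roughly linearly with the iteration count, which is exactly the knob you'd turn down on an underpowered VPS:

```python
import hashlib
import os
import time

def hash_password(password: str, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256; work factor scales roughly linearly with iterations
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    hash_password("correct horse battery staple", salt, n)
    print(f"{n:>9} iterations: {time.perf_counter() - start:.3f}s")
```

On slow hardware the highest setting can easily take a second or more per login, so the usual approach is to benchmark on the actual deployment machine and pick the largest count that stays within an acceptable latency budget.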
I think the spirit is in the right place but the framing is too extreme.
I am never going to be a software engineer either but I took a CS101 class and then data structures and algorithms.
Going from zero to passing those classes was the most useful combination of things I ever learned.
I am so glad I got to do this way before LLMs because I think I am absolutely the type of person who would have used LLMs to cheat and learn nothing from the class.
I think at least those concepts are vastly worth struggling with without the help of LLMs.
"Estimation is in points, not days" doesn't tell me anything. It's not like tasks have an intrinsic attribute that everyone can agree on (e.g. the sky is blue).
How are you estimating the points, if not by thinking about how hard the task is for you and how long it's going to take you?
And then another matter is that points don't correlate with who later takes that work. If the team is 5 seniors and 3 juniors, and the average estimate for a task is a 3, but the task falls to a junior, they will take longer, as is expected for their experience.
Points are not intrinsic or objective attributes, like the sky being blue. The scale is arbitrarily chosen by any given team, and relative to past work. But a common reference point is that 1 point is the "smallest" feature worth tracking (sometimes 1/2), and 20 points is usually the largest individual feature a team can deliver in a sprint. So it's common for teams to be delivering something between e.g. 50 and 200 points per sprint. Teams very quickly develop a "feel" for points.
> And then another matter is that points do not correlate to who later takes that work.
Yes, this is by design. Points represent complexity, not time. An experienced senior dev might tend to deliver 30 points per sprint, while a junior dev might usually deliver 10. If a team swaps out some junior devs for senior devs, you will expect the team to deliver more points per sprint.
So the PM must know the team's velocity to be able to estimate timescales for the project, which is what they actually care about, and this velocity metric is only as good as the team's estimation of complexity points?
> An experienced senior dev might tend to deliver 30 points per sprint
Seems a bit ironic that complexity doesn't measure time, but then we are measuring how much complexity someone can deliver on average in a given time. Isn't complexity directly proportional to uncertainty factors, and therefore inversely proportional to confidence in time to completion?
> So the PM must know the team's velocity to be able to estimate timescales for the project, which is what they actually care about, and this velocity metric is only as good as the team's estimation of complexity points?
Basically, yup. It takes a few sprints to establish a meaningfully reliable sense of velocity, and estimation accuracy is why planning poker takes a couple of hours of real discussion of feature complexity rather than a few minutes of superficial guesses. The end result is a far more accurate sense of what a team can reliably deliver in a sprint, and it's really good at bringing stakeholders down to earth about what can actually, realistically be delivered.
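To make the timescale estimation concrete, here's a minimal sketch (with hypothetical numbers, not anyone's real data) of the arithmetic a PM does with velocity: average the points delivered in past sprints, then divide the remaining backlog by that average:

```python
import math
from statistics import mean

def forecast_sprints(backlog_points: int, past_velocities: list[int]) -> int:
    """Estimate how many sprints remain, given points delivered in past sprints."""
    velocity = mean(past_velocities)  # team's average points per sprint
    return math.ceil(backlog_points / velocity)

# Hypothetical team that has delivered 48-55 points in recent sprints,
# with a 300-point backlog remaining:
print(forecast_sprints(300, [48, 52, 55, 50]))  # -> 6 sprints
```

The forecast inherits all the noise in the velocity samples, which is why it only becomes meaningful after a few sprints of history.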
> Seems a bit ironic that complexity doesn't measure time but then we are measuring how much complexity can someone deliver on average on a given time.
What's ironic? And no, it's not about "someone", it's about the team. Different people on the team will deliver different numbers of points depending on their skill, experience, etc. This is a major reason for not using time -- it explicitly recognizes that different people take different amounts of time, and that things like sick days and meetings get absorbed into the average.
> Isn't complexity directly proportional to uncertainty factors
Yes, this is an explicit assumption of the Fibonacci-style point scales usually used.
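As a sketch of that assumption (the scale values below are just one common convention, not a standard), estimates get snapped to increasingly coarse buckets, so bigger features carry explicitly coarser, more uncertain estimates:

```python
FIB_POINTS = [1, 2, 3, 5, 8, 13, 20]  # common planning-poker scale; 20 caps it

def to_story_points(raw: float) -> int:
    # Round up to the next allowed point value; the widening gaps at the
    # high end encode the growing uncertainty of larger features.
    for p in FIB_POINTS:
        if raw <= p:
            return p
    raise ValueError("too large: the feature must be broken up")

print(to_story_points(4))   # -> 5
print(to_story_points(14))  # -> 20
```

Anything past the top of the scale gets rejected outright, which is the mechanism behind "stories over a certain size are disallowed".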
> and therefore inversely proportional to confidence of time to completion?
Yes, which is precisely why stories over a certain size are disallowed (the feature must be broken up into parts), and why sprints are kept to a very small number of weeks -- to avoid accumulating too much uncertainty.
This was super useful. As a technical person willing to learn sales, the numbers you showed at the different stages of the funnel show that it's all a numbers game and rejection is the norm. From 487 connections to 2 paid clients. Great post!
I concur. Really great post! Been an engineer for a while, starting to prepare to put together SaaS products, so this will be useful to come back to. Thank you, OP!