You should be doing some fairly expensive hashing if you're storing the password correctly. Maybe not an issue for a 50k char password, but how about a 50 billion char password?
> You should be doing some fairly expensive hashing if you're storing the password correctly
Exactly. You aren't storing those bytes.
> Maybe not an issue for a 50k char password, but how about a 50 billion char password?
We're back to answering the question with another question, one that never actually gives an answer and just keeps throwing out larger and larger numbers. My response: "Yeah, okay. How about it?" Why stop at 50 billion? 99 trillion, let's go there next. Again: why not?
Because 50 billion chars is over 46 GiB of data. There are natural consequences of very large payloads and limits that you're going to reach as a result of those consequences (e.g. being prohibitively expensive for the client to send in the first place, or exceeding the server's connection timeouts for in-flight requests before the payload can be delivered). If "CPU utilization crosses threshold" is the real reason, then let that be the real reason—and let the safeguards you have in place for handling those problems do their jobs. And if "we cap passwords to X chars" is your safeguard, then you have bigger problems.
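A quick back-of-the-envelope check, assuming one byte per character:

```python
# Size of a 50-billion-character password payload, assuming
# one byte per character (ASCII; multi-byte encodings only make it worse).
chars = 50_000_000_000
gib = chars / 2**30  # bytes to GiB
print(f"{gib:.1f} GiB")  # roughly 46.6 GiB
```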
> If "CPU utilization crosses threshold" is the real reason, then let that be the real reason—and let the safeguards you have in place for handling those problems do their jobs.
To me, this feels a bit like passing the buck.
If you want to test your backend with 50-billion-character passwords as a safeguard in case things get screwy, that makes sense to me! But has that test actually been done? Are you sure?
I see this as analogous to the concept of "defense in depth". Safeguarding against very weird edge-cases which do not provide utility to anyone is a good idea. If you assume the other part of the chain can deal with it, and you're wrong, things blow up. If you assume the worst, all will be well regardless.
Consistency for the user is also important. If your login processing code does more work than your password-setting code, load limits may allow a password to be set that then can’t be used for login. A length limit acts like a fuse, ensuring that under load it’s the thing that breaks first and in a predictable way.
This still requires you to hash the hash on the server; otherwise the hash becomes the password, and learning the hash allows you to log in. But hashing the hash is actually better anyway, because it means the server never even sees the plaintext of the password—which matters, since users reuse the same password across multiple sites.
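A minimal sketch of that scheme, using only Python's standard library. SHA-256 stands in for the client-side pre-hash and PBKDF2 for whatever slow server-side hash you actually use; the iteration count and function names are illustrative, not prescriptive:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to your latency budget

def client_prehash(password: str) -> bytes:
    # Client sends a fixed-size digest instead of the raw password,
    # so the server never handles the plaintext.
    return hashlib.sha256(password.encode()).digest()

def server_store(prehash: bytes) -> tuple[bytes, bytes]:
    # Server applies its own slow, salted hash to the client's digest;
    # otherwise the digest itself becomes the password.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", prehash, salt, ITERATIONS)
    return salt, stored

def server_verify(prehash: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", prehash, salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)
```

Note the input to the server-side hash is always a fixed 32 bytes, regardless of how long the user's password was.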
This is a backend concern, not a frontend one. The backend shouldn’t naively accept input without making sure the input is within the backend’s limitations.
Client side validation does not replace server side validation, and vice versa. Just because you validate server side doesn’t mean you can’t also do it client side and avoid a round trip.
I guess? But if someone puts in a ten megabyte password they are either suffering a bug or blatantly screwing with you. It's okay to throw a 400 error at them.
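A sketch of that server-side check, with a hypothetical cap (the constant and the function name are illustrative):

```python
MAX_PASSWORD_BYTES = 1024  # illustrative cap; pick one that fits your hash's input limits

def validate_password_length(raw: bytes) -> None:
    """Reject oversized passwords before any expensive hashing runs."""
    if len(raw) > MAX_PASSWORD_BYTES:
        # Map this to an HTTP 400 in your web framework of choice.
        raise ValueError("password exceeds maximum length")
```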