At my company this typically results in users printing or writing the password down.
I wonder if the guys who create these heuristics/recommendations ever had contact with humans. I believe their research is thorough in their area of expertise (info sec); however, it sounds like they are considering only the data itself, ignoring human behavior. There's little value in enforcing hard-to-break passwords while also effectively encouraging users to write them down.
What I believe: Info sec researchers should team up with HCI people.
There's not much wrong with writing passwords down. Printing them is less desirable.
It's a hundred times better to have a difficult password on a post-it on a monitor than it is to have an easily guessable password. Who do you suspect is going to hack you? Ask that question honestly and you'll know how best to thwart them.
I understand your point, and I agree that the biggest threats aren't physically nearby. In that scenario ("remote" attacks), of course, there are bigger problems than written/printed passwords.
However, at the enterprise level, physically visible passwords are a big problem. Imagine a less-than-happy worker, about to leave the company, having the opportunity to collect coworkers' passwords. In such a scenario, less strict rules (say, rules that didn't make people write their passwords down) would have been beneficial.
And there's another point: the "perception" of IT security rules. If they ask too much of people (think "non-IT people"), they might create an image of overzealousness/"overcomplication". I wonder if this makes people less compliant with security rules in the long run.
The thing wrong with writing passwords down is that writing passwords down makes 2FA into 1FA. A password, when stored outside of someone's head, is a token, not a password.
If you really do 2FA, though, you should actually relax your password requirements. The most important attribute of a password used in a 2FA scheme is memorability, to make sure the user doesn't write it down (and thereby remove a factor.) Even a dictionary word works, as long as it's not one that's written down on e.g. the user's employee profile, like their mother's maiden name. Generating one or two dictionary words would be fine.
Keep in mind, the majority of 2FA security is in the token. As long as you verify the token first, the only power the password needs is to distinguish the device owner from someone who stole the device, or has snuck onto it. It doesn't need to protect against automated attackers; that's what the token (plus rate-limiting) is for.
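To make the ordering concrete, here's a minimal sketch of a token-first login check. This is my own illustration, not any particular product's flow: the user record, the attempt counter, and the plaintext comparisons are stand-ins (a real system would verify a TOTP/hardware token and compare salted password hashes).

```python
import hmac

def verify_login(user: dict, token: str, password: str,
                 attempts: int, max_attempts: int = 10) -> str:
    """Rate-limit first, then token, then password -- in that order."""
    if attempts >= max_attempts:
        return "locked_out"            # rate-limiting stops automated guessing
    # Verify the token before ever looking at the password.
    if not hmac.compare_digest(token, user["expected_token"]):
        return "bad_token"             # password is never even examined
    # The password only has to distinguish the device owner from a thief.
    if not hmac.compare_digest(password, user["password"]):
        return "bad_password"
    return "ok"

user = {"expected_token": "483920", "password": "staple horse"}
verify_login(user, "483920", "staple horse", attempts=0)   # "ok"
verify_login(user, "000000", "staple horse", attempts=0)   # "bad_token"
```

`hmac.compare_digest` is used for both comparisons to avoid timing side channels, though with a hashed password store that matters less.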
> It's a hundred times better to have a difficult password on a post-it on a monitor than it is to have an easily guessable password. Who do you suspect is going to hack you?
It depends on the threats you face. Generally, most attacks are from insiders.
I'm trying to do that actually. I organize the Boston Security Meetup where we have 150 attendees who come to Google Cambridge to listen to cybersecurity talks. I also organize UX Boston forum for user experience designers. I hope to get more security people interested in UX Design to understand the human aspects of keeping people safe.
> At my company this typically results in users printing or writing the password down. I wonder if the guys who create these heuristics/recommendations ever had contact with humans.
While I absolutely understand your sentiment, I think you might be conflating extreme password requirements with reasonable password requirements.
The article linked suggests that a strong password is 10 characters (that's the whopper), and three of four complexity requirements (capital, special, number, lower). That's not unreasonable. In fact, the only really difficult part of that is the 10 characters bit.
Switch that to 8 characters and you're golden.
Even better, have a five minute lockout and/or email unlock functionality after, say, ten failed attempts -- and you're doing great.
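The rule described above ("at least 8 characters plus three of the four character classes") is simple enough to sketch; the function name and defaults here are mine, not from the article:

```python
import string

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Length check plus three-of-four character classes."""
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= 3

meets_policy("Abcdef1!")   # True  (8 chars, all four classes)
meets_policy("password")   # False (only lowercase)
```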
I deal with web application security assessments on a daily basis, and the current status (as a general rule) is abysmal. Passwords won't fix most of those problems, but making sure that users can't set "password" as their password can at least improve one potential issue.
How about just measuring entropy based on some criteria (using characters from different sets MIGHT add entropy for each new unique set) and letting the end user decide what goes into the password and how long it is?
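A crude sketch of that idea: grow the assumed character pool by each character set the password actually uses, and score `length * log2(pool)`. The set definitions and the scoring are assumptions for illustration (real estimators like zxcvbn also penalize dictionary words and patterns, which this does not):

```python
import math
import string

# Hypothetical character sets; each set the password touches
# enlarges the assumed search pool.
CHARSETS = {
    "lower":   set(string.ascii_lowercase),   # 26
    "upper":   set(string.ascii_uppercase),   # 26
    "digits":  set(string.digits),            # 10
    "symbols": set(string.punctuation),       # 32
}

def estimate_entropy_bits(password: str) -> float:
    pool = 0
    for chars in CHARSETS.values():
        if any(c in chars for c in password):
            pool += len(chars)
    if pool == 0:
        return 0.0
    return len(password) * math.log2(pool)

# A long all-lowercase passphrase beats a short "complex" password:
estimate_entropy_bits("correcthorsebatterystaple")  # ~117 bits
estimate_entropy_bits("P@ssw0rd")                   # ~52 bits
```

The nice property is that the user chooses the trade-off: length or character variety, whichever they can actually remember.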