MUNI's shortcomings are visible during rush hour due to system collapse, and off-hours / on secondary routes due to missing runs or too-infrequent scheduling.
Moving around the most urban cores outside of rush hour, you typically won't see the worst of the warts.
Daaaang, that's got to be some power. I've looked at Sutro's architecture from across the bay, but never got up close to see what's on it.
I have a keyless-only car now, and this sort of stuff worries me. "Dead battery" mode has always saved me thus far, but in that situation? I'd be hosed. Can't even put it in neutral and coast down the mountain. (Electronic transmission, there's no shift-interlock bypass...)
Yeah, you have to remember that in pulse dial days a central office station had a reach of roughly 3 miles. The longer you go, the more you're paying for cable, repeaters, or just losing quality. 90 volt AC for ringing has a limited range!
> How do they colour-code the wires to identify them?
It’s actually pretty simple. There are only 10 colors: blue, orange, green, brown, slate, white, red, black, yellow, and violet. They’re grouped in “binders” (using colored strings). You’re likely familiar with the first four pairs from network cables (which omit the white/slate pair). After cycling through blue through slate paired with white through violet (25 pairs), the wires are bundled with binders, starting with blue/white string. That gets you to 625 pairs (the first picture posted above is 600 or 625 pairs). After that, the binder groups are bound in a similar fashion (typically, if you’re going beyond 625, the slate/violet binder is omitted to get a nice round 600 in the first group).
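The pair numbering falls out of that scheme mechanically, so here's a tiny illustrative Python snippet (my own sketch, not any standard library) that maps a pair number in a 25-pair group to its colors:

```python
# Standard 25-pair color code: five "tip" colors crossed with five
# "ring" colors. Pair 1 is white/blue, pair 25 is violet/slate.
TIP = ["white", "red", "black", "yellow", "violet"]
RING = ["blue", "orange", "green", "brown", "slate"]

def pair_colors(n):
    """Return (tip, ring) colors for pair number n (1-25)."""
    if not 1 <= n <= 25:
        raise ValueError("pair number must be 1-25")
    return TIP[(n - 1) // 5], RING[(n - 1) % 5]
```

So `pair_colors(6)` gives red/blue, the first pair after the white group is exhausted. The same 5x5 pattern repeats at the binder level.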
100-pair cable is only about 3/4” diameter. I have a 24-line 1A2 telephone that uses 75 pairs just to connect to the phone switch and two 100-pair cables feeding a telephone display case in my living room.
It takes me about a half hour to punch down 100 pairs on a 66-block. Old school telecom guys could probably do it in under 10 minutes.
Buuuuut.... not if you hijack the route to the destination. It helps if the DNS servers are hijacked, but if you can also hijack the route to the destination's IP, you can still get a certificate issued and serve traffic. Sneaky sneaky.
Yes, (last I checked at least) Let's Encrypt validates DNSSEC. This periodically causes problems for users who expected something to work but have a broken DNSSEC setup they never noticed, because nothing checked it before.
Removal performance of large directories is very bad. Reading large directories can result in lots of seeks. Directories are not suited to this at scale, and symlink directories definitely aren't portable.
A side note: at least in python, I benchmarked MsgPack as 3 times faster than JSON, and a whopping 850 times faster to read than YAML. It seems unlikely that people will ever be editing this file by hand, so for unstructured data, especially where playlists can get very large, I suggest MsgPack over YAML.
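For anyone wanting to reproduce that kind of comparison, here's a minimal sketch of the harness I mean; the playlist shape is made up, and msgpack is a third-party package, so only the stdlib `json` half runs out of the box:

```python
import json
import timeit

def bench_loads(loads, blob, number=1000):
    """Seconds to deserialize `blob` `number` times with `loads`."""
    return timeit.timeit(lambda: loads(blob), number=number)

# A hypothetical playlist-like payload, just for the benchmark.
playlist = {"tracks": [{"path": f"track{i}.mp3", "length": i} for i in range(500)]}

json_blob = json.dumps(playlist)
print("json:", bench_loads(json.loads, json_blob))

# With msgpack installed (pip install msgpack), the same harness applies:
#   import msgpack
#   print("msgpack:", bench_loads(msgpack.unpackb, msgpack.packb(playlist)))
```

Exact ratios will vary with payload shape and library versions, which is why it's worth benchmarking your own data rather than trusting my numbers.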
BUT... you have a defined schema, so it's probably to your advantage to use a storage format with a defined schema: ProtoBuf or Thrift. That would mean somebody trying to use your code would already have generated objects in their language.
As for the hashing algorithm, this is a good use for MD5 -- cheap and fast. You're unlikely to be concerned about somebody actively trying to generate the same checksum for two music files. For non-security (integrity verification) purposes, MD5 is still very appropriate.
Thank you, I agree. If I'm going to use JSON, I might as well use ProtoBuf.
About MD5, I was worried about a case where a service that serves user-submitted files would be exploited by MD5 collisions, leading users to open files that might exploit decoder bugs to execute code. Far-fetched, I know, but the tradeoff didn't seem worth it. I'm not married to that decision, though.
The question of hash usage made me think of an alternative approach -- what about some kind of audio perceptual hash? P-hash has support for audio hashes [1] (at least it claims to, but I've never used it). The metadata is useful, sure, but coupling it with the playlist seems like a bit of a strange design choice, if it could be avoided. In my mind, an ideal world would have two databases (or equivalent): one for metadata -> perceptual hash, and one for playlist -> List[perceptual hashes].
The downside of course is this requires pre-calculation of the p-hash for every track to use. But I can't think of a music application that doesn't require some kind of "library loading" step, so perhaps this could be accomplished then?
Of course none of this mitigates your concern with decoder bugs resulting in RCE, but I think that's probably best handled elsewhere (for example, sandboxed upload validation in your hypothetical user-uploaded-files service).
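To make the two-database idea concrete, here's a toy sketch of the split I'm describing; the hash strings are made up placeholders, and a real system would compute them with something like pHash or Chromaprint:

```python
# Two mappings, kept separate: track metadata keyed by perceptual hash,
# and playlists as ordered lists of those hashes. The "phash:..." values
# here are invented for illustration only.
metadata = {
    "phash:a1b2": {"title": "Track One", "artist": "Someone"},
    "phash:c3d4": {"title": "Track Two", "artist": "Someone Else"},
}

playlists = {
    "road trip": ["phash:a1b2", "phash:c3d4"],
}

def resolve(playlist_name):
    """Expand a playlist into track metadata, skipping unknown hashes."""
    return [metadata[h] for h in playlists[playlist_name] if h in metadata]
```

The nice property is that renaming or moving a file touches neither mapping; only the (separate) hash-to-path index needs refreshing.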
The most annoying thing is not the time dedicated to cultivating a playlist of your favorite tracks, but LOSING ALL of them when you move or rename a file. And that is something that happens in 99.99% (I'd say) of all cases with any music library over time.
Example: I just moved all my mp3 and m4a files to the microSD card on my Android phone, keeping names and folders identical, but my playlists are all empty now. Thank you, Samsung! grr..
What I'd really want is a playlist format that is path-independent and p-hash-independent (but utilizes p-hashes if available or requested). The metadata should always be saved inside the file, because metadata gets lost when you change programs. Filling an SQLite DB with all the metadata saved in your audio files should only speed up metadata management and sync, not take control of it away from you.
Features I think make sense to expect from a perfect music player (without vendor lock-in), whether it runs on mobile, web, desktop, the CLI, or as a daemon:
• Playlist export options for e.g. Samsung Music Player, iTunes, or whatever crap we're currently locked into
• Save metadata (incl. rating) inside the audio files themselves, because metadata gets lost when you change programs
• Extract metadata from audio files into a database for speed, management, and easier sync back into the files
• Audio fingerprinting to allow detection of duplicates (hash-independent) and similar tracks, genre classification, and mood mapping
• Batch-convert between FLAC, MP3, Ogg, MP4, and M4A if the user wishes, without stupid dialogues. 128 kbit LAME-encoded MP3s don't get any better converted to "best quality" Ogg..
• Create p-hashes (or another perceptual hash) initially, when idle, periodically, or on request
The AcoustID fingerprint is exactly what you describe, and is already supported by the format. I agree that my crypto concerns are probably handled elsewhere, and, since I'm not actually doing crypto, I might as well add MD5/SHA1, which are more ubiquitous.
Haha, the bikeshedding argument is fair, but not terribly so. It's good to receive feedback of all sorts, and then the designer (me) thinks about it, distills it and makes something that's (hopefully) better than what was there before.
It's only bikeshedding when the decision has to be made by community!
It depends on which 2FA method you use, and there's an associated time window. The TOTP method (Google Authenticator app) generates a rotating number that must be used within a window of at most a few minutes; new numbers are generated every 30 seconds, so an attacker could reuse one if they logged in immediately.
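The 30-second rotation is visible directly in the algorithm (RFC 6238): the code is just an HMAC over the current 30-second counter, so a phished code stays valid only until the counter ticks over (plus whatever grace window the server allows). A stdlib-only sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))          # 64-bit big-endian
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Two calls with timestamps in the same 30-second bucket return the same code; cross the boundary and the code changes, which is exactly the attacker's window.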
If you use U2F, then the domain name difference will mean that the U2F key can never match unless the attacker has control over DNS and is issued a Google.com SSL certificate by an authority the target's computer trusts.