I don't get it. Loudness normalization past the mastering stage (i.e. applied to the audio track the label uploaded) only makes the whole track louder or quieter; it can't restore the track's actual dynamic range. Furthermore, the DR meter can easily produce bogus results if the equalization changes or with non-lossless joint-stereo encoding algorithms.
Humans like loud music, so louder music sounds better and sells more. So, how do you make your music MAXIMUM LOUD? Well, our digital formats have a setting for how loud any given sample is, but you can't just set everything to the maximum value, because then every sound in your track would be equally loud. Or can you? Not all parts of a cymbal "phish" are equally loud, so pushing it all to the max distorts the sound (sacrifices dynamic range), but will consumers really care?
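The trade-off can be sketched in a few lines of toy Python (hypothetical numbers, not real audio): boosting the gain and hard-clipping whatever exceeds full scale raises the average (RMS) level, but it flattens the gap between the loud peaks and the quiet tail, i.e. the crest factor collapses.

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def clip(samples, gain, ceiling=1.0):
    """Boost by `gain`, hard-clipping anything past the ceiling."""
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# Toy "cymbal hit": a loud transient followed by a quiet tail.
track = [0.9, 0.7, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01]
loud = clip(track, gain=4.0)          # "mastered for loudness"

print(rms(track), rms(loud))          # average level goes up...
print(max(track) / rms(track),        # ...but the crest factor
      max(loud) / rms(loud))          # (peak/RMS) drops: less DR
```

The clipped version measures louder on average even though its peak is barely higher, which is exactly the trick the loudness war exploits.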
YouTube, by adjusting all tracks to the same average loudness, is basically saying: "those of you who would give up essential dynamic range to purchase a little temporary loudness deserve neither dynamic range nor loudness."
The hope is that eventually labels will quit sacrificing dynamic range for loudness if all our digital music sources set the average loudness to be the same for all tracks.
Applying ReplayGain or EBU R128 loudness equalization will not change the perception of dynamic range and, all other things being equal, will actually slightly decrease the DR rating. And judging from the description, TFA is not talking about ReplayGain-like algorithms.
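The reason a ReplayGain-style adjustment can't touch dynamic range is that it is just one scalar gain for the whole track: peak and RMS scale by the same factor, so their ratio (a crude stand-in for dynamic range) is unchanged. A minimal sketch with made-up sample values:

```python
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

track = [0.8, 0.4, 0.2, 0.1]
gain = 0.5                        # e.g. "turn this track down ~6 dB"
adjusted = [s * gain for s in track]

crest_before = max(track) / rms(track)
crest_after = max(adjusted) / rms(adjusted)
print(crest_before, crest_after)  # same ratio before and after
```

Every sample moves together, so the loud parts stay exactly as far above the quiet parts as they were.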
Yes, that's true. But in the long run, with YouTube being such an important player, the results should be positive.
The record companies' mixing engineers have been escalating a loudness war for years, in an effort to make the biggest impression on listeners. And the result of that is poor sound quality.
Now YouTube comes along and says "we're resetting you all back to a standard overall volume". Suddenly all that escalating loudness hasn't accomplished anything (or very little; probably there are ways to game the system a little), at least when listened to through YouTube, which as we said is hugely important.
If they're no longer able to compete on loudness, and the vain attempt to do so damages the quality of the recording, then in the long run the mixers ought to quit doing it. It's too late to save Death Magnetic, but maybe we can reclaim fidelity in future recordings.
Although the tracks have been normalised to have the same average loudness, the more aggressively compressed tracks that have less dynamic range will still sound louder at this lower level. I don't see how what YouTube is doing is going to help.
If the lower dynamic range tracks sound louder even after normalization then the normalization algorithm is flawed. ReplayGain weights the energy by frequency to better match perceived loudness, and it does a reasonably good job. Other algorithms might do an even better job at matching human perception.
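ReplayGain's actual model is an equal-loudness filter plus RMS statistics over blocks, but the general idea of frequency weighting can be illustrated with the simpler, standard A-weighting curve (IEC 61672), which de-emphasizes the frequencies humans hear poorly and is 0 dB at 1 kHz by construction. This is an illustration of weighting in general, not ReplayGain's filter:

```python
import math

def a_weight_db(f):
    """A-weighting (IEC 61672): relative response in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

print(round(a_weight_db(1000.0), 2))  # 0.0 dB at 1 kHz, by definition
print(round(a_weight_db(100.0), 1))   # ~ -19.1 dB: bass counts far less
```

A weighted loudness measure sums energy after applying a curve like this, so a bass-heavy track doesn't register as louder than it actually sounds.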
Those two tracks have pretty similar loudness (A-weighted RMS), and yet the compressed one sounds louder than the measured difference would suggest: ~3 dB A-weighted, which is not obvious for an untrained listener to perceive.
As a test: download them both, amplify the uncompressed one by -2.7 dB, and listen to them again. They then have the same A-weighted RMS, yet the uncompressed one sounds louder.
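For anyone wanting to try the experiment above: a gain expressed in dB converts to a linear amplitude factor as 10^(dB/20), so "amplify by -2.7 dB" means multiplying every sample by about 0.733. A quick sketch:

```python
def db_to_gain(db):
    """Convert a level change in dB to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

print(round(db_to_gain(-2.7), 3))  # ~0.733: the attenuation used above
print(round(db_to_gain(-6.0), 3))  # ~0.501: -6 dB is roughly half amplitude
```

Most audio editors accept the dB value directly, but this is what they do under the hood.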
As I understand it, tracks with more dynamic range will have louder louds and quieter quiets, while those with low dynamic range will have less of either.
That's true, but most people are used to a smaller DR (up to a point, of course, though that point is way out there) and still prefer the lower-DR track even after normalization.
So, actually, there's not much evidence that points to this. We see albums with good dynamic range selling just as well as louder ones with shitty dynamic range.
FM radio has long normalized volume across songs, so the only place this mattered was on purchased CDs.
It's some weird feedback loop. The initial input was true - up to a point, humans do think louder things sound better - but that idea has now become as distorted as the audio tracks they're putting out.
I believe what makes the author excited is that the very fact that YouTube does this normalization means a properly mastered track and a track mastered for maximum loudness will play at roughly the same volume. The benefit is that tracks whose dynamic range has not been compressed will sound far better than the loudness-war tracks.
In other words, there is no reason for the studios to master for loudness (at least for Youtube audio).
Dynamic range compression (DRC) is destructive, so of course the normalisation cannot repair the damage. But the point is that there is no longer any reason to use DRC simply to raise the playback level, because Youtube will undo any such attempt. DRC now simply reduces the dynamic range, making music sound worse, not louder.
I came here to say the same thing. This move probably has everything to do with making sure people don't have to constantly change the volume when shifting between music videos and random amateur cat videos. The simple fact is that YouTube can't change the average loudness of a given audio track without applying some form of "destructive" processing, either compression or expansion. A more dynamic track will always be, on average, quieter than a less dynamic track. Re-processing a bunch of audio tracks with just normalization does not change their average loudness, unless of course they are selectively applying compression to audio they find in need of a boost.
Indeed. Perceived loudness is a complex mix of signal level, dynamic range, and frequency content. Perfectly undoing what was done in the mastering studio is not only difficult but maybe impossible.
I'd be cautious before celebrating.