FLAC Support in Firefox 51 (bugzilla.mozilla.org)
317 points by kawera on Sept 4, 2016 | hide | past | favorite | 189 comments


The weird thing is that this has been supported for years. Firefox has a compile-time flag that allows it to use operating system codecs; it is just not activated in the official binaries. (I think it also uses a codec whitelist on some OSes, and you have to comment out a line.) On Windows it used DirectShow codecs, on Linux GStreamer, and on OS X probably QuickTime. It's great to be able to just point the browser at a file on the local net and play everything the system can play (and even if it couldn't load the system codec, it could load an MPlayer plugin).

I used this for a while on Linux when there was a codec war between Firefox and Chrome (I forget the details), and I could play videos that normally only played in Chrome without problems.

Browser vendors cite usability problems (if one person's browser supports more codecs than the default, it is confusing for other people with the default browser... and the person with more codecs might create websites that depend on this) and stability problems (plugins in any form have a bad reputation, but the default OS media player codecs are actually of pretty good quality, as they are exercised thoroughly). But honestly, I think it is mostly politics that this is not enabled.


One big concern is security: code like this is notoriously prone to exploits and once shipped everyone is exposed whether or not they use it.

That doesn't mean new formats can't happen but it should encourage some careful assessment of the benefits relative to the need to support it for years.

(This is one of the stated reasons why Firefox never shipped JPEG 2000 support despite some demand)


Good point. Decoders for various formats are being reimplemented in Rust, which may make Mozilla less hesitant. There are still no guarantees, but it seems much less dangerous.


It will definitely help, but it also increases the cost of adding support if that means porting an existing code base and having to maintain it. It'll be interesting to see whether projects start migrating wholesale, so that it's not just a Mozilla effort.


True, but there is no intrinsic reason why code from one vendor (Mozilla, the Chrome group at Google) should be more or less secure than that from the OS developer (e.g. MS, Apple).

In fact, probably everybody except large browser vendors should avoid rolling their own codecs (security, networking libs...) unless absolutely necessary.


I disagree with this idea in the specific domain of Web Browsers. If a vulnerability is discovered in a codec that ships with Firefox, Mozilla can have a patch out within 24 hours. They only have to test that codec against their own browser software.

Microsoft and Apple can technically roll out a similar patch within 24 hours, but will rarely do so except for major bugs with severe real-world impact, as they have to test all of the rest of the operating system software against changes to system libraries. (Linux users get to update their libraries independently, so my argument doesn't really hold water there.)

Since my web browser is one of those bits of software that I regularly expect to load code that I will not vet, from developers whom I don't necessarily trust, I feel like I'd rather have it loading codecs that were designed with the web in mind. I don't think I trust it running my OS codecs, especially for more complicated formats.

This still doesn't mean I'd expect Mozilla to write their own codecs from scratch; certainly they should pull that codec in from an open-source, well-written, and well-supported library. But I think they're in a better position to respond to threats and patch domain-specific issues out of that library than my OS vendor is.


Codecs often aren't written by the OS vendor. Throughout the 2000s a lot of Windows users got their codecs via less-than-reputable codec multipacks from dubious sites. The VLC/MPC-HC/Web "just works out of the box" method is more user-friendly and secure than a bunch of independent binaries, written by various people, which may end up conflicting with each other.


Much codec code was written years/decades ago with zero concern for security, so browsers have to have whitelists and private implementations.


Diverse code ownership is a security issue. Even if the OS code is just as "good" as the browser code (and there are reasons to believe that's not necessarily the case), the assumptions it makes are at least partially encoded as knowledge by the authors. When you're dealing with benign code, that's no problem - people work around OS/library bugs and/or (mis)features all the time - but if you're going to expose the code to potentially hostile third-party input, it's a little tricky. There's simply more opportunity for miscommunication when there are multiple parties involved and the spec is complex and may lend itself to a native (unsafe) implementation for performance reasons.

Browsers are right to be hesitant here. Frankly, they should be hesitant, and sandbox the plugin, and use something like Rust, and even then I expect there will be exploits sometimes.


> there is no intrinsic reason why code from one vendor ... should be more or less secure that from the OS developer

If you're writing a cross-platform application, it's often easier to be confident in platform-agnostic parsing code that uses the platform sandboxing primitives, versus yielding parsing to the operating system, which may decode in a different execution context, and be less secure compared to the application security policy.


Of course it's politics. Google ships Chrome with an unoptimized H.264 software decoder, causing 1080p60 H.264 YouTube clips to stutter on a 4 GHz Intel CPU while the more CPU-intensive VP9 is butter smooth, and the same 1080p60 H.264 clips downloaded to disk play at 30% CPU in MPlayer.


That's not just politics, it's also patents.


Patents don't make people ship software compiled without optimization. Intel did the same thing with its compiler to push its CPUs, Nvidia with PhysX compiled to use the x87 FPU instead of SSE/AVX, and so on; it's all politics.


Finally!

I wonder why this took so long. The assumption that no sane site would want to stream lossless when lossy codecs were starting to be really good (and obviously much smaller)? Lack of expertise, manpower? Priorities? [1][2] Does every FF feature have to be 'parity-chrome'?

It's even more interesting that Chrome also sat on this for ~5 years [3] and is just now about to release it as well.

Like the Firefox thread insinuates, will pundits credit TIDAL for lighting the fire under browser vendors to support lossless streaming? No such link appears to exist, aside from TIDAL already streaming to Chrome using NaCl [4], but when we look back in 10 years and see both of the major browser vendors adding FLAC support now as opposed to any other time in the previous 5 years, what will people think?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=514365 [2] https://bugzilla.mozilla.org/show_bug.cgi?id=586568 [3] https://bugs.chromium.org/p/chromium/issues/detail?id=93887 [4] https://support.tidal.com/hc/en-us/articles/202654692-HiFi-O...


> I wonder why this took so long.

Because supporting lossless audio streaming is as useful as supporting TIFF images or UTF-32 text encoding. Nice to add to the spec sheet; mostly useless in a practical setting.

The only purpose that comes to mind for delivering lossless audio within a web page is to build, somewhat ironically, a lossy codec comparison tool which would in turn demonstrate why lossless audio isn't necessary.


> lossless audio isn't necessary

As an audio engineer, I must point out what is wrong with this statement.

Sure, the average listener cannot discern between FLAC and a well-encoded lossy audio file. However, with the Web Audio API making big advancements, the need for in-browser FLAC support is becoming quite apparent.

People are building all sorts of web apps for creating, manipulating, and processing audio. For this use case, working with lossy audio is out of the question. Supporting FLAC means the ability to follow signal flow best practices [0] without requiring the user to work with much larger PCM files.

[0] https://en.wikipedia.org/wiki/Audio_signal_flow


I think this is the most reasonable argument for supporting FLAC natively. We have support for lossless images in the browser already - TIFF isn't the image analogue, but rather PNG.


Except that PNG is by far not sufficient in professional image manipulation: its compression scheme is good and 8uc{3,4} is lossless, but beyond 8bpc you're out of luck.

I'd love to have a wide-supported float32 format with reasonable lossless (and maybe lossy) compression options.

Conversely, FLAC supports different resolutions and sampling rates, so it does do the job for the sound processing.


Not necessarily. I use a site local to my network to play music in my browser using the audio tag [0]. Some of the music in my library is flac. I never got around to encoding it, and the transcoding support of my player never worked perfectly, so much so that I finally got annoyed and cut it. Having flac support in the browser will make those audio files work without further work on my side, which is great.

[0]: https://github.com/onli/music-streamer, but there are probably better alternatives


That's a reasonable counter-example. Thank you.


Lots of browsing activity occurs on intranets, where bandwidth is effectively infinite. But I am not sure why more native code that handles file parsing is being added to Firefox. An emscripten-based extension to Firefox could have been just as effective. And to support all browsers, one will need to drop in a JS flac library anyway, making this addition moot.

The discussion in the bug itself covers JS flac implementations https://bugzilla.mozilla.org/show_bug.cgi?id=1195723#c2

There doesn't seem to be any testing around security, attack surface, etc. Disappointing; I wouldn't recommend that anyone use Firefox for the foreseeable future.


Shipping new Firefox code that parses untrusted data from the network normally requires a security review, and a new fuzzer for the data format in question is usually written. There's lots of process and infrastructure around this sort of thing at Mozilla. This is sometimes handled in a separate bug from the one implementing the feature. I'm surprised that the corresponding security review bug isn't linked from the FLAC bug, though, which may be an oversight. (Or it may be the non-public blocking bug, and it's non-public because an issue was found.)


Now that I think of it, the majority of this work should have been done in JS land, with the heaviest decoding being exposed as a document._firefox.media.flac_decode() method that takes a buffer containing a FLAC stream. That one FLAC decoder could be formally verified; everything else lives in user land.


>And to support all browsers, one will need to drop in a JS flac library anyway, making this addition moot.

That's silly. By that argument no browser should ever bother adding any new features.


Not ones that are trivially added "in user land".

Browsers should add features where only they have access.


We have the exact same use-case for TIFF or JPEG 2000: users can browse a giant repository. They can view many files directly but we have to run a transcode server for browser-unsupported formats, which also means things like the standard Save As UI no longer work as expected.

That's not clearly bad - the list of obscure formats is long & it's easier to update an app than every client - but it is a cost.


If your source media is FLAC (which is a good way to store 'source' material, that you might lossy-encode later) then being able to just stream that without any encoding step is handy. Maybe you wouldn't do this if you were concerned about bandwidth, but I'd certainly welcome this in a player to use on my LAN.


> Nice to add to the spec sheet; mostly useless in a practical setting.

I'm still waiting for the day when the Opus codec is supported everywhere.

Opus is some serious next-gen shit: https://www.opus-codec.org/


I have a pretty decent headphone system at work. I'm constantly using services like Tidal HiFi and listening to my own flac collection from Style Jukebox. I just want to have my music in the original quality and, if needed, transcode down to mp3.

So there is at least one of us who has a serious need for flac in a browser. And the bandwidth it uses is still less than with a typical full-hd video stream.


Why not stream in ~256kbps AAC or Opus? The audible quality drop is objectively proven to be zero. Make peace with the placebo effect and enjoy your life.


Those listening tests are done on stereo headphones. I often listen to music through a 5.1 system, which processes the stereo signal through a Dolby Pro Logic II decoder in order to feed the 5 speakers. This stresses lossy codecs much more than listening through headphones; phase information is used for sound placement, and sounds that would normally be masked often end up sufficiently localised to discern (this is part of the fun of a surround system). I can definitely hear outright artefacts at rates and codecs purported to be transparent.

You never know how a piece of audio might be used, and once you've thrown away information you can't get it back.
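The phase sensitivity can be made concrete with a toy passive sum/difference matrix (a simplification of my own; real Pro Logic II uses frequency-dependent steering logic): in-phase content steers to the center and out-of-phase content to the surrounds, so any interchannel phase error a lossy codec introduces lands directly in the surround channel, unmasked.

```python
def passive_matrix_decode(left, right):
    """Toy passive matrix decode (illustrative only, not Pro Logic II):
    center ~ L + R captures in-phase content, surround ~ L - R captures
    out-of-phase content."""
    center = [(l + r) / 2 for l, r in zip(left, right)]
    surround = [(l - r) / 2 for l, r in zip(left, right)]
    return center, surround

# A perfectly in-phase signal produces a silent surround channel...
c, s = passive_matrix_decode([1.0, -0.5, 0.25], [1.0, -0.5, 0.25])
assert all(x == 0.0 for x in s)

# ...while an anti-phase signal is steered entirely away from the center.
c, s = passive_matrix_decode([1.0, -0.5], [-1.0, 0.5])
assert all(x == 0.0 for x in c)
```

Anything a codec does to interchannel phase, inaudible in plain stereo, shows up verbatim in `surround` here, which is why matrix decoding stresses lossy codecs.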


Thank you.

In a past life I was an audio engineer, and I spent hours tirelessly testing mixes on every kind of system. You are the listener I was working for.


Sounds like I should be thanking you. I'm amazed at how good things often sound when treated that way - I'm glad that artistic input goes into it.


I agree in principle, however I disagree with the logic insofar as anyone who is listening to music through Pro Logic (or a similar matrix decoder) is by definition not a target audience for pedantic degrees of audio signal preservation.


Not really sure what you're trying to say. Why am I not allowed to be in the "target audience" for not wanting artefacts in my music?

Your reasoning seems to be "only deluded, pedantic audiophiles want lossless, and audiophiles don't listen through matrix decoders". But that begs the question - the whole point of relating my personal experience was to show how a perfectly reasonable and commonplace listening style could benefit from lossless encoding.


Please don't put words in my mouth. I would never say that you have to be deluded or pedantic or an audiophile in order to justify lossless audio files. And I would never ever say that "audiophiles don't listen through matrix decoders".

I completely accept that matrix decoding is a legitimate way to listen to music. I do it myself on occasion. But even before you throw lossy compression into play, it's worth acknowledging that Pro Logic decoding of a non-Pro Logic signal is hit-and-miss at best, which can often trip up or be generally unpleasant with many types of material.


Oh, I forgot one big deal: Bluetooth audio. I use bluetooth headphones when commuting and I have a Bluetooth receiver at home for convenience. Here it's a big deal if you apply lossy compression twice: once to MP3 and again over Bluetooth. The loss in quality is significant.


Decent bluetooth receivers can decode mp3 / mp4 & a correctly set-up sound path should pass the mp3 packets straight from source to speaker without a decode/encode stage.

Weirdly, this seems to be one of those tech things where it’s impossible to actually find out whether a bluetooth device supports more than just the mandatory codecs & it’s equally impossible to find out whether your sound path is passing the source through or not. It’s all completely opaque.


I really need a technology like Bluetooth, but there seems to be no other that works with all devices. Bluetooth audio is a black box and you can never be sure what happens between the device and receiver.


Are you talking about apt-x?


APT-X is a proprietary codec that performs far better than the original bluetooth audio codec & can be used for things that said codec really wasn’t up to. (It also includes a lossless mode for the audio purists amongst us.)

Modern bluetooth devices can also negotiate various other codecs as part of the A2DP spec: https://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Adv... including mpeg3/mpeg4 encodings but only SBC is mandatory and it seems to be impossible to tell what codec any particular Bluetooth audio device is using on any platform: I’ve not seen anything that will tell you what it’s really sending over the air.


If you're able to test with an Android device running 4.4 or later, you should be able to turn on Bluetooth Logging in Developer Options.

Wireshark can read bt_hcisnoop.log as a Symbian OS Bluetooth Log (really). From a bit of fiddling, you're looking for a series of AVDTP requests with GetCapabilities (to your speakers/headset/whatever) and then SetConfiguration (from your phone/player). Filtering by btavdtp.service will get you this.

I can tell you my phone is ignoring SBC and AAC support and asking for APT-X only.
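Once you have those AVDTP capability payloads, identifying the codec is just reading one byte. A toy parser, with codec-type IDs per the A2DP specification (the payload bytes below are a fabricated example, not a real capture):

```python
# A2DP advertises codecs inside AVDTP "Media Codec" service capabilities.
# Codec-type IDs per the A2DP specification.
A2DP_CODECS = {
    0x00: "SBC",
    0x01: "MPEG-1,2 Audio (incl. MP3)",
    0x02: "MPEG-2,4 AAC",
    0x04: "ATRAC",
    0xFF: "Vendor-specific (e.g. aptX)",
}

def codec_name(capability: bytes) -> str:
    # Media Codec capability layout: service category (0x07), length,
    # media type, then the codec-type byte.
    assert capability[0] == 0x07, "not a Media Codec service capability"
    return A2DP_CODECS.get(capability[3], "unknown")

# Fabricated SBC capability: category 7, length 6, media type audio (0x00),
# codec type 0x00, then codec-specific configuration bytes.
assert codec_name(bytes([0x07, 0x06, 0x00, 0x00, 0x3F, 0xFF])) == "SBC"
```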


That is impressively convoluted. I have a feeling it ought to be possible to write a gstreamer plugin to do this, but it’s one of those minor projects that never got over the do-I-care-that-much hump.


I've got a feeling that by the time an audio stream gets to GStreamer it's not in the original format. Also you'd have to get an A2DP sink working (to make your phone think the computer is a headset / speaker)


I have never seen a subjective comparison where Apt-X beats SBC. It seems to be all marketing hype.


You shouldn't be using bluetooth headphones with lossy compression if you care about audio quality.

(And it should be made clear that I have no in-principle issue with anyone purchasing or storing their local music library in a lossless format. It's a valid decision, particularly if you're establishing your music library in the last 5-10 years.)


So to summarize my thoughts (had a nice cup of coffee and some thinking, it's Sunday anyway): the only people I allow to compress the music I buy are the artist/producer and me. I really don't like having a third party in the middle turning the knobs on the music I buy.


You are of course aware that what you are listening to has already been tweaked, knob-turned and filtered by the recording studio, sound engineers, artists and whoever else was involved. In the end music is subjective. It sounds different on pretty much every playback system. And to the listener, it either sounds good or not. It's never "correct".


Of course. The typical music I listen to (techno mainly) goes through the producer's studio and usually has another person doing the mastering. Until now all compression is done by professionals who have an ear for how the end product should sound (with their Genelec studio monitors, of course). I'm fine with this.

What I don't want is an automatic lossy compression done by systems over which the artist has no influence. Depending on the sound of the original music, this might have no effect, or the compression might completely ruin the dynamics and sound of the original production.

In this day and age, bandwidth is cheap and connections are fast. Disk space is cheap and cloud storage is cheap. I don't see many reasons to use lossy files anymore, and with flac I can be sure that I have the closest possible copy of the production that left the artist's studio.


Yes, you are definitely conflating dynamic range compression (dynamics) and data compression (codecs). These are very different, completely unrelated issues that unfortunately bear the same linguistic shorthand of "compression".

In the way they are commonly used, lossy compression codecs have zero impact on dynamics. Correctly used, lossy compression can have zero audible consequence no matter how good the equipment or how "golden" the ears.

That said, a recording studio shouldn't ever be dealing with lossy codecs, because there's simply no need to, and because there's a sliver of possibility that the inaudible lossy artefacts could compound into an audible artefact over multiple generations.

The only time a lossy codec should ever be used is by distribution networks and/or end users.


> Correctly used, lossy compression can have zero audible consequence no matter how good the equipment or how "golden" the ears.

That's why the compression should be done by me or by the original producer, to have it done correctly case by case. Lossless is a good compromise, since it is easier to apply automatically to any kind of music.


Correctly used just means not using stupid settings like 96 kbps MP3. Apple's standard of 256 kbps AAC is correct in all cases.


For some definitions of "all" I agree, i.e. for hi-fi quality stereo music delivered to end users where file size and bandwidth are not so important, yes, fine. But in situations where space or bandwidth are limited or expensive, AAC can achieve transparency well below 256 kbps with most music, to most people, in which case 256 would often be overkill, especially for audiobooks, lectures, plays or other mainly-speech content. Conversely, for 5.1 or greater multi-channel audio it's often insufficient.


Sure, I was just talking about stereo music where audio quality is high priority.

For streaming audio, the ideal setting might be different depending on the commercial priorities of the service.

For audiobooks, convenient file sizes will probably be higher priority than maximal compression transparency.

For multichannel audio, you would need to progressively increase the bitrate on a per channel basis.


Yes of course, sorry -- I was kind of in 'reddit mode' and being particularly pedantic about "all cases".

Agree with all your other points, and glad to see others that grok it.


But this still doesn't solve the original issue: if I buy music, it should be me who compresses the files if I need to lose some information, and I should own lossless original copies. And as I also said, bandwidth and storage space are super cheap nowadays, so I really don't see many reasons to use lossy codecs.


If you "buy" music to "own" you are downloading a file, not streaming it live through a web browser which is the topic of this HN discussion.


I store my music to cloud and stream it from there. So yes, I stream flacs anyways :)


They don't have to be conflating those things. For example are you sure Youtube doesn't apply dynamic processing to uploaded files? I wouldn't be surprised if there is some sound processing performed to help make the average phone video sound better.


Depends on the kind of music. If you're listening to the recording of a harpsichordist playing Scarlatti in a room, then you can measure the difference between what you hear through your speakers and what you would have heard sitting five meters away during the performance. And then being more correct means having less difference.


When you refer to what an artist or producer is doing, you may be talking about a different kind of compression (dynamic range compression) than this topic covers (data compression).

I don't want artists or producers having any say in audio encoding formats – because the audio encoding format should be transparent and therefore have zero impact on the actual artwork.


The point isn't about audio quality. It's about having a free and open way to do it at all!

Who cares what uses it has? It's a whole new area of capability. Somebody could find a new use for it. If we limit our choices based on what's currently possible, we get nowhere.


> objectively proven

Links? "subjectively indistinguishable" would be a bit less argumentative



Why re-encode when there's no need?


AAC is mostly Apple stuff and I'm out of that picture. Where is there a music streaming platform that serves Opus? Flac is the best trade-off. I get the original quality, I can store my files with a cloud service and stream the music without using too much bandwidth (it's 2016, after all). I have a seriously good headphone system at work (for the price of a new iPhone), and at least with Style Jukebox I can switch between 320 kbps mp3 and the original flac, where I definitely hear a difference.

There is no price difference when buying mp3 or flac and storage + bandwidth is cheap nowadays. If I need a lossy compression, I want to have a control on the compression parameters.


AAC is part of the MPEG-2 and MPEG-4 standard. You probably listen to AAC-encoded audio far more often than you realise.


But not really with any of the music I buy...


All web browsers support the downloading of FLAC files.


If you can tell a difference between 320 kbps mp3 and FLAC, then your mp3 encoder has a bug.

I promise you that you can't ABX the two.


Mp3 is really useful if I transcode and stream some music while having data caps. But do I really have any valid reasons to archive the music I own with a lossy codec?

And as I said in another branch in this discussion, mp3 + bluetooth compression is pretty awful already.


If you care so much about music fidelity and are using bluetooth, then I'm inclined to say you deserve what you get.

I keep all the music I own in FLAC as well, but have written scripts to transcode to Ogg Vorbis for portability.


Why the comparison to TIFF? If you want to insist on that analogy, are you aware of PNG and UTF-8?


That's the point. The point of the analogy is precisely that there are existing formats that are capable of the same practical quality with much smaller file sizes. PNG is smaller than uncompressed TIFF. UTF-8 is much smaller than UTF-32.
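The UTF-32 half of that analogy is easy to quantify: for ASCII-range text, UTF-32 is exactly four times the size of UTF-8 (using "utf-32-le" here to avoid Python prepending a BOM):

```python
# UTF-8 stores ASCII characters in 1 byte each; UTF-32 always uses 4.
text = "FLAC support in Firefox"
utf8 = text.encode("utf-8")
utf32 = text.encode("utf-32-le")   # -le variant: no byte-order mark
assert len(utf32) == 4 * len(utf8)
```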


I got your point, but just FYI, TIFF supports 16-bit-per-channel images. PNG and JPG do not. I think TIFF might also support floating point images. Those might be useful for raw manipulation and HDR. Safari supports TIFF BTW (or did last time I checked). No idea if there is a way to get access to the high-res data in Safari though.


Actually, PNG supports 16 bit channels (as well as 1, 2 and 4 bits for paletted or greyscale images). TIFF meanwhile is kind of a kitchen-sink container format, so you can do quite a bit of strange things with it (store JPGs inside it, support tiling), but I don't think many viewers support everything in the TIFF standard.

> The standard allows indexed color PNGs to have 1, 2, 4 or 8 bits per pixel; grayscale images with no alpha channel may have 1, 2, 4, 8 or 16 bits per pixel. Everything else uses a bit depth per channel of either 8 or 16.

https://en.wikipedia.org/wiki/Portable_Network_Graphics#Pixe...

> TIFF is a flexible, adaptable file format for handling images and data within a single file, by including the header tags (size, definition, image-data arrangement, applied image compression) defining the image's geometry. A TIFF file, for example, can be a container holding JPEG (lossy) and PackBits (lossless) compressed images. A TIFF file also can include a vector-based clipping path (outlines, croppings, image frames).

https://en.wikipedia.org/wiki/TIFF#Features_and_options
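The 16-bit support is visible right in PNG's file header: the IHDR chunk carries a one-byte bit-depth field. A minimal stdlib-only sketch (field offsets per the PNG spec; this builds just a header, not a full encoder):

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_ihdr(width, height, bit_depth, color_type):
    """Build a PNG IHDR chunk; bit_depth may legitimately be 16 for
    greyscale and truecolor images."""
    # 13-byte body: width, height, bit depth, color type,
    # compression method, filter method, interlace method.
    body = struct.pack(">IIBBBBB", width, height, bit_depth, color_type, 0, 0, 0)
    chunk = b"IHDR" + body
    return struct.pack(">I", len(body)) + chunk + struct.pack(">I", zlib.crc32(chunk))

def read_bit_depth(png_bytes):
    # Bit depth sits at file offset 24: 8 (signature) + 4 (chunk length)
    # + 4 ("IHDR") + 8 (width + height).
    assert png_bytes[:8] == PNG_SIGNATURE
    return png_bytes[24]

header = PNG_SIGNATURE + make_ihdr(640, 480, 16, 2)  # color type 2 = truecolor
assert read_bit_depth(header) == 16
```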


> Actually, PNG supports 16 bit channels (as well as 1, 2 and 4 bits for paletted or greyscale images). TIFF meanwhile is kind of a kitchen-sink container format, so you can do quite a bit of strange things with it (store JPGs inside it, support tiling), but I don't think many viewers support everything in the TIFF standard.

Likewise, not many viewers (and certainly no browsers) support everything in the PNG standard. If you expect more than a simple compressor for some pixels, you're basically SoL when it comes to browsers.

This has come back to bite me (and others) when taking screenshots of old games such as Doom, where the pixel aspect ratio is not identical to the way they were displayed. It would be nice if we could just set the flag in PNG to tell it the display aspect ratio is 4:3, but literally no browsers support that flag. Instead we have to depend on lossy scaling so the pixel aspect ratio is identical to the display one.


Couldn't you nearest-neighbour upscale in a 4:3 ratio beforehand?


HugoDaniel was referring to the fact that PNG is lossless as well. The rest of your point still stands, of course.


This isn't just academic. As OP pointed out TIDAL is not only using lossless but touting that as a key selling point.


So according to you, letting a castle fall into ruin is "good enough" because nobody wants to see it...


That would appear to be overwhelmingly the consensus of people with castles, too. Do they have some moral imperative to pay maintenance forever on something that has no use value and no hope of paying for itself?


This takes so long because once such a standard is introduced, the decision can't be taken back without breaking a lot of things. Vendors don't want to introduce standards which end up being used only by a small number of people while causing a lot of maintenance work, and then be tied to maintaining them anyway.

That is also the reason why these decisions often happen in the same time frame for different browser vendors. If another browser vendor decides to introduce it, you can be much more confident that you won't end up supporting it with nobody using it. And obviously also just to not be left behind by web developers.


Maybe so. The counterpoint is that several APIs added to the web platform in the last ~8 years have already been deprecated: stuff like WebSQL, File System, AppCache; there is no shortage of moving fast and breaking things when it comes to browser vendors.

Something like a codec is comparatively trivial, because as a content server you can never rely on everyone supporting the same codec, so you have to serve multiple formats anyway. When all of those JS APIs that FF or Chrome added were later deprecated and removed, everyone had to re-code their sites. So I'm hesitant to accept 'let's be really careful about this' as a rationale here.


> The counterpoint is that several of APIs added to the web platform in the last ~8 years have already been deprecated: stuff like WebSQL, File System, AppCache

And two of these have been replaced by complex, over-engineered stuff. WebSQL was replaced by IndexedDB, which can't do a simple equivalent of SELECT ... ORDER BY ... GROUP BY ... without ending in callback hell, and AppCache was replaced by ServiceWorkers.


I've seen more animated PNG than FLAC files. Animated PNG got removed from Firefox.


You're thinking of "MNG", which is a different PNG-related animation format than the simpler "animated PNG". MNG was removed, but animated PNG is still supported in Firefox and indeed is used in its UI.


I stand corrected.


No it didn't.


> Does every FF feature has to be 'parity-chrome'?

No, this is "parity-web". Web browsers usually simultaneously decide to implement or enable features. Firefox isn't copying Chrome; they're implementing the same feature at roughly the same time (presumably after discussing it first), so as not to cause further web compatibility rifts.


Fifteen years after the initial release of FLAC – have there been any significant developments in the lossless compression of audio since then?

I know there’s FLIF[1] for lossless image compression and Zstandard[2] for general purpose lossless compression that have recently hit the Hacker News front page. Are their adopted techniques not suitable for audio?

[1] http://flif.info/

[2] https://code.facebook.com/posts/1658392934479273/smaller-and...


Let's see:

- Wavpack [1], which is a rough contemporary but offers three tiers of presets (normal scale, high scale, extra high scale) and an innovative (and optional) lossy/hybrid mode

- TAK [2] which compressed better and decoded faster than either, but was initially closed-source until the dev was persuaded to open it up

- LossyWAV [3], which isn't lossless but chops off least-significant bits, using noise shaping, to pre-process audio so it compresses better when fed to a lossless compressor

Most of these developments were first publicized on Hydrogenaudio. As for innovations in the last two years: none that I'm aware of.

[1] http://wiki.hydrogenaud.io/index.php?title=WavPack [2] http://wiki.hydrogenaud.io/index.php?title=TAK [3] http://wiki.hydrogenaud.io/index.php?title=LossyWAV

EDIT (for some more background): generally in lossless audio compression you use linear prediction to predict an approximate signal for the next few samples, then encode the difference between your prediction and the actual signal with an entropy coder, like Golomb-Rice codes, Huffman, or arithmetic coding. Most of Zstandard's improvements are algorithmic or implementation-related rather than information-theoretic, but the part that could show promise is the tANS entropy coder [4] used in Zstandard. That said, Golomb-Rice codes already perform well for the residuals that come out of linear predictors, so I'm not sure what to expect [5].

[4] https://github.com/Cyan4973/FiniteStateEntropy

[5] 'Benchmarks' section under [4]
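The predict-then-encode-residual idea above can be sketched in a few lines. This is a toy illustration only: it uses the formula of FLAC's "fixed" order-2 predictor, while real codecs also search for per-block LPC coefficients and then Rice-code the residuals.

```python
import math

# Toy sketch of the predict-then-encode-residual idea. The predictor
# here is FLAC's "fixed" order-2 polynomial predictor:
#     pred = 2*x[n-1] - x[n-2]
# Real codecs also derive per-block LPC coefficients, which this skips.
def residuals_order2(samples):
    res = []
    for n, x in enumerate(samples):
        pred = 2 * samples[n - 1] - samples[n - 2] if n >= 2 else 0
        res.append(x - pred)
    return res

# A smooth (sine) signal: samples span roughly +/-10000, but the
# prediction errors stay tiny, so the entropy coder has far less to do.
samples = [round(10000 * math.sin(0.05 * n)) for n in range(200)]
res = residuals_order2(samples)
print(max(abs(x) for x in samples))  # on the order of 10000
print(max(abs(r) for r in res[2:]))  # on the order of tens
```

The residuals are what get fed to the Golomb-Rice (or other) entropy coder; the smoother the signal, the smaller they are.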


Golomb-Rice with base M is prefix-code optimal for an approximately geometric probability distribution Pr(x) ~ sqrt(2)^(-Mx). Arithmetic coding or FSE/tANS would allow using the actual probability distribution. The question is how large the gain could be - how far from the Shannon limit is Golomb-Rice for this specific type of data? If this probability distribution varies, maybe it's worth thinking about adaptive rANS, like in Oodle LZNA and BitKnit: https://fgiesen.wordpress.com/2015/12/21/rans-in-practice/ P.S. Is M fixed or adapting?
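For reference, a Rice code (Golomb with M = 2^k) is tiny to implement. This toy sketch, not any specific codec's bitstream, shows why residuals near zero get short codes and why k has to match the residual distribution:

```python
def zigzag(value):
    # Map signed residuals to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
    return 2 * value if value >= 0 else -2 * value - 1

def rice_encode(value, k):
    """Rice code with parameter k (Golomb with M = 2**k):
    unary-coded quotient, then k literal remainder bits."""
    u = zigzag(value)
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

# Values near zero (typical predictor residuals) get very short codes;
# large outliers blow up in the unary part, which is why k must be
# tuned (or adapted) to the residual distribution.
print(rice_encode(0, 2))        # "000"
print(rice_encode(-3, 2))       # "1001"
print(len(rice_encode(40, 2)))  # 23 bits
```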


Sadly my enthusiasm for compression greatly exceeds my math knowledge.

The linked resources from the RAD Game Tools people are really interesting; they've been pushing the state-of-the-art for many years but mostly avoid the limelight.

I found one paper [1] that muses about switching from CABAC to Golomb-Rice in video compression, and by doing so they reduce decoding complexity while achieving comparable compression efficiency. So I'm not sure if it'd be worth going the other way, and whether adaptive codes are a good fit (for LPC residuals).

[1] http://iphome.hhi.de/wiegand/assets/pdfs/2011_09_ICIP_entrop...

But I almost want to cook up some interactive 'build-your-own lossless codec' testbench where you could pick your transform, pick your linear predictor, and pick your entropy coder, and tinker until you're satisfied with the result.


Would lossless and/or lossy compression algorithms perform better if each element of an audio track were compressed and stored separately (similar to the MP4-based STEM audio format [1], but without its limit on the maximum number of elements, and without the stereo master that increases the file size)? The constituent parts of the track would then be mixed together during playback by the audio player.

[1] http://www.stems-music.com/stems-faq/


We had these a long time ago; 'module files' made with music trackers [1]. And yes, they compress very well.

Unfortunately, the transformations that make up 'mastering' can be pretty elaborate, and for professionally-produced music there is little incentive to let the general public see their project files -- although they are occasionally made available for remixers.

[1] https://en.wikipedia.org/wiki/Module_file


There is also ALAC (Apple Lossless Audio Codec), which has been open source and royalty-free since 2011:

https://en.wikipedia.org/wiki/Apple_Lossless


In all fairness ALAC is very similar to FLAC in its inner workings, but differs in arbitrary, minor ways that result in more complexity over FLAC (and of course, incompatibility) for little gain.

I'm paraphrasing from a post from the FLAC developer himself [1], after someone released an ALAC decoder created through reverse-engineering the format, back in 2005.

[1] https://hydrogenaud.io/index.php/topic,32111.msg279843.html#...


Judging a codec by a reverse-engineered implementation may be very misleading. For example, the original codec may have been written to satisfy a different set of goals, e.g. optimised for energy consumption on a particular ARM CPU.


Doesn't WavPack predate FLAC? Anecdotally, I first came into contact with Shorten (.shn), then a couple of years later, Monkey's Audio (.ape) and WavPack at around the same time, and that was a couple of years before I even heard of FLAC.


WavPack pre-dates FLAC by two years, but it reached popularity after FLAC did. There were many codecs that came out right around 2000; Monkey's Audio, LA, OptimFROG; they all compressed really well but were expensive to encode and/or decode. This was a huge drawback with the hardware at the time.

FLAC achieved some popularity early on because it was cheap to encode and decode, and produced acceptable compression ratios. But its mindshare shot up in 2003 when Xiph.Org announced FLAC was joining their banner of codecs as their preferred lossless offering.

Mainstream interest in FLAC resulted in WMA Lossless and ALAC; that's what people were using. But in enthusiast circles, WavPack became an alternative competitor to FLAC, because it had better compression in its high mode (though not in normal), it was also open-source (which was rare in those days), and its dev was a participant in compression enthusiast communities.

After the FLAC/WavPack duopoly, TTA was a good effort from a Russian team, but it fizzled because it had no compelling differentiator against codecs with better mindshare. The next big news was TAK, though its dev was relentlessly pressured to open-source it.


As much as I like FLAC, I do start to wonder about the necessity of native support for the more niche file formats in browsers, when decoders can be written in JS/asm.js/WebAssembly.

Every natively supported file format adds to the attack surface of the browser - another piece of decoding code running outside the sandbox/VM that JS decoders would be confined to.


> have there been any significant developments in the lossless compression of audio since then?

One thing that is lacking in audio compression is an analogue of the difference between JPEG and MPEG - using past data to predict future data and only storing the difference.

Music has lots of repeating notes and passages. Isolating them (from the other notes played at the same time) and storing only the slight change in how the note was played this time vs. last time should greatly increase compression ratios.

But I have not seen any audio compression that does this.

(Note this applies equally to lossless and lossy compression.)


> using past data to predict future data and only storing the difference

Actually, what you've described is essentially how all lossless audio compression already works. You pre-process the signal so that the 'important' parts don't take much space to store, then you feed it to a predictor, and then encode the difference in a way that doesn't take much space either. You can tune each step to make the next step perform better.

Lossy compression can be made to work similarly, because you can just store a less accurate difference between the prediction and the original.


My understanding is that the predictor only looks at the samples directly before the current one, not the entire file. Delta encoding, basically.

They also (as far as I know) don't attempt to isolate notes or voice phonemes. I.e. arithmetic coding for sound.


The predictor looks at a 'window', which is usually defined as n samples. If you make the window too short, you don't capture any meaningful periodicity; if you make it too long, you sacrifice error resilience and decoding convenience in exchange for the hope of a better match from your predictor.

Speech codecs (lossy) basically operate just like you describe, see [1]

[1] https://en.wikipedia.org/wiki/Vocoder#Modern_implementations


It's a little odd to me that they're adding new codec code in C++ when they have this little memory-safe language in their back pocket waiting to be used for exactly stuff like this.


Rust is still very new. So far the only Rust code added to Firefox was a drop-in replacement for some other code. They haven't written any new features in Rust yet, and most FF devs wouldn't be fluent in it yet.


It seems to me that Rust would be perfect for a tiny, mostly irrelevant, self-contained feature like this. :-)

If it works, you don't need most FF devs to have fluency because the code base simply doesn't need fixing. The API surface is tiny. And the impact of it being broken is pretty low.


I wouldn't be surprised if this has been in the works since before Rust was successfully integrated into Firefox's build system. Now that it has been, I'd expect brand-new development of this nature to start considering using Rust, but I still wouldn't expect 100% of new development to be in Rust because I don't expect FF devs to immediately become experts in a new language before getting work done. And C++ still beats Rust in IDE support, which for many devs might be their most important consideration.


> Rust was successfully integrated into Firefox's build system

It's coming along nicely, but there are still kinks to be worked out. (E.g. Android!)

https://wiki.mozilla.org/Oxidation has details.


What is the point of this, exactly? FLAC is useful for selling lossless audio, since you can re-encode it into any other codec when needed. But to play something online (which browser support implies), you could just as well use a lossy codec like Opus at a transparent bitrate, and save the traffic in the process.

That said, it surely doesn't hurt to have that support in the browser, I just don't see it being very useful.


On decent speakers you can definitely hear the difference when you switch from a lossless to a lossy format.

Blind tests may be confusing if you switch between the formats repeatedly in a short period of time, but if you listen to lossless audio for a long time and then suddenly switch to a lossy format, you can definitely sense the difference.


Are you positive you can tell the way the difference is going, even (especially) after a long time listening?


Well, the very first day when I switched from Tidal HD to Apple Music I could sense the difference. The interesting part is that I didn't really expect any difference, because at the time I thought Apple was using a lossless codec too (ALAC). That made me google it, and I found they actually use AAC, which is why I think there is a noticeable difference.

Now I'm still listening to lossy audio because, for various reasons, I decided to cancel the Tidal subscription. I can't say the difference is really like between SD and HD video, so you don't miss much, but if you get used to lossless audio, chances are you can sense the difference when you switch to a lossy version.


FLAC is useful for listening to high quality music which hasn't been filtered and compressed. It sounds significantly better than lossy audio. Is it so hard to comprehend that, as more bandwidth comes online all the time, people won't want to compromise on sound quality the way they were forced to in the past?


> It sounds significantly better than lossy audio.

Lossy audio can be completely transparent. If you disagree, you need to provide some objective evidence, because all objective evidence points in favour of good lossy compression being indistinguishable from lossless at sensible bitrates.

Look, I've no problem with FLAC being supported, but don't make it out to be "significant" in terms of sound quality. Try AAC at 256kbps (what the iTunes Store has used for the past decade) or better yet, Opus. Do a serious blinded test and amaze yourself with the results.


I've done ABX tests multiple times, and the vast majority of the time AAC at 200+kbps was indistinguishable from the source (CD WAV), and even 128kbps was fine for a lot of tracks. That said, one or two tracks tripped up the encoder and could reliably be distinguished even at 320kbps (IIRC 20 trials with 100% success rate). Not enough that I'd notice in casual listening, but the difference was audible.

You also need to use a decent encoder - I compressed a video using Handbrake with a 256kbps AAC audio track a few months ago (faac/Windows), and noticed immediately that the audio was bad (I initially assumed the source was bad, but the FLAC track in the source sounded fine). Replaced it with a 192kbps AAC track using Quicktime/OSX and the quality was significantly better.


Yes, ALL lossy codecs have some "killer samples" -- the source material always matters.

It's a sad fact that there are so many sub-par AAC encoders around. Best to use either Apple's implementation or one of those from Fraunhofer -- FhG AAC (the free version distributed with Winamp), or the open-source FDK-AAC that was developed for Android.


opusenc is quite decent, and not patent encumbered unlike AAC encoders.


Yes, Opus is fantastic -- I've been following its development since the start due to a telecoms background.

For those of us who care and have software that plays it, it's great, but AAC is still the next best thing for compatibility on a wide range of devices (whilst still beating MP3 for sound quality and file size), which makes ad-hoc sharing with non-techies possible.

Hopefully that will continue to improve as it grows in popularity.


For sharing, it's better to use FLAC originals anyway. For playback, Opus is supported by Rockbox, which is useful on Sansa players if you need something very portable (that's what I use sometimes), and it's supported on Android too. On desktop systems it's not an issue: any decent cross-platform player can play Opus.


It all depends on how sensitive you are to these things. I've done such blind tests, and to my ears the fall in quality was significant, even painful.


If you had done such blind tests, you would have reached a different conclusion. Or, in the alternative, your hearing is orders of magnitude "better" than anyone else ever tested.


I think you're confusing lossless vs lossy with 192kHz vs 44.1kHz - in that case I agree with you, no perceivable difference.

I can also hear differences between lossless/lossy encodings. If I tweak encoding parameters until I don't notice a difference, there is usually no real space saving afterwards - so why not go with lossless instead?


I'm not confusing these matters. I'm specifically talking about lossless vs lossy. The key point is I'm talking about nominally transparent lossy, which for codecs like AAC and Opus is around 160-256 kbps depending on the listener. Even at 256 kbps, that's anywhere between half and a quarter the bitrate of FLAC. (Which has an average bitrate around 700 kbps but can swing wildly up or down depending on the specific material.)


Blind tests comparing lossless against what codec at what bitrate? How many runs and how often were you able to correctly identify lossless?


Audiophiles believe a lot of things with no evidence. How else could you sell them cables costing thousands of dollars? Reminds me of homeopathic medicine. The only reasonable use case for FLAC is archiving music you expect may need to be reencoded later.


Firstly, on a personal note, I must say getting heavily downvoted on a factual statement and not an opinion is a new and confusing experience for me.

My comparisons were made several years ago using LAME 320 CBR, LAME VBR, OGG, lossless (WAV) 16bit 44.1kHz and lossless (WAV) 16bit 48kHz. I believe at least most professional musicians and sound engineers would be able to identify the difference between all of these; while they might not always be "worse", their sonic character is certainly different.


FWIW I didn't downvote you, and I don't think you should be downvoted. But as sjwright points out: if you did indeed make all of these blind comparisons, and were able to reliably spot the difference between LAME 320 CBR and lossless, and between 44.1kHz and 48kHz - so clearly that you would find the lossy formats jarring or painful to listen to - then, given all the published research on this topic, you would have to have superhuman hearing, since most professional musicians and sound engineers in fact cannot. So, to put it very bluntly, there's a concern that you might be selling bullshit.


Exactly. Given how much we know about psychoacoustics, and considering how often people who claim to hear dramatic differences end up failing the most basic ABX tests, it's just not a credible assertion without robust supporting evidence.


ABX tests put the senses and short-term memory under stress, while users are reporting their long-term feelings about music. It's like trying to spot a difference between fruits of the same kind, while the consumers' real problem is a lack of some nutrient.


Why bother with various tradeoffs if you can get the best (i.e. lossless)? You can make the same argument about video technologies (e.g. HDR). The truth is that people usually adapt, so "painful" audio or video quality stops being painful after a while.


If it was even remotely painful, it would sail through an ABX test.


It may sail through a months-long ABX test.


Artefacts that can only be discerned after months of sustained listening? Sounds like you're clutching at straws.


Not an artifact, but a loss of variance in the music. The same patterns activate the same neurons, so they get tired.


"Better" and "best" imply a detectable difference, and the point of a transparent lossy codec is that no such difference should exist.


I confirm that I feel tired when listening to low-quality audio, even when I cannot spot any audible problems with it. It's bland.

I.e. I will definitely fail an ABX test, but I still feel a difference. IMHO, ABX tests are the wrong tests for this problem.


If you fail an ABX test, then almost by definition the difference you feel is not due to an actual difference in quality. Maybe you feel a difference because, outside an ABX test, you know when you are playing lossless files? That's fine, but it's not the same thing.


IMHO, a lossy codec just removes some variation from the music, so some nerves in the ear are activated more often than others, and they get tired.


> FLAC is useful for listening to high quality music which hasn't been filtered and compressed. It sounds significantly better than lossy audio.

Doubtful. See https://people.xiph.org/~xiphmont/demo/neil-young.html

TL;DR: encode above the transparency level when using the lossy codec, and there won't be an audible difference from lossless playback. But again, that's only for playback. As soon as you want to re-encode anything, there is no substitute for the lossless original.


That article is almost entirely about how it's useless to encode audio at excessive sample rates and bit rates beyond the human ear's capacities—not about lossy compression. It does claim that modern lossy compressors are good enough, but admits that there can be reason to prefer lossless distribution since there is then no need to trust the distributor to use a good encoder with correct settings.


Yes. The point is not that the distributor can fail to use a good encoder (anything can happen), but that they can use a good one if they want to, and the audible result will be the same as lossless.

When I buy music, I always try to buy it in lossless FLACs anyway. And then I encode it to Opus for playback. But for listening to something on-line, Opus would do just fine to begin with.


I can't for the life of me think of a phrase that gives me Google search results even remotely related to the topic, but I am pretty sure I once found a study saying that consciously inaudible differences in audio due to compression still made a difference - subjects who got the compressed sounds tired of hearing them more quickly. Having taken a bit of neuroscience I can easily accept that - but I did not take enough (ns) to say for sure that it is so. Merely asking people about anything is not actually objective. A blind test in which people try to tell two songs apart (lossless vs. compressed) is more objective than asking about qualitative measures, but if such a long-term effect exists, even a blind test would not be a reliable way to settle the "lossless vs. compressed" question.

So could anyone confirm or deny such a study and the mechanism exists from a basis of actual knowledge?


That falls into speculation territory unless it's actually substantiated with serious studies. I'd be interested in it, if you can find a source. There are too many hoaxes going around regarding audio not to be skeptical.

Here is one read on this: http://www.skeptic.com/eskeptic/10-01-06/


    > That falls into some speculation area, unless actually substantiated with serious studies.
I made that quite clear, and I asked for the latter in the prominently placed last sentence. It's the main point of my post, as I think I made clear. I really don't know what more I could/should have done apart from dropping the question entirely, which I don't think is fair - or useful?


You said you saw some study, but can't find it now. I guess if it's a known issue, it will come up eventually.


This sounds familiar. It brings a few things to mind.

I can't remember the term that was used, but I remember reading about audible frequencies possibly being affected by inaudible frequencies in ways that are perceptible to humans. So if you have two source files played using the same equipment with one including the inaudible frequencies then they will sound subtly different due to the interaction.

With the 'getting tired' while listening thing, I know just what you mean. Listening to music on a laptop for instance is a mentally draining experience for me. I've heard it explained like this: your brain knows what a piano sounds like and when it hears the poor imitation it is busy 'filling in the blanks'. Listening on good equipment is much more relaxing - as is listening to something that isn't 128kbps.

I don't claim to know that either of the above are true, but to me they are plausible.

I'm also a little sceptical of the 'lossless is no better than good lossy' claims that inevitably come up in these discussions. While I accept that there is likely a point where the audible differences between lossy and lossless become imperceptible, I've been hearing those claims for a long time (starting with "it's digital - it's CD quality"). When I was seriously interested in these things there most definitely was a noticeable difference. I was right about 128kbps. I'm confident I was right about 256kbps. Maybe with 320kbps that's changed now, but I don't know, as I haven't had a decent stereo for some time. I'm not about to be convinced by those blind test studies that people keep pointing to as objective and conclusive - they always make me think of the one that 'proved' cheap wine is just as 'good' as expensive wine.

I'll stick to lossless when possible and compress to lossy when necessary. Can't go wrong like that!


Yes, I agree. I just think I better point out - for other readers - that my point was made for when people are unable to hear a difference even in blind tests. It was about a longer-term effect that is not part of the direct listening experience. Even if my memory was correct this effect would not change people's difficulty (or at some point, inability) in telling different sound sources apart.


My music collection is all in FLAC, and encoding it again would be a waste of space and/or resources. Streaming FLAC is completely fine for me, as I don't have bandwidth caps or such.

I can see myself building a streaming service for myself to listen to my collection while I'm on the move.


> My music collection is all in FLAC, and encoding it again would be a waste of space and/or resources.

Why so? My collection is also in FLAC, but I always encode it to Opus for actual playback. Hard drive space is cheap, but FLAC will bloat your mobile storage, so the same rule applies: for playback, Opus works perfectly; for re-encoding, FLAC is required.


It depends what your needs are. I only even listen to my collection of music from my NAS via (W)LAN connected devices, so maintaining a version more suited to mobile devices isn't necessary or useful to me.


Yes, of course. I like using portable players, so FLAC is counter productive there. I see no point in streaming anything if you can just put files on your player and not depend on any connection.


> I see no point in streaming anything if you can just put files on your player and not depend on any connection.

Sure, if that works for you, but my collection is too big to fit on my mobile device (even after encoding it in Opus instead of FLAC).

I suppose my use case is rather rare, but even then, why not give the ability?


I imagine now a music store can sell FLAC-only music tracks, and provide FLAC-only stream previews. No need to re-encode anything.


Another possibility: online libraries, such as archive.org, could store FLAC only to save space and still provide in-browser streaming.


That makes some sense, yes. But so far I haven't seen any stores which sell only FLAC.


That's a chicken and egg problem; if the store only provides FLAC, the browser can't play it, so the potential customers can't try it, so they don't buy it, so the store has to provide non-FLAC alternatives, so there is no need to provide FLAC decoding in the browser, etc... Somebody has to make the first step, and I'm happy Firefox did this.

Bandcamp is one of the few stores that does sell FLAC, maybe they'll be the one going that direction.


Hardwax [1] sells AIFF, which can be repacked into FLAC. No metadata though, which is kind of a bummer. Beatport [2] offers WAV and AIFF as well.

[1] https://hardwax.com/ [2] https://www.beatport.com/


Meanwhile, the ChromeOS release of Chromium has had support for FLAC for a long time, but it's still feature gated on mainline Chrome.

4 year old issue: https://bugs.chromium.org/p/chromium/issues/detail?id=93887


Shouldn't casting FLAC on Chrome to a Chromecast be a priority? I may be a curmudgeon, but I have way too many CDs in storage in those 80 litre Rubbermaid bins that I only ever see as FLAC files on the media server.


The Chromecast has been able to receive and play FLAC since 2015 (I can't find exact dates in any release notes, but see [1][2]).

This is about adding audio/flac decoding to Chrome; meanwhile the Chrome -> Chromecast communication is entirely proprietary (notwithstanding the APIs), so Google controls both ends. Since so much of the Chromecast model is about handing off content [3], there's little need for Chrome itself to handle FLAC when casting, because the Chromecast can just be told to load the native (original) stream itself.

[1] https://www.reddit.com/r/Chromecast/comments/3n1to8/chromeca... [2] https://www.reddit.com/r/Chromecast/comments/3n3epm/flac_is_...

[3] (Annoying ad warning but good content) http://www.digitaltrends.com/computing/chromecast-features/


Thanks for bringing me up to date. (1st gen Chromecast)


For other browsers, you can use flac.js: http://github.com/audiocogs/flac.js.


I never understood why flac isn't more widely supported.


As an advocate of FLAC, I'd say it has limited appeal beyond hi-fi geeks, archivists, and music makers. Heck, even storing local MP3 files is dated (not that I'd recommend abandoning them) given the proliferation of comprehensive streaming services (no endorsement on my part).


Well, you can "stream" FLAC from services like Tidal (which I'm guessing makes audiophiles barf), but they're not doing so well.


You can buy music in FLAC format from Bandcamp, which is probably the number one music site for independent musicians.

https://bandcamp.com/

https://en.m.wikipedia.org/wiki/Bandcamp


Juno music also allows for FLAC download purchases.


DRM? The project discourages implementation of DRM:

https://xiph.org/flac/developers.html

>Anti-goals - Copy prevention, DRM, etc. There is no intention to add any copy prevention methods. Of course, we can't stop someone from encrypting a FLAC stream in another container (e.g. the way Apple encrypts AAC in MP4 with FairPlay), that is the choice of the user.


The existence of DRM in a format is usually the reason something doesn't get supported, usually due to licensing or patent issues on the DRM, if not because of philosophical/political/social issues around supporting DRM itself.

Do you mean to suggest that Mozilla would be/should be/are subject to third-parties who want to restrict access to only DRM-supporting formats (I honestly don't know if they are, but it seems unlikely)? It's not like Mozilla has a music store and needs access to publishers/distributors that will only get on board if there's DRM. Support for more formats can only serve to help Mozilla's users and their market penetration (even if supporting more formats is more of a burden, development- and support-wise).


Hasn't it been years since DRMs were last used for music?


I think it's overkill for some use cases where lossy compression does a sufficiently good job. Most people can't consistently distinguish between high bit-rate lossy encodings and their original source material most of the time. Streaming FLAC will gobble a lot of bandwidth, and some people might find that to be a problem, especially on a capped data plan.


This answer is just being contrary to be contrary. 'Most people' can't distinguish 1080p from 4k from 6 ft away, yet people still stream 4k content, which 'gobbles bandwidth' too.


>'Most people' can't distinguish 1080p from 4k from 6 ft away

6ft away (and screen size) being key. From up close, everyone can and the difference is obvious. Whereas with lossy audio, a well encoded track will sound transparent to ~99% of people, no matter how they listen or what equipment they use.

Digital audio is pretty much a solved problem in terms of transparent encoding. Consumer digital video still has a long way to go.

(That said, I do support the use of FLAC, just for that extra safety and because it doesn't really cost too much.)


I'd say that "most people can't tell the difference between 1080p and 4K" is inaccurate. I'm always seeing people look at 4K video after watching 1080p content and remark how much crisper it is. Of course, for a very large majority of people, what they have is good enough. When the iPhone went Retina, people argued that the extra pixels wouldn't make a difference. Then they tried it. Now look where we are.


I found it a perfectly valid comment: it illustrates the trade-offs made by developers who have to code in support for the format, by content streamers debating their codec choices, and by consumers choosing their preferred format. Historically, lossy has been 'good enough' for streaming for reasons other than lack of support for anything else.


On YouTube the 4K stream has a higher bitrate, which means that even if you're watching it downscaled to 1080p the quality will be far better.


I don't see why people would want to use FLAC. The file sizes are huge and the audio quality difference is barely noticeable compared to an encoding like AAC.

Or am I missing something?


> The file sizes are huge

35-70 MB is not that huge, especially if you have a gigabit Internet connection. And if you have a good pair of headphones you can hear the difference between FLAC and AAC.


Yeah. Even with a 100 megabit internet connection the size does not matter at all. Playback starts instantly when you push the play button, and it still uses less bandwidth than a normal YouTube video.


I've never encountered this format. It was never relevant for me. I know about it but that's all.

Meanwhile there are other formats that seem to be more important. What about WebP?

At work we are currently developing a kiosk system based on a big ass touch screen in UHD running in Google Chrome. I suggested switching to WebP for the pictures and it is saving a lot of bandwidth compared to JPEG.


Incidentally Firefox is also adding support for WebP. Posted on HN 12 days ago: https://news.ycombinator.com/item?id=12338170

Bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1294490

WebP is far from state-of-the-art but outperforms JPEG because JPEG entirely lacks a filtering step (where adjacent pixels are run through a delta coder). An alternative is JPEG squishers like Dropbox's Lepton [1], which retrofits this deficiency in a clever meet-in-the-middle way, serving as an additional lossless compression layer while decoding back into a perfectly normal JPEG.

[1] https://blogs.dropbox.com/tech/2016/07/lepton-image-compress...
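The benefit of a filtering/prediction step is easy to demonstrate with a toy sketch (this is the general idea only, not WebP's or Lepton's actual algorithm): a slowly varying byte stream has no literal repeats for a generic compressor's LZ stage to find, but its deltas come from a tiny alphabet that the entropy stage handles very well.

```python
import random
import zlib

# A random walk as a stand-in for "smooth" data (e.g. a gradient):
# raw values wander over a wide range with no exact repeats.
random.seed(1)
n = 4096
walk = [128]
for _ in range(n - 1):
    walk.append((walk[-1] + random.choice((-1, 0, 1))) & 0xFF)
raw = bytes(walk)

# "Sub"-style filter: keep the first byte, then store each byte's
# difference from its predecessor (a 3-symbol alphabet here).
delta = bytes([raw[0]] + [(raw[i] - raw[i - 1]) & 0xFF for i in range(1, n)])

print(len(zlib.compress(raw, 9)))    # compresses poorly
print(len(zlib.compress(delta, 9)))  # much smaller after filtering
```

The filter is trivially reversible (a running sum), so nothing is lost; the gain comes purely from reshaping the data to suit the backend compressor.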


The most exciting parts of WebP, for me, are alpha transparency in lossy images, and animation support. This means transparent photos won't need to be in PNG, and perhaps animated GIFs could be replaced with a more efficient format (while still working in <img> tags and retaining their infinite loop when downloaded).


FLAC is very important indeed if you care about lossless music at all. Basically, when you buy a CD and want to preserve it digitally in its original quality, you go FLAC.


But where is my IPv6 link-local address support?


awesome!


Web browsers prove once again that they are still the new Emacs.



