What’s wrong with digital audio? According to many long-serving (suffering!) audiophiles, it’s digititus – a subtle sense of nervousness and agitation felt whilst listening to digital playback: music converted in the studio to digital and later decoded back to analogue via CD player or DAC.
According to MQA’s Bob Stuart, the brickwall filters in a studio’s A-to-D converter smear the time-domain accuracy of the incoming analogue signal as it is encoded to digital. We humans are apparently more sensitive to a filter’s pre- and post-ringing artefacts than previously thought. (Perhaps this is why many listeners prefer hi-res content: not necessarily because of its higher sample and bit rates, but because a DAC filter’s ringing takes place further above the upper limit of the audible band than with Redbook content?)
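The ringing Stuart describes is easy to visualise in code. Below is a minimal, illustrative sketch – not MQA’s own analysis – of a conventional linear-phase ‘brickwall’ low-pass filter built from a windowed sinc. Its impulse response rings symmetrically before and after the main peak; it’s the pre-ringing (energy arriving before the event itself) that the time-domain argument centres on. The function name and parameter choices here are mine.

```python
import math

def windowed_sinc_lowpass(num_taps=63, cutoff=0.5):
    """Linear-phase windowed-sinc low-pass FIR.

    cutoff is a fraction of the Nyquist frequency. The ideal 'brickwall'
    low-pass is a sinc function, which rings on BOTH sides of its peak.
    """
    mid = num_taps // 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        # Ideal sinc response (value at the centre tap is just `cutoff`)...
        h = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        # ...shaped by a Hann window to bound the ripple.
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h * w)
    return taps

taps = windowed_sinc_lowpass()
mid = len(taps) // 2
pre_ring = sum(abs(t) for t in taps[:mid])       # energy BEFORE the main peak
post_ring = sum(abs(t) for t in taps[mid + 1:])  # energy after it
```

Because a linear-phase filter is symmetric, the pre- and post-ringing energies are identical – and it is that pre-echo, Stuart argues, which has no counterpart in natural sound.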
In order to rid us of this time-domain-blurring digititus at both ends of the chain, Stuart has established MQA (the company) as a vehicle for promoting and implementing MQA (the technology). Talking to the softly spoken Brit at length and long distance from Australia to the UK, I learnt that MQA is the culmination of several decades of R&D. Stuart’s MQA mission? To design and implement an encode/decode process that offers end-to-end control: a) the studio digitally signs its final master (via an MQA workstation plug-in); b) when an MQA-certified playback device sees that signature, it triggers an LED that proves the authenticity of a ‘studio quality’ MQA master. Ergo ‘Master Quality Authenticated’.
According to Stuart, this goes some way to solving the provenance issues that tarnish the reputation of hi-res download stores; any digital download vendor attempting to sell down- or up-sampled MQA content would be found out – the end user’s LED would refuse to light up. MQA makes crystal clear which music has remained untouched since leaving the studio.
To Stuart, provenance is more important than file resolution. Such thinking seems to be ringing bells with those responsible for studio archiving and digital distribution. Through an intense travel schedule this past year, most extensively in Japan and Germany, Stuart has gained the ear of numerous studios, labels and digital distribution companies.
The Japanese market is reportedly the most promising territory of all: its listeners are much more into jazz/classical, have a greater interest in hi-res audio, a sufficiently robust obsession with new technology and the bandwidth to supply it. Paradoxically, what Japan doesn’t yet have is a music streaming service.
At the other end of the playback chain, a handful of DAC manufacturers are now officially MQA accredited. Stuart’s software gets embedded on a chip that stirs MQA’s special sauce into the digital signal immediately prior to signal decoding. However, with D/A converting silicon coming from a range of companies – Burr-Brown, ESS, Cirrus Logic, AKM etc – the MQA code must be customised accordingly. In order to prevent signal blurring, it must be tuned to each make/model of downstream DAC chip.
This is why we don’t yet see MQA-accredited streamers that only output (digitally) to a user-selected outboard DAC box – how does the MQA software know to which DAC chip it is talking? Stuart says an adjustable MQA software decoder will come “eventually”.
Back to the studio. More widely touted thus far by the audiophile press has been MQA’s ability to zip hi-res audio files into containers that measure either 24-bit/44.1kHz or 24-bit/48kHz. No matter how large the sample-rate of the studio master, MQA can fold the hi-res portion of the file (that above 44.1kHz or 48kHz) into the inaudible ‘dead space’ beneath a 24-bit file’s noise floor. That’s good news for a digital audiophile’s download quota, especially on mobile devices, and even better news for those using a lossless streaming service like Tidal Hifi (discussed here). We’ll come back to Tidal shortly. In this MQA-shot YouTube video, Bob Stuart explains the techier side of hi-res folding:
An MQA-accredited DAC will render and decode the entire MQA-wrapped hi-res file but only as far as its own sample-rate ceiling will allow. For example, should an MQA app exist for the iPad, up to 24-bit/96kHz is possible, whereas something like Mytek’s forthcoming Brooklyn will decode and render up to 24-bit/192kHz.
A DAC not equipped with MQA software will still read the portion of the MQA file fenced off by 24bits and 44.1kHz – as it would a normal PCM file – but it won’t carry out the hi-res unfold or rendering.
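MQA’s actual encapsulation is proprietary, but the arithmetic behind ‘folding’ is simple to sketch. The toy below – emphatically not MQA’s real scheme, which is signal-aware and far more sophisticated – only shows how a 24-bit sample leaves room beneath its noise floor: the least-significant bits can carry other data while a legacy DAC plays the sample almost unchanged. The constant `FOLD_BITS` and both function names are my own inventions for illustration.

```python
FOLD_BITS = 8                 # assume the bottom 8 of 24 bits sit below audibility
MASK = (1 << FOLD_BITS) - 1   # bitmask selecting those bottom bits

def fold(sample_24bit, payload):
    """Overwrite the bottom FOLD_BITS of a 24-bit sample with payload bits."""
    assert 0 <= payload <= MASK
    return (sample_24bit & ~MASK) | payload

def unfold(folded):
    """Split a folded sample back into its coarse PCM part and the payload."""
    return folded & ~MASK, folded & MASK

sample, hires_bits = 0x123456, 0xAB   # arbitrary example values
folded = fold(sample, hires_bits)
coarse, recovered = unfold(folded)
```

A non-MQA DAC simply converts `folded` as ordinary PCM – the buried bits register only as low-level noise – while an aware decoder separates the two parts and reconstructs the hi-res band.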
Decoding and rendering – we should take a moment to define and separate these two processes as they relate to MQA. Decoding refers to the first step: recognising the incoming stream as an MQA file and unfolding any hi-res content (should it exist). Rendering is where the MQA software optimises the signal immediately before it is converted to analogue, thus mitigating the DAC chip’s potential to blur time-domain information.
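The playback rules above can be condensed into a few lines. This is a hypothetical model of my own reading of the process – the function, its parameters and the rate logic are not MQA’s published API.

```python
def play(file_rate_khz, master_rate_khz, dac_max_khz, dac_has_mqa):
    """Model which stages run and the final conversion rate for one stream."""
    if not dac_has_mqa:
        # A non-MQA DAC reads only the baseband PCM portion: no unfold,
        # no rendering, playback at the file's container rate.
        return {"decoded": False, "rendered": False, "rate_khz": file_rate_khz}
    # Decode: recognise the MQA stream and unfold hi-res content,
    # capped by the hardware's own sample-rate ceiling.
    rate = min(master_rate_khz, dac_max_khz)
    # Render: tailor the signal to the specific DAC chip just before D/A.
    return {"decoded": True, "rendered": True, "rate_khz": rate}
```

Running the article’s own examples through it: a 192kHz master through a hypothetical 96kHz-capped iPad app unfolds to 96kHz, through a 192kHz-capable Brooklyn to 192kHz, and through a non-MQA DAC it stays at the container’s Redbook-style rate.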
What news of MQA-encoded content? During this year’s CES, Norway’s 2L Recordings announced its first selection of downloadable MQA titles: 125 albums, personally overseen by Morten Lindberg. Translation: beautifully recorded music, but of a more audiophile aesthetic. Jack White he is not.
That same week, the UK’s 7Digital proffered ‘indie label’ MQA content via a download store that promises “hundreds of recordings” out of the gate and “thousands within the first few months”.
It’s hard then to blame skeptics for eyeing MQA as yet another new file format set to follow in the footsteps of DSD: loud-hailing of its next big thing status; message amplification by a handful of influential reviewers; ongoing promises of a broader catalogue (that’s forever just around the corner); a rush by manufacturers to include full compatibility so as not to risk losing sales.
Does past experience not tell us that it’s content availability – backing by rights holders – that ultimately determines a format’s success, especially in realising adoption beyond the walls of the niche that provided the initial push? This publication isn’t aimed at those who would determine what music to listen to on the basis of its delivery format. I sincerely hope you punch play on an album because you like that album, not simply because it sits on your server in DSD (or hi-res PCM). If it just so happens to be available to you in a better quality package, all the better.
On the other hand, audiophilia isn’t “all about the music” either. Hardware matters – it helps us realise the potential of superior recordings even if that means enduring the simultaneous exposure of inferior recordings. We must each find a system sweet spot that maximises our enjoyment of listening to the music we like most. For yours truly, that means pursuit of better sound that continues to extract more enjoyment from artists like The Hold Steady, Bowie, Nick Cave, Tom Waits, McLusky, Built to Spill, Aphex Twin, Zomby and Plastikman…but not sound so revealing that any studio-determined shortcomings begin to subtract from the enjoyment factor. A balance must be struck.
I carried this viewpoint, underscored by a previous MQA experience at last year’s Munich HighEnd show, into MQA’s Venetian Hotel suite at CES. There I spent an hour talking all things MQA with Bob Stuart’s off-sider, Spencer Chrislu.
Few audiophiles will welcome the news that they’ll need a new DAC or DAP to reap the benefits of MQA-encoded material, especially with there being so little of it available for download. After all, it’s only new material encoded at the studio that stands to benefit, right? Wrong.
This is where MQA gets really interesting. It isn’t only freshly recorded fare that sees sonic elevation from MQA. All recorded music, past and present, can potentially – allegedly – sound better with MQA applied. Stuart and his team claim that any existing digital file can be processed via the MQA algorithm to correct (‘de-blur’) any temporal damage wrought by the original ADC. That means better sound across the board for MQA-processed content, whether played back on an MQA-equipped DAC or not. Let that sink in for a moment.
How does this work? If the original ADC is known, it can be measured and mathematically modelled, and that model fed into the MQA algorithm used to process the file. If the original encoder is not known, the digital file can be pre-processed for a best guess as to the original ADC chip’s identity. Stuart says that ADCs used in the early days of CD were particularly poor and that we can expect MQA to bring some fairly substantial improvements to the table.
Our man from Huntingdon refers to this process as fingerprinting. And every piece of hardware used to get the music from studio to listener can apparently be MQA fingerprinted. Remember how any MQA software loaded into a certified DAC must be tailored to fit the adjacent DAC chip? Each make and model of chip is fingerprinted so that MQA can optimise sound quality.
Recapping: MQA DAC or not, MQA-processed source material promises better sound quality than that which hasn’t been run through the algorithm. Add an MQA DAC to your system for a further improvement. If true, does this not render my 10,000-strong CD collection ripped to FLAC second rate? Does this mean I gotta buy it all over again?
For those listeners like me who are passionate about improved source file quality extending beyond the usual array of audiophile-centric titles, MQA has the potential to be a very big deal indeed.
No doubt this is why Chrislu told me at CES that MQA are not interested in becoming a plaything for the audiophile niche. They are seeking much broader adoption of their (de-blurring) encapsulation process, one that can be authenticated by any suitably MQA-equipped end user.
My opinion, one shared by an already-MQA-certified DAC manufacturer, is that such broader adoption hinges initially on Tidal flipping the switch on MQA-processed content; which means both hi-res and revved-up Redbook. Rumours suggest that Tidal are almost ready to roll but not before labels sign off on its licensing.
The number of hi-res titles to be streamed by Tidal (that will travel in 24bit/44.1kHz or 24bit/48kHz containers) will likely be ultimately dwarfed by the number of MQA-processed Redbook releases.
Bets have been hedged with cautionary language in this piece – e.g. “allegedly”, “potentially” – because whilst I’ve heard a few MQA demos that did indeed sound downright marvellous – Bob Dylan’s “Don’t Think Twice (It’s Alright)” and The Doors’ “Riders On The Storm” in Lenbrook’s CES demo space and a 2011 Wilco cut (from The Whole Love) on Meridian actives in MQA’s own suite – I am unable to state with any authority just how much better MQA sounds than music encoded/decoded using traditional methods. There was no A vs. B.
With high street stores closing at a rate of knots and single-person dwellings and Internet usage moving in the opposite direction, is it any wonder we see so many folks willing to let their prejudices flow prior to a full understanding and/or first-hand experience? I’m reserving judgement until an opportunity presents for an A/B proper and I’d urge readers to do likewise.
It remains to be seen just how MQA intend to win hearts and minds beyond Tidal Hifi subscribers. Perhaps they have plans for the considerably more popular lossy streaming user base? Spencer Chrislu alluded to as much during our CES meeting (but I remain hazy on the specifics). Perhaps only when MQA usurps Ogg Vorbis, AAC or MP3 as the transmission method of choice for Spotify, Apple Music and Pandora will MQA be a game-changer in the truest sense: that is, improving sound quality beyond the audiophile space.
That said, the theory remains compelling. Those dropping cash on an MQA-compatible DAC stand to gain the biggest audible lift from MQA-d music but even those sticking to their existing non-MQA DAC should hear an improvement. MQA potentially offers something to everyone. How very democratic.
Further information: MQA