It’s not what you know. It’s what you can prove. Hello prosecutor for the Crown or State who must present his case beyond reasonable doubt for a jury to convict the accused. Motive, opportunity and means will do the talking. The investigators are tasked to follow the evidence wherever it may lead. A cop show might include planting evidence to tie up what the investigators (think they) know but can’t prove. Such a show might also depict how, whenever cops like someone too much for a certain crime, their focus can overlook evidence to the contrary. Their approach of making the evidence fit a foregone conclusion renders the pursuit of multiple lines of inquiry a mockery. Really, these investigators are doggedly going down one path and one path only.
What’s any of this got to do with hifi reviewing? Let’s see. Prosecutors and police work within the law. By that standard, reviewers are outlaws. While they work, nobody looks over their shoulder. Whatever laws they follow deal in self-assessed fairness, competence and honesty, not jurisprudence. Reviewers are accountable to themselves and their grasp/interpretation of professional duties. They don’t work with a partner on the beat or in a patrol car. When they interview a suspect in interrogation room two, there’s no video recorder present, no attorney, colleague or other witness. Reviewers aren’t accountable to any oversight committee, internal affairs or a complex system of legal and procedural checks and balances. Their far simpler checks and balances are the editor/publisher, proofreader and rules of conduct set forth by the organization they write for. “Thou shalt not hack-saw open a terminator box on a cable to see what’s inside. Thou shalt not claim a loaner delivery as stolen from your front porch then sell it off on the side.”
Still, thinking about a connection between prosecution/defense + jury verdict versus hifi performance reviews makes for an interesting thought experiment. In those terms, a reviewer taps multiple roles. He/she plays investigator to gather facts for evidence that determines the case. He/she plays expert witness whose prior experience and understanding are called upon to make predictions, confirm overlap or divergence with similar cases and explain esoteric matters in layman’s terms. He/she plays prosecutor and defense attorney with pro/con arguments relative to the suspect aka review subject. He/she finally plays judge and jury by handing out a verdict. Innocent or guilty?
Hello mega gap. If an innocent verdict were to equal a rave and a guilty one a scathingly negative review, most reviews are neither. They mix guilt and innocence in myriad ways that render an either/or verdict moot. In court, an innocent verdict means the accused goes free. A guilty verdict means a punishment like a jail term, restraining order, fine or financial restitution. It’s an area where reviews and court cases share little common ground.
Things get more interesting in the areas of evidence collection, expert witness testimony and the expanded role play which leads to our form of judgment in a review’s conclusion or certain star or point system. How do reviewers collect their evidence? Here we distinguish between performance specs, subjective sonics and, tangentially, how the two intersect. For specs, many reviewers will rely on what manufacturers provide them with. That’s the “I swear to tell the truth, the whole truth and nothing but the truth” approach of taking claims at face value. Others don’t extend blind trust and so conduct their own measurements. Their accuracy depends on the measuring hardware, then the tester’s procedure, consistency and ability to interpret in layman’s terms what the measurements mean to a user.
For sonic evidence, we again have two chief approaches: comparisons and none. For comparisons, the reviewer is limited to what’s on hand at any given time. That may or may not be competitively priced. If there are no comparisons, a reviewer is expected to judge—and believes him/herself capable of judging—a component purely on its own merit. Meaningful descriptions however rely on being specific, not generic. “The X-Factor DAC cast an enormous soundstage with excellent image focus and had terrific bass.” What’s enormous? What makes image focus excellent, bass terrific? Relative to what? The writer’s mental reference? If so, how can readers know what that is? “The X-Factor DAC just sounded like music and made me forget all about hifi.” Isn’t that like a restaurant reviewer writing that potatoes tasted like real potatoes and made him forget all about being in a restaurant?
Reviewers who find that approach guilty insist on hardware comparisons to make their descriptions relative and thus more relatable. “The X-Factor DAC cast a wider stage than the Subito DAC, its images were a bit more diffuse and bass was tighter and more damped.” Such descriptions work on their own or may be tied to specific music examples. Those become extra useful when embedded as YouTube, Bandcamp or similar tracks so the reader can listen while reading.
What type of descriptions are made depends on a reviewer’s sensibilities. A professional or amateur drummer will have a particular aptitude for transients, damping, the power of a kick drum, the overtone signature of cymbals and general performance aspects related to rhythm and timing. A professional or amateur violin player will be especially sensitive to tone modulations, timbre and performance aspects related to bow and fingering techniques. A singer might zero in on intonation, breath control, micro-dynamic nuances, vibrato execution and performance aspects related to melody and phrasing. It’s intuitive, normal and predictable that how an electric bass player listens to music and what music he listens to will differ from that of a classical chamber-music aficionado. Likewise for a reviewer majoring in opera over HipHop.
All of it creates perspective. It builds an approach that influences vocabulary and trigger points. There really is no one-size-fits-all procedure by which reviewers collect and present their evidence. Next comes experience, which exposure informs. To judge properly and know what to look for relies on knowing what’s possible. That relies on having heard it all. Nobody can. But it’s certainly true that a writer with regular monthly reviews across 40 years will have a different perspective than someone with three reviews per year who has done it for two. This speaks to expert witness testimony and how expert and credible a witness a given reviewer makes.
In legal shows, the courtroom drama hinges on colourful storytelling. To win over a jury, the prosecution and defense are instructed to tell compelling stories. This routinely circles motive, the ‘why’. It’s not enough to know the ‘what’ and the ‘how’ of a crime. No jury will convict an accused if it can’t understand and believe why the crime was committed.
The facts may be the truth and nothing but. A jury—or so we’re told—just needs more. The dry facts must be presented in a compelling way. Here we have a most direct connection to our audio review space. Just the facts—treble 30 points, midrange 27 points, bass 18.5 points—may be all the truth that scope jocks find necessary. Sadly, few readers will have enough of an emotional response to that to want to read that writer’s next dry-as-dust tome. This goes straight to the utility of reviews. What are they supposed to accomplish: be ‘I exist’ notices for the hardware described; sales tools for the makers; ad anchors for the publication; comparative test scores; insider information; exciting entertainment; all of it; and if so, in what order of importance?
The best reviews follow the evidence wherever it may lead. Reviews that preclude certain evidence or don’t follow parallel lines of inquiry may be accused of making the evidence fit a foregone conclusion. That could be the review of a valve amp with zero reference to a price-competitive transistor amp. The best reviews also present their evidence in a compelling way. But when it comes to knowing what something sounds like, describing it and then proving it… that’s where all reviews fail abysmally.
Scope jocks may believe that measurements alone tell the whole story and prove that if it measures the same, it sounds the same, end of. The vast majority of the review-reading audience needs and wants more. They also understand that the only proof is in their own room and seat. The best a compelling review can do is motivate a reader to seek out a given component and try it for themselves. That’s when the real verdict happens, never before.