Wednesday, December 30, 2015

What if Audio were like Video

In several ways I am grateful that audio has moved along a different path since the 1960's than video has.  I credit the subjectivist audio movement, pioneered by the likes of J Gordon Holt and Harry Pearson, for getting audio off the corporatist objectivist track.

Now, mind you, there was much more progress possible in video than audio.  I'm not questioning that.  The video displays of the 1960's were pathetic compared to what we have now.  (Though not perhaps so much as people imagine at the high end.  Professional video displays of the 1960's were beautiful, and by the early 1990's they were glorious.)

Nor am I questioning that most subjectivist advice is pure crap.

But when the corporatists take over as they have in video, here's how things go:

1) Endless planned obsolescence.  Just as with computers and smart phones, the moment you buy anything it is already obsolete.  Endless upgrades are not only made desirable, they become mandatory.  If you haven't upgraded every 2 years you are way behind.

2) Facts are marshaled to show how much things are better, even when actually they aren't better at all.

3) Bigger is always better.  More is always better.  Bombasticity is better than realism.

Now the underlying story is that digital systems and endlessly refined display technologies have made fantastic possibilities in resolution upgrades.  Old Standard Definition video was (quite!) adequate for telling stories, and most of the greatest TV programs ever produced were produced in Standard Definition, now also known as 480i.  Viewed on well performing equipment (which is always rare in consumer grade equipment) those programs still look beautiful and tell stories very well.

Strangely, as soon as higher resolution became possible, it became hard to achieve the same quality in storytelling.  Resolution itself became the story, and people were wowed by so-called Reality TV with little story or human meaning behind it.

Now it becomes clear, just as the cynics said, we don't actually "need" resolution as high as is now possible.  Diminishing returns set in quickly as you go above 480i.  1080i is really, truly, about as good as we'll ever need for frontal displays, but now resolution has become the game and we're up to 4k displays with no end in sight.  1080p is gravy.

We also don't "need" wall sized displays in most cases.  Especially if we don't have mega mansion homes.  If we are living in smaller homes, less than 2000 sq ft, as most people should be since larger than that is wasteful, there simply isn't enough space for super large displays in most if not all rooms.

That is one of the things I find especially frustrating--that if you buy anything less than a 55 inch monitor you are assumed to be fine with second class.  Not just in the endlessly ballyhooed but relatively unimportant "resolution" game.  But in the far more important blackness level and color accuracy game.

In that game, btw, we are still behind the best CRT's of the 1990's and 2000's, including my ultimate Top of the Line Sony 34XBR960, the best consumer CRT television ever made.  LED color gamuts have been limited; they were ridiculously limited at first, and even now, with quantum dot technology, they are only catching up.  But even that doesn't matter if the display can't render true black.  True black is the secret behind truly saturated colors.  If you can't do true black, you really aren't there.  And ordinary LED's, even after 15 years of you-must-get-this upgrades, still aren't.

Only one kind of LED can potentially get there, and that is OLED.  I'm glad to see that LG is pushing OLED technology onto large screens with 55" and 65" displays.  But what about those of us who can't deal with a display larger than 40"?

Sadly, my 34XBR960, which does 480i to 1080i native scanning, so you can watch SD to HD without artifacts, failed on Monday and I hope to get it fixed.  (One tech said, you can buy a new 32" TV for $200 so why worry?)

This TV sits about 5 feet from where I sit at the kitchen table, giving the equivalent of a much larger display at 10 feet.  40 inches is about all the display I could possibly fit in between the speakers, which are already beginning to encroach on essential living space.
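The same-visual-angle arithmetic behind that claim is simple: apparent size scales with distance, so a screen at half the distance looks like one twice the diagonal.  A quick sketch (the function name is mine; the 34-inch/5-foot figures are the ones above):

```python
import math

def equivalent_diagonal(diag_in, dist_ft, ref_dist_ft):
    """Diagonal (inches) that subtends the same visual angle at ref_dist_ft
    as diag_in does at dist_ft."""
    # visual angle subtended by the diagonal at the actual distance
    angle = 2 * math.atan(diag_in / (2 * dist_ft * 12))
    # diagonal that subtends that same angle at the reference distance
    return 2 * ref_dist_ft * 12 * math.tan(angle / 2)

# A 34" display at 5 feet subtends the same angle as a 68" display at 10 feet
print(round(equivalent_diagonal(34, 5, 10)))  # 68
```

Since tan and atan cancel here, the relation is just diagonal times the distance ratio, but the function generalizes to any pair of distances.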

Tuesday, December 8, 2015

The Most Important Adjustment

Speaker positioning is generally the most important adjustment that can be made to a hifi system.  Listening position is important too; actually both are important together, but there are more aspects to the speaker positioning.  One of those aspects is the speaker angle, and this is super important when you have electrostatic speakers, because their sound is enormously affected by the speaker angle.

At my first ever listening party for the River City Audio Society, I got only one negative comment: that the system sounded harsh.  I admitted there was some truth to that, and in fact it was on my mind when I switched to the Krell.  The harshness went away.  Usually.  Not always.  It has also always seemed to be very listening position dependent.  I aimed the direct beam of the speakers to converge just behind my head.  If someone sits too far back (as the commenting person was) they sit in the direct beam, which sounds harsh or peaky or something.  This sometimes happens to me also when I don't check the chair before sitting down.  It seems to migrate slowly away from the speakers over a period of days, and often needs to be put back in place.

I've been thinking about other things also, such as the piles of electronic equipment behind each speaker.  I decided that had to go back about eight months ago, and planned to buy a new rack in August.  Right then, other things came up needing to be replaced.  Now I'm putting the new rack (to be positioned on the side of the room behind the equipment) back on the calendar, and I hope to buy it in January.  There's also the not-so-little issue of first making room for it by moving stuff around in the living room, and I'm thinking it might be good also to replace the windows, because once the rack is set up that's going to be a monumental task.  That could push the new rack beyond January.  And so it goes.

But meanwhile there's no excuse not to re-examine the speaker angle, which is just what I started doing last weekend.  Just on a whim, actually (that's often how great things start), I rotated the speakers as shown above (seen from a looking-down view).  I rotated the beam of the speaker further away from the listening position so that now convergence only occurs about 7 feet further back, near the end of the room.

This changed the highs dramatically, and for the better I thought.  Just before this change I was noting that the singing on Supertramp's Crime of the Century was more than a bit twisted, it was all scrunched up.  With the new angle, the singers sounded much more like real, if unshaven, people.  Overall this was the finest audition of this recording I've ever had, and I've been listening to it on many fine systems, including a friend's Quad ESL system in 1979.

But how can this be, I wondered.  On typical box speakers, the beam of the speaker generally provides the flattest response, and many box speakers begin to have serious tonal irregularities off the beam, such as collapsing upper midrange, not to mention loss of highs which is almost universal.  And then, compared to box speakers, electrostatics are said to be even more beamy.

But this simplifies the actual situation greatly.  And there is even a never-stated truth for electrostatics: straight on the beam they sound horrible.  This is puzzling for a speaker whose very design seems to eliminate distortion everywhere on the membrane.

But there it is.  Something is definitely wrong on the beam, I determined the next day.  I played the Stereophile pink noise track on repeat, attenuated the left channel by 99.9dB with the Tact "Level" adjustment, and tried various iPhone apps; the cheaper one, called Analyzer, providing only an RTA and generator, seemed to give the best pictures.

On the beam, I saw rising response above 4k, very jagged and generally rising up to my super tweeter at 18k, which had a large peak I tamped down by 5 dB right then.  Though there's still a peak, it's not out of line with the rest of the 20-20k response, and on this app it even shows a tiny rolloff just at 20k.

I tried many different angles, ultimately putting down yardsticks to get some idea of the angles involved.  At right about 1 foot out from the center of the front base, and 5 inches over, the response is much better.  It's at this point it might have maximally flat extension, but it's still jagged, and there's still a rise from 4k on up.  That's about the angle I used to be on.  At 7 inches over, the rise at 4k is pretty well tamped down, and further over it becomes a broad dip instead, with the highs generally rolled off.  Curiously, some people might like the sound as far off the beam as 50 degrees best of all.  From about 60 degrees on around, you get very depressed but perfectly smooth highs.  There's sound even at the edge of the speaker, just like that.

My thinking now is that 30 degrees looks just right; I'm currently at 22 or so, which was already a huge improvement over the 10 degrees or less I was at originally.  No wonder I had issues with harshness!  6k-10k, the so-called harshness region of hearing, was strongly elevated.  I'm now thinking 30 degrees looks best of all, and coincidentally that will give me "zero toe in" relative to the room itself, with the speakers being parallel to the front and back walls.
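The yardstick geometry converts to angles with simple trigonometry: the angle is just the arctangent of the lateral offset over the 1-foot forward distance.  A quick sketch (the function name is mine; the 5 and 7 inch offsets are the ones measured above):

```python
import math

def toe_angle_deg(offset_in, forward_in=12.0):
    """Angle of the speaker axis implied by a lateral offset measured
    a fixed distance in front of the speaker (here 12 inches = 1 foot)."""
    return math.degrees(math.atan2(offset_in, forward_in))

# 5 inches over at 1 foot out is roughly the ~22 degree aim;
# 7 inches over is close to 30 degrees
print(round(toe_angle_deg(5), 1))  # 22.6
print(round(toe_angle_deg(7), 1))  # 30.3
```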

Which makes me wonder if electrostatic speakers don't have some serious HF resonance issues, though helpfully those resonances are very directional, making it possible to escape from them by being off the beam.  I had heard about the need to be off the beam from a long-time Quad owner, so I figure this is not an Acoustat-only issue.

Here's an article on measuring speaker polar response.

Siegfried Linkwitz has a different idea.  He says that the on-axis response of loudspeakers should not be flat.  This is because they are normally oriented about 30 degrees from our head axis, where we are too sensitive to sounds around 4kHz.  He recommends shelving the response (though in this blurb, the amount he recommends is unclear).  His Orion speaker is designed to be dipolar, like my electrostats.

Here's a critical discussion of Linkwitz' shelving idea.  Critics maintain his shelving only compensates for something, either an inherent boost due to speaker width, or it compensates for other changes in the design which added a boost.  But it also reveals (as Linkwitz himself doesn't, at least at the link above, unless you buy the speakers) exactly what the shelving needs to be.  It is said to be a -3.2dB shelf at about 4kHz.  There must be less than 0.5dB of loss at 2kHz for it not to sound too soft.

It could very well be that the peaking I measure above 4kHz is a reflection or beaming type issue.  And it could be that the best solution might combine off-axis speaker orientation (relative to listener) and a bit of electronic EQ.
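As a rough sanity check on those shelf numbers (my own toy model, not Linkwitz's actual filter), here is a first-order high shelf with its pole and zero placed geometrically around 4kHz and an asymptotic cut of -3.2dB:

```python
import math

def shelf_db(f, f0=4000.0, shelf_target_db=-3.2):
    """Magnitude (dB) of a first-order high shelf whose asymptotic HF gain
    is shelf_target_db, with pole/zero placed geometrically around f0."""
    g = 10 ** (shelf_target_db / 20)   # asymptotic HF gain (linear)
    f_z = f0 / math.sqrt(g)            # zero frequency
    f_p = f0 * math.sqrt(g)            # pole frequency
    mag = math.hypot(1, f / f_z) / math.hypot(1, f / f_p)
    return 20 * math.log10(mag)

for f in (1000, 2000, 4000, 10000, 20000):
    print(f, "Hz:", round(shelf_db(f), 2), "dB")
```

This prints roughly -0.19dB at 1kHz, -0.65dB at 2kHz, -1.6dB at 4kHz (the half-shelf point), and about -3.1dB by 20kHz.  Notice that this gentlest possible shelf already loses more than 0.5dB at 2kHz, so to meet the under-0.5dB constraint the real transition would have to be steeper, or centered a bit higher than 4kHz.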

It's worthwhile taking a look at the measured quasi-anechoic frequency response of the Quad ESL-63, a speaker I continue to admire (and have often wished I had), as measured by John Atkinson in 1989.  Scroll down to figure 7.  Notice that there is a general tendency toward rolloff starting just at 4kHz (call Siegfried!) which heavily notches out 5kHz and just barely returns to baseline around 6kHz.  By 9kHz another steep dip is starting, reaching minimum just below 10kHz.  Then there is peaking in the 11-14kHz region.  It almost looks as though the design is intended to suppress a resonance around 12kHz with general rolloff.

I did not do quasi-anechoic measurement, but my near field measurement doesn't show as much rolloff at 16 kHz (highest pure tone I can still hear) until I'm about 50 degrees off axis.

All this suggests to me that it's OK to be pretty far off the beam with Acoustat 1+1's, unlike Quads.

Sunday, November 29, 2015

NOSDAC: Cult or Cure ?

It was a very nice audio listening session.  The first ever where I'd invited audiophiles from the audio society to enter my main listening room and hear the magic.  All were favorably impressed, more or less, it seemed.  (You can never interpret something like this axiomatically; if the system is Really Bad, you may not get any bad comments either.  It may be that the barbs only come out where people actually feel threatened, as when you're nearing the pearly gates, but differently than they would, and that can't be right.  So as they say among academics, the smaller the differences the greater the arguments.)

So from the most sage, the actual engineer with decades of professional experience actually making recordings, I got mostly (and somewhat surprisingly) approval.  What I had feared would get gored, the panel-to-subwoofer mating, which had just gotten a not-so-quick touch-up days earlier, was pretty much accepted as successful.  But it was the panel sourced highs that got the barb.  "Harsh" or something like that was the word.  Despite all the praise, "I couldn't live with it" was the emphasis, which seemed out of place among all the other comments.

OK, I conceded.  I have indeed been aware of it and always struggle with it.  I deal with it mostly by moving my head into or out of the very central beam of the electrostatics.  You have to be sufficiently forward of the central beam of the electrostatics to get the smoothest highs, otherwise it sounds brittle.  Later listeners confirmed the improvement from having the head forward of the beam, and the criticality of this.  This near field speaker issue is real and needs further investigation.  I strongly believe this is the primary source of harshness in my system.

I didn't mention, but it may be clear to readers of this online diary, that much of my effort of the past 4 years has gone to addressing the harshness issue.  I recently bemoaned my trek in this direction as a loss to the allegedly more important work of getting the bass decent.  But then I did some bass decency work just in time for the listening party.  The kinds of things I've done were exactly this: replacing the Behringer DCX outputs, which aren't much praised by anyone (though objectophiles might say they were perfectly fine, having distortions well below audibility), with a DEQ driving an external DAC, currently the Audio GD DAC 19, which had inspired the engineer himself to obtain and now use the Audio GD Master 7.  Along with that change, the gain structure of my whole system changed, and now I run very close to 0dB with no attenuation, digital or otherwise, to compromise the resolution.  (I previously often ran with attenuation around 20dB, and despite my defending this with the claim that I have 24 bit resolution, I know it was suboptimal.)  So I've made major changes, defensible I think on multiple objective grounds, in attempts to eliminate the harshness, and they've been greatly beneficial.  Also, just using the Krell amplifier, as compared to others I have, has a huge positive effect.

But my most sagacious guest had other ideas and didn't seem much interested in exploring the speaker/listener alignment possibilities during the listening session itself (though in much later comments, after the lamp had mysteriously gotten broken when someone tried to turn the lights off, he mentioned that speaker and listener orientation would be at the top of his list if I let him fine tune my system for a week).

Instead, he stepped into the measurements vs listening morass, and singled out the Krell amplifier as a potential part of the problem, effectively because of its focus on achieving good measurements, noting how many have moved to SET amplifiers despite their horrible measurements.

I look at the Krell FPB 300 as something very different from an amplifier designed to get "good measurements."  The very way it works is something very special in my view, with the output stage operating without loop feedback.  I believe this design is ideally suited for large electrostatics because it rejects electrical back EMF from the speaker, decoupling that from the actual amplification.

An amplifier designed simply to get "good measurements" could be made and used at far less cost than the Krell.

I have long experience with the Krell and a few other amplifiers, mostly the Aragon 8008 BB, another very fine amplifier, I believe.  What amazes me is how much better the Krell sounds, and I believe this is precisely because of the special design of the Krell FPB, which isn't found on most other amplifiers, even Class A amplifiers, even most Krell amplifiers.  When I read about FPB amplifiers, the 300 and the 600, I just knew they were the ticket for electrostatic speakers, which require vast VA swings and throw vast back EMF at the amplifier; the FPB's accept that back EMF without compromising signal integrity.  And everything in my experience has confirmed this.  They are the magic ticket for Acoustats anyway.  Now this is my experience, and my technical understanding, even if not acceptable to objectophiles.

Simply claiming that one might like the Krell because of good measurements is demeaning.  That doesn't do justice to my sense about the Krell.  Actually, I suspect in a number of ways the Krell isn't the best measuring, and maybe not as good measuring as I would like.  If the amplifier has had to thermal limit down to the second stage of Class A operation, which is common, then peaks much above 100W are going to be increasingly distorted.  It is possible to get measurably significant distortion out of a Krell if one knows how it works and can give it the right tests to show its weaknesses.

WRT SET's, I've never heard one that I liked, and have especially noted harsh sounding SETs at audio shows, sometimes beside very sweet sounding push pull tube amps.  There is every reason to believe that the high measured distortion is real and ultimately can do no good as it intermodulates with itself.  Many if not nearly all PhD audio scientists reject them (an amplifier with 10% distortion cannot possibly be virtuous, and there is certainly no measurable advantage in its distortion at low levels, as might be claimed).  Most of my long time audiophile associates reject them too.

Would I call SET a cult?  Yes, although a remarkably widespread one.  The audio science behind SET is remarkably thin, but the sound is preferred by many thousands of listeners.  I think in most cases the deficiencies only become overwhelming at loud levels, because of the companion cult of super-high-efficiency speakers.  In this case we are not talking so much about claimed-to-be-audible differences: SET's sound different, no question about that, and can be easily distinguished from low distortion amplifiers once the SET's are operating with levels of distortion well above 1%.  The claim in question, though, is SET Superiority.  SET cultists, of course, believe in SET Superiority, which comes from the virtue of simplicity.  They can "hear" that among all the other qualities and limitations of real SET amplifiers, which they may admit to.  Yes it's harsh when cranked up, but played softly it has intimacy or whatever.
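The self-intermodulation point is easy to illustrate numerically.  Below, a toy static nonlinearity (my own made-up curvature coefficients, standing in for a high-distortion amplifier) is fed two tones; the output spectrum acquires products at the difference, sum, and 2f1-f2 frequencies, none of which were in the input:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # exactly 1 second -> 1 Hz FFT bins
f1, f2 = 1000, 1300                         # two test tones
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# Toy static nonlinearity: small second- and third-order curvature
# added to an otherwise linear transfer function.
y = x + 0.10 * x**2 + 0.03 * x**3

spec = np.abs(np.fft.rfft(y)) / len(t)      # amplitude spectrum, 1 Hz bins

# Intermodulation products land at f2-f1, f1+f2, 2*f1-f2, ...
for f in (f2 - f1, f1 + f2, 2 * f1 - f2):
    print(f, "Hz:", round(20 * np.log10(spec[f] / spec[f1]), 1), "dB re f1")
```

With these coefficients, the 300 Hz and 2300 Hz products sit roughly 26 dB below the fundamentals, with more products spraying across the band as the order and level go up.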

And along a similar vein, the sagacious advice was that I must try (and would likely love) the No Oversampling (NOS?  didn't that mean New Old Stock, or do we just distinguish these things by context?) option of my Audio GD DAC.

My mind waffles about whether I should even try this.  If I call SET a Cult, then beyond any question NOSDAC is a Cult.  It's a far smaller group of people who use this kind of DAC (though interestingly, a group that highly overlaps SET, and especially openness-to-SET as could be assessed by including those who formerly used SET); the backing among objectophiles and PhD audio scientists is equally or even more thin; etc.

Strangely, then, one advantage that NOSDAC has is a more technically defensible argument, one that can be shown With Measurements (ironically).  NOSDAC's preserve the timing information.   They are technically perfect with regards to timing.  You can just look at the oscilloscope photos.  All the pre-ringing, post-ringing, and similar effects go away.  Now, finally, what you see "looks" exactly like, or the closest visible approximation to, the signal going in.  (Apparently the timing aspect is a key part of the visible picture.)

The problem is that the technically perfect timing is obscured by an enormous mass of audible distortion.  Unlike SET's, which may be sufficiently undistorted at low levels to actually suit many people by objectophile standards, NOSDAC's distort horribly at every amplitude level, in every way, etc.  It turns out those apparently small "visible" differences are actually huge; we don't judge amplitude distortion well on visible graphs of square waves.

And what do I mean horrible?  Like, say, 30% THD.

That number 30% I am taking from PeterSt, Netherlands based senior member of ComputerAudiophile, in a long thread started by NOSDAC'ers.  He seems knowledgeable and respected.  He was once a NOSDAC owner, started to investigate how it worked, began to realize it was a horror, then devised his own minimum phase DAC based on 16x oversampling.

The ever reliable Archimago has an exploration of NOSDAC's with actual graphs from a forward thinking DAC by TEAC with multiple options, one of them being NOSDAC.

I might point out that many cult- (and otherwise) respectable DAC designers, such as King Wa at Audio GD, prefer their own default oversampling settings.  That's where he put a lot of his intellectual energy: into getting the digital interpolation and filtering as close to right as possible.  He also wants to show you what he has achieved by giving you the other options as well.  And it seems like many try the settings and end up right back at King Wa's default.

Not to say that there aren't NOSDAC designers, in which category I would include Audio Note, who love their NOSDAC's.  This category has many cult famous companies, Metrum comes to mind, but by far Audio Note is the largest.  But of DAC designers overall, these are a tiny proportion.  In fact NOSDAC designers are what you might call anti-designers.  They "design" the oversampling by simply not including it.  And usually (following Audio Note) not including the filter either.  That's it, design over.  You don't have to be a very technical designer to be like that.  You don't have to have a PhD or even know math or audio science or anything at all.  You simply have to have a lot of Chutzpah combined, if you know anything about the subject generally, with intense contrarianism.  But what if you had, one day, tried making a NOSDAC, and, voila, it sounded good by Listening?

So here we are back to the sagacious question: Listening vs Measurements.  Though perhaps my dislike for this drafting of the dichotomy comes from many particular issues with measurement per se.  By definition, a set of measurements cannot be complete, it's only a set.  There are infinite possible measurements, performed in infinitely many ways, as well as facing infinite inherent measuring limitations.  In audio, measurements have been fantastically exaggerated (such as Instantaneous Peak Power, occurring the moment the unit instantaneously self-destructs with man-made lightning), specified with incredible accuracy, or rounded up to nice numbers just to look like you don't care.  The best audio measurements usually come expressed by graphs rather than numbers, and the output of RAA is very nice, but still far from complete, and what manufacturer has given anything like the output of RAA?

You see, the word Listening is being used here as if it were The Ultimate Answer (because in a way, it must be), and Measurement as the thing we know which is often somewhat incomplete or misleading (and we have loads of other experiences with that in our expending enterprises), in a way to color this as if you must be a fool not to do what the questioner begs: "listen for yourself."

With a similar misdirection, I could redraft the question as Guessing vs Sound Engineering Principles.

While a seemingly simple argument can be made for the notion that you should just "listen for yourself" (and be fooled too, I would add) there are many fundamental problems with it as a tool of audio design and/or selection.  To wit:

1) Potentially biased (most likely, I would say, without thorough DBT)
2) Subject to future change (including the very next time)

Because of 1 and 2, the results of any one listening test have to be held in complete contempt, as not even having enough wrongness to be wrong, without either additional thorough testing (preferably DBT) or a reasonable objective story to back them up.
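On the "thorough testing" point, scoring a blind trial is at least straightforward arithmetic.  A minimal sketch (the 12-of-16 score is just an illustration): the one-sided binomial p-value tells you how likely a given score is under pure guessing.

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: probability of getting at least
    `correct` right out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 of 16 correct is just significant at the usual 5% level
print(round(abx_p_value(12, 16), 3))  # 0.038
```

By contrast, 11 of 16 correct comes out around p = 0.105, which is why small informal trials so rarely settle anything.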

Further, the big problem with bias is that it feeds on itself.  Once you have developed the bias, it then becomes part of a very powerful bias to stick with it.

Now until researching this post I hadn't realized that NOSDAC's have a duller sound with proportionately less high frequencies.  I had known they produce more distortion, from aliasing, but had wrongly figured that distortion to add to HF making the sound brighter.  Instead what happens is the audible distortion is in the form of products which appear throughout the audio band giving it a kind of richness, not brightness per se.
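The duller sound actually has a simple objective basis, independent of the aliasing: a filterless NOS DAC behaves as a zero-order hold, whose amplitude response follows a sinc curve.  A quick calculation of that droop at a 44.1kHz sample rate (ideal ZOH math; real NOS DACs will vary):

```python
import math

def zoh_droop_db(f, fs=44100):
    """Amplitude rolloff (dB) of an ideal zero-order-hold (filterless NOS)
    DAC at frequency f: 20*log10|sinc(f/fs)|."""
    x = math.pi * f / fs
    return 20 * math.log10(math.sin(x) / x)

for f in (1000, 10000, 16000, 20000):
    print(f, "Hz:", round(zoh_droop_db(f), 2), "dB")
```

That works out to roughly -0.75dB at 10kHz, -2dB at 16kHz, and -3.2dB at 20kHz, on top of whatever the aliased images do.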

So now that I know this, I could use the duller highs of a NOSDAC to balance out what may be the excessive brightness of the Acoustats.  I'm not sure if it would work like that, but it might seem to help in this way.  I'm tempted to try this simply for this reason.

But crucially, any such "help" is truly a band-aid.  It wouldn't fix the underlying problem(s), whatever they are; it would just add a companion kind of distortion to cover up the original problem.

Certainly the better approach is, rather than covering up problems, to seek them out and engineer them out directly.

Of all the band-aid approaches, using something like NOSDAC to cancel out problems is the worst.  It's generally adding heaps of distortion, unrelated to the original signal altogether.  It's far worse than other band-aid approaches like using EQ to ameliorate room modes.  Of course EQ isn't solving the actual problem (which is reflections in a bounded space), but it is using a linear means (which doesn't itself add non-linear distortion, nor produce additional downstream non-linear products) to cancel out a major linear chunk of the original problem, and it is doing that directly, by addressing the actual room mode frequencies.
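That kind of linear correction is just a parametric (peaking) biquad.  A sketch using the well-known Audio EQ Cookbook formulas (the 55 Hz mode frequency, -6dB depth, and Q of 5 here are hypothetical examples, not measurements of my room):

```python
import cmath, math

def peaking_eq(f0, gain_db, q, fs=48000):
    """Audio EQ Cookbook peaking biquad (b, a) to boost/cut gain_db at f0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def mag_db(b, a, f, fs=48000):
    """Magnitude response of the biquad (dB) at frequency f."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z**2
    den = a[0] + a[1] / z + a[2] / z**2
    return 20 * math.log10(abs(num / den))

# e.g. a hypothetical 55 Hz room mode, cut 6 dB with a narrow Q
b, a = peaking_eq(55, -6.0, 5.0)
print(round(mag_db(b, a, 55), 2))    # -6.0 exactly at the mode
print(round(mag_db(b, a, 500), 2))   # essentially 0 far away
```

The narrow Q confines the cut to the mode itself, which is exactly the "linear chunk" being cancelled.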

Probably my NOSDAC loving friend would agree that NOSDAC's shouldn't be used as a palliative either, but rather that the problem is made to sound worse by being layered with the time distortions caused by oversampling.  If NOSDAC works as advocates claim, it should be used with the best systems as well as the worst, making the best systems even better.

And at this point, we can imagine a world of existing audio systems, many sounding fantastically good, and almost all of them, probably 99% or higher, NOT using NOSDAC's.  Ordinary OSDACs must be pretty good if they are used in most of the best audio systems.

Unless all the OS DAC designers, virtually all academic audio scientists, and most system designers are wrong, there is not really any need to even try NOSDACs.  For most people, this is simply a waste of time.  There isn't enough evidence to think this is a worthwhile change.  Not everyone needs to try everything for themselves, especially when the results of listening tests done improperly, as most are, can be confounding and create superstitions.

If we did want to verify NOSDAC claims of the importance of audible distortions, we could devise a number of possible blind tests.  One is a simple bypass test.

I haven't done this test, but I find it very hard to believe that the NOSDAC path would sound more like the original overall than the OSDAC.  In my imagination, the NOSDAC sounds somewhat different, whereas the OSDAC sounds about the same.  This would conform to the most important (if imperfect) set of tests done on digital audio by Meyer and Moran.  Listeners could not reliably hear the effect of digital encoding and decoding added to an audio path, even when done ten times.
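A bypass/null test can even be scored numerically: match levels, subtract, and report the residual.  A minimal sketch with synthetic signals (the 1% added noise below is just a stand-in for a clearly audible difference; real tests would also need time alignment):

```python
import numpy as np

def null_depth_db(reference, test):
    """Depth of the null between two equal-length captures, after a simple
    least-squares gain match: 20*log10(rms(residual) / rms(reference))."""
    gain = np.dot(test, reference) / np.dot(test, test)  # best scalar level match
    residual = reference - gain * test
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10(rms(residual) / rms(reference))

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)                            # "original" signal
y_level = 0.9 * x + 1e-6 * rng.standard_normal(48000)     # device that only changes level
y_dist = x + 0.01 * rng.standard_normal(48000)            # device adding 1% uncorrelated junk

d_level = null_depth_db(x, y_level)   # very deep null: the level change nulls out
d_dist = null_depth_db(x, y_dist)     # shallow null, around -40 dB
print(round(d_level, 1), round(d_dist, 1))
```

A level-only change nulls essentially perfectly; anything uncorrelated with the original, whether noise or distortion, sets a hard floor on the null depth.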

The only way this imaginary bypass test could be claimed as a win by NOSDAC folks would be the claim that it maintains something special, amidst all the other clearly audible differences, a claim that would not be very credible.

While what one does with audio reproduction for personal enjoyment is a personal matter, I continue to believe that listeners are best served by the most accurate reproduction.  Harry Pearson called such a thing The Absolute Sound, and I believe in that, though not perhaps in his methods for gauging and approaching it.

Accurate reproduction is the goal of virtually all of the audio press, the audio engineering establishment, academic audio scientists, and so on.

There is a major dispute between subjectophiles and objectophiles, or black hats and white hats in Peter Aczel's terminology (himself being a self described white hat, John Atkinson being a self described grey hat) on the essential methods of achieving accurate reproduction.

Sighted listening tests are the standard of subjectophiles, double blind tests and in some cases just measurements are the methodologies of objectophiles, with subjectophiles often claiming measurements are unimportant.

My belief is not in measurements, which are incomplete and tentative, but in principles, when you can find them.  And the question doesn't come down to listening or not but to something more fundamental: how do you assess truth?

Is it really necessary to tour Auschwitz to know that the holocaust happened?  I think not, and in fact some denialists have toured such places only to bolster their opinions.

Truth in our experience comes not so much from direct observation as received wisdom.  In most cases, it isn't necessary to go beyond that.

Exploring the edges, where contrarians lurk, can be an honorable activity, but it is not something everyone need do.

Now this is not to deny that in all our interactions with people and objects we are constantly assessing truth in various ways.  But most of these assessments are based on thinking and reasoning about known (or presumed) facts.  Not on direct observations.

Even our direct observations aren't really observations.  As much as science knows about perception, it knows it to be a largely constructive process, that builds models, often first, then tests them against the experience, or constructs them based on a perceived reaction to experience.

So there is actually no "need" to do listening tests at all.  It is generally a good idea to do listening tests on speakers, but largely I have purchased most of the speakers I own on pure speculation, without an opportunity to pre-listen (and such pre-listening tests, as at dealers, I'm not sure how much validity I would attach to; the best testing is always in-home testing, which is also the most difficult and most rare).  And I think I've done pretty well, with the Revel M20's, Acoustat 1+1's, and Gallo A'Diva's I own.  The speakers in my now unused pile weren't all that bad either.  Given that I've been in this game for 47 years, it's not surprising I have a large collection of what I might now consider mistakes, though I'd try to rationalize them as stepping stones.

One merely needs to find the experts, or the ideas, one actually believes in.  Most of this is not a process of direct observation, it is a process of reasoning.  At best it is a measuring of others' consistencies.  At worst it is an assessment of others' tribe.  I fear I may not always be doing the best.

But that's ultimately what it comes down to.  I believe generally in what the majority of academic audio scientists believe, roughly the mainstream of the editorial staff of the Journal of the Audio Engineering Society, who in most cases won't accept subjective testing, only double blind testing.

That means I'm likely to find little or no merit in NOSDAC's.  Even if they initially sound pleasant, or uniquely satisfying, I'd still rationalize that as a euphonic quality I'd ultimately get tired of.

I believe if ultimately NOSDAC's actually are the way to go (which is entirely contrary to the way things look now), eventually this will be discovered, and pretty much everyone will switch to them.  There is no essential need for me to be on the cutting edge of such new things.  Not if I don't want to. Not if I think it's going to be useless.

Ultimately I don't think it's worth doing most audio listening tests.  There are infinitely many subjectophile "tweaks" one could try to make better sound.  There are endless cables one could try, and cable stands, cable separators, and on and on.  There's no respectable evidence (JAES is the standard, or the objectophiles at Hydrogen Audio) that such things make any difference.  As such, the best that could be reasonably believed about them is that they make small improvements.  How many small improvements does one need to bother with?

Exactly none by some accounts.  All by others.  I apply a little attention to everything, hoping to focus on the most important things more than I actually do.  I excuse myself: this is my playtime, I'm just trying to have fun, and the important work has more difficult and boring parts.  And realistically there are indeed problems with exploring the most important kind of changes (or perfections!): loudspeakers.

One should always go after the biggest possible improvements, or at least the low hanging fruit.  In almost all audio systems, that means the loudspeakers (and the room, EQ, absorbers, etc).  Very little attention need be paid to the electronics and uncompressed digital sources because they are already very highly perfected.  And I do hew to that at least some of the time.  I pretty much ignore audiophile tweaks except as fashion accessories--I like them for that.

If loudspeakers are the biggest factor, as objectophiles and I believe, loudspeakers should be what almost all of our tests are about.

But what about the recording engineer?  It is commonplace that even experts aren't experts about everything in their own precise field, let alone others.  It seems to me he has keener observation about many things than most, but that does not prove that all his observations, ideas, or beliefs are correct, especially when I find others with ideas I find more believable.

Given what I am, I can't simply accept any expert's opinion exactly.  And I believe that even many people with good powers of observation and refined expertise can still be profoundly wrong about some if not many things.

So I decide which ideas of which experts to accept, and which to reject.

How do I do this?  Mostly by my own reason.  Very little is based on my own observations, but I have performed DBT's on myself and others, and that experience continuously shapes my ideas.  Most subjectivists haven't done such things, no matter how many sighted tests they have performed.  Basically what even the most minimal participation in DBT shows is that hearing is unbelievably biased and unreliable at qualitative testing, and audiophiles can't prove they hear the things they claim to.  This is because we've evolved to be correlators, ferreting out connections even at the cost of many false ones, not comparators that can reliably compare things over time, and time is the ether of sound.

Even for audiophiles the ultimate sense isn't hearing, it's thinking, and especially determining what to believe and what is important.  And examining the thinking of others is not a distraction, it may be the best place to start.

Monday, November 23, 2015

System tuning requires iterative process

For several years now, I had not used my GenRad oscillator because I thought it was broken.  I finally got around to removing the MDA 0.25A fuses and found they measured about 5 ohms.  Then I looked up the cold resistance for such fuses: it's rated at 9 ohms, so by the book these fuses could still be good.  I put them back into the oscillator, and, It Worked!  Sometimes things are just like that.

This is the kind of oscillator best suited to speaker system tuning because it has a big full-turn logarithmic frequency control which sweeps the decade 20Hz-200Hz (and actually goes a bit below 20Hz).  So it's easy to do useful frequency sweeps at any speed, and the resolution is immense.  I have other oscillators, such as a B&K with a linear control, and it's nearly useless for speaker tuning.
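The advantage of the logarithmic dial over a linear one is easy to see numerically.  Here's a small sketch (the step count is arbitrary; Python used just for illustration):

```python
# A logarithmic sweep gives equal dial travel per octave; a linear one
# crams the low octaves into a sliver of the dial.
def log_sweep(f_lo=20.0, f_hi=200.0, steps=100):
    # constant frequency *ratio* between adjacent steps
    r = (f_hi / f_lo) ** (1.0 / (steps - 1))
    return [f_lo * r**i for i in range(steps)]

freqs = log_sweep()
# On a linear 20-200Hz dial, by contrast, the first half of the travel
# only reaches 110Hz -- covering 2.46 of the total 3.32 octaves.
```

Each adjacent pair of steps here differs by the same ratio, which is exactly what makes every octave occupy the same amount of dial rotation.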

Now I had really intended to do measurements, and possibly parametric EQ calculations, with Room EQ Wizard.  I did my first measurements with REW in January this year.  And I haven't done any since, because there is some complication in setting up the computer, interface, microphone, and stand.  It could be argued that REW is far better than my very old fashioned approach of sweeping an oscillator by hand and adjusting EQ's to fit by ear.

But since I had gotten the oscillator working, which was a little miracle, I went ahead and put it to use.  I wasn't planning to spend a whole day, just a few minutes.  But I ended up only finishing the tunings (they're only finished when you can write them down and get them recorded and measured without being forced, simply forced by one's own ego, to change them again) about 20 hours later, with 8 hours of sleep and breakfast in the middle.  In the process, I did listen to a lot of music as well, which then sometimes motivated me to make more changes.

So like I said, many times, including after the first hour or so, I tried to write down the adjustments and move on.  But just going over them, I couldn't leave them alone.  I couldn't believe they were done correctly; I must have been mistaking the Q's, as I often do, not fully remembering their correspondence to bandwidths.

So that's how it went, just changing them, sometimes just because I thought they must be wrong.  Often, when making changes at first or in the middle, I didn't bother to verify.  But toward the end it became clear that verifying is the key step.  And not just at any one frequency, but sweeping up and down at various speeds.  That's because the correction process is far more complex than you might think.  It's far more complex than most automatic systems can actually handle.

That's because the speaker and room system has a lot of possible states.  Those states, the positions of compressions and rarefactions in the air at any one time, the positions of everything in the room, and so on, are very, very complex; even if ultimately "linear," the system is far more complex than any single set of measurements can reveal, and more complex than a single deconvolution can address.  (And the simplification in most FFT's of what is going on is nearly absurd.  You absolutely cannot do a tuning based on a 1/6 octave FFT, though that can be an additional useful test and verification.  Imagine a problem with 10,000 variables, and you're modeling it with 30.)

It's very much like this: you have a particular peak, so you make an exactly equivalent notch to notch it out.  But it doesn't notch it out.  You're most likely left with at least half as much as you started with, possibly slightly shifted in frequency and Q, or split into two separate frequencies and Q's.
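That residual behavior can be sketched with an idealized parametric section (an RBJ-style analog peaking EQ; the 45Hz/47Hz frequencies and Q of 5 are made-up illustrative numbers, not my actual settings):

```python
import math

def peak_eq_db(f, f0, q, gain_db):
    # idealized analog parametric (peaking) EQ section, evaluated at f:
    # H(s) = (s^2 + s*A/Q + 1) / (s^2 + s/(A*Q) + 1), A = 10^(dB/40)
    a = 10 ** (gain_db / 40)
    s = 1j * (f / f0)
    h = (s * s + s * (a / q) + 1) / (s * s + s / (a * q) + 1)
    return 20 * math.log10(abs(h))

# a +6dB room peak at 45Hz, Q=5; the corrective notch is aimed 2Hz off
def system_db(f):
    return peak_eq_db(f, 45, 5, +6.0) + peak_eq_db(f, 47, 5, -6.0)

residual_at_45 = system_db(45)   # roughly +1dB of the peak survives
residual_at_49 = system_db(49)   # and roughly -1.6dB of new overcut nearby
```

With the notch aimed exactly (same f0, Q, and opposite gain) the cancellation is mathematically perfect; a mere 2Hz aiming error leaves a residual peak plus a new dip, which is exactly why one sweep-and-set pass is never enough.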

So the only way a correction can be done is iteratively, not just in the mathematics but in the measurements.  You make one measurement, do one set of fixes, make another measurement, make another set of fixes to the fixes, and so on.
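A toy model of that loop (the peak values are made up, and the 50% "effectiveness" stands in for the roughly-half-remains behavior of a mis-aimed notch):

```python
# Toy iterative room correction: each pass aims a cut equal to the
# measured error, but only about half of it actually lands, so the
# measure/fix cycle must repeat until the error is within tolerance.
room = {45: 9.0, 66: 6.0, 96: 8.0}    # residual peak errors in dB

def measure(r):
    return dict(r)                     # stand-in for a sweep measurement

def apply_fix(r, fix, effectiveness=0.5):
    for f, db in fix.items():
        r[f] -= db * effectiveness     # mis-aimed notch: partial effect

passes = 0
while max(abs(e) for e in measure(room).values()) > 0.5:
    apply_fix(room, measure(room))
    passes += 1
```

Each pass halves the remaining error, so a 9dB peak needs five measure-and-fix cycles to get within half a dB; a one-measurement auto-correction stops after the first.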

Since automatic systems generally just do one measurement and pray for the best, they can't possibly be as good as an obsessive kid with his oscillator, sweeping up and down until all the bumps are gone, at least if he had infinite time.

Unfortunately, I was so unsystematic in everything that I can't assuredly say my work here is truly great, though it's a vast improvement over the way things were before.  I really should do REW measurements along with the oscillator sweeps, and so on.  But for now I am going to let it go, because I've already invested too much time in this, and it's actually pretty good too.  FWIW, here are the smoothed 1/6 octave measurements from the Analyzer app on my iPhone:

That might look rough, but you can only blame me for about 120Hz and below, and it was far rougher, as well as more complex, at first.  The flat shelf from about 40Hz to 100Hz used to be several huge peaks, falling off at the bottom for no good reason, and bulging hugely at 100Hz so as to sound like mid fi.

In the process of doing this, I discovered that my disc players, not often used, had their channels reversed.  With so many different things that can go wrong, and constant experimenting, I think you can't blame the audiophile for getting lots of things wrong.

Oscillator system tunings, November 2015

LR24 at 80Hz using two 2nd order Q.707 filters in combination
Right Only, 120 Hz, 1/2 octave, -4dB

32Hz +1.0dB Q2.8
40Hz -4dB Q6.3
45Hz -6.4dB Q4.5
57Hz -3dB Q4.5
69Hz -3.5dB Q7.9
96Hz -8.4dB Q6.3

32Hz +1.5dB Q3.2
40Hz -7dB Q7.9
46Hz -9dB Q4.0
51Hz -4dB Q7.0
71Hz -8dB Q7.1
LR48 (oops!?  I thought I had changed that to LR24 to be symmetrical with the panels)
-10.6dB (both)

Super Tweeters
Set down 6dB after seeing a monumental bulge on the iPhone analyzer; still about a 4dB bulge, believed correct for the wider dispersion pattern in random noise.
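The LR24 crossover in the settings above (two cascaded 2nd-order Q=0.707 sections) is the classic Linkwitz-Riley construction: each Butterworth section is -3dB at the crossover frequency, so the cascade is -6dB there.  A quick check with idealized analog magnitudes:

```python
import math

def lp2_mag(f, f0, q=0.70711):
    # 2nd-order lowpass magnitude |1/(s^2 + s/Q + 1)| with s = j*f/f0
    s = 1j * (f / f0)
    return abs(1 / (s * s + s / q + 1))

f0 = 80.0                                           # crossover from above
butter_db = 20 * math.log10(lp2_mag(f0, f0))        # one section: -3dB
lr24_db = 20 * math.log10(lp2_mag(f0, f0) ** 2)     # cascade: -6dB at f0
```

The -6dB crossover point is what lets the lowpass and matching highpass halves sum to flat on-axis, which is the whole point of the LR alignment.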

Saturday, November 21, 2015

I love digital volume controls

Digital volume controls are one of the things Most Misunderstood by subjectivist audiophiles (and I'm partly one of those--but one of the rare ones who have come to embrace digital volume controls).

I was where many audiophiles are now back in 1979.  That was when I began surgery on my Marantz 2270, trying different opamps to replace the line amp board in that unit, and finally going to a "passive preamp" consisting of a 32 position switched attenuator (the same one used in the G.A.S. Thaedra; I had friends at G.A.S.).  I installed it in a ModuLine box that had been the original prototype of the design ultimately sold to Dennessen as the Dennessen Preamp, designed by other friends of mine with my help.  But I fried the prototype Dennessen board I inherited by connecting the unterminated 9V battery wires, so I ended up with nothing more than a passive attenuator box, which I upgraded to the G.A.S. stepped pot, which IIRC was 25K.

I could see no reason to do anything other than a passive attenuator for two decades.  (I also built a tube-based phono stage for turntables.)  I certainly believed it was the cleanest approach.  But with my passive attenuator box I couldn't easily adjust balance, and I never could get the imaging right on my modified speakers.  And I didn't think I could set balance any way other than by remote control.  That locked up my imagination for about two decades.  I cursed my lack of a good center image, but I never did anything about it.

I could have added a balance control in various ways; it's not wonderful to do this without a line amplifier to provide buffering, but it can be done.  But balance really has to be adjusted from the perspective of the listening position.  Any kind of balance control that you have to adjust while standing up and away from the listening position won't work.  You have to pop up and down making little balance adjustments and then checking how they sound.  This turns out to be impossible because each time you sit down your position may be slightly different, making the required balance adjustment different.  And remember you have to do this every time you change the volume level with most stereo pots.  The worst of all are preamps with separate analog level controls for each channel.  By the time you get the balance and level right for listening to the recording, it will be sometime the next day.

Finally, around 1998, I bought my dream: a preamp with remote volume and balance controls.  At the time I also thought it to be about the cleanest possible preamp, an Aragon 28K.  So while it wasn't passive, I figured the added distortion and noise were sufficiently low.  It has a low distortion discrete transistor amplifier (essentially a discrete differential op amp with feedback) powered by a discrete transistor regulator.  (Actually, it's very hard to beat the best chip op amps--if you really have the best.  It's not hard to beat the early 741 op amp.  It is extremely difficult to beat the best chip op amp made, an OPA211.)

Anyway, other than the all-discrete part, which is debatable, it is straightforward and no-nonsense, just a good circuit with good parts (except perhaps that motorized pot...).

It has very low distortion, always at the residual of my measurements (the last one was 0.004% or better).  And it has remote control balance, and quality jacks, as I had been using for quite a while on my prototype unit.

Now I tried adjusting the balance on every recording.  Still easier said than done with my very wide-angled speakers.  And I noticed right away that it varied a lot.  Finally I figured out the #1 variable: the position of the volume control.

Something else I became aware of at that time: I was usually only turning the volume up to about 9 or 10 o'clock.  Barely cracked.  It occurred to me (1) that the pot had poorly tracking channel balance in that part of its range, and (2) what a waste!  Internally the signal goes through a pot reducing the level 100 times, then gets re-amplified by the line amplifier back up 10 times.  It would have been much simpler just to have a passive attenuator... well, as I had been doing.

So now that I had a balance control (instead of the fixed attenuators) I really did have to use it as a correction all the time.

I became aware of the Classe CP-35 (with digital volume and balance) sometime around 2000.  I should have bought it at that lovely store.  But I didn't actually buy one until around 2005, on eBay.  Not only did I love the perfect volume tracking (though now discovering I didn't need it much anyway, having perfectly matched high end Revel M20 loudspeakers), I just always thought it sounded good.

Quite contrary to my long standing technical belief in passive preamps, the CP-35 wasn't just liberation from having to reset balance with each volume adjustment, or the limited range or size of adjustments with stepped attenuators which I had used before.  It Sounded Better To Me.  This rarely happens with so much clarity.   All the passive attenuators, and even the re-amplified potentiometer of the Aragon 28K, sounded dark.  The CP-35 seemed to lighten the music in the same way the volume display lightened the front panel.  It brought the music more to life, and I decided I liked this.

And then it occurred to me that I should not have been ignoring impedances, gain reductions, re-amplifications, and so on.  Every time you have a big R followed by some C you have an RC network which is rolling off the highs.  All the rolloffs are additive, and we sense not so much the loss of the highest frequencies as the slower rise times when nearly any kind of rolloff is added to the rolloff already there.  So figuring a bandwidth of 20kHz as "good enough" is wrong.  A reasonable analysis suggests that every electronic component should have bandwidth to 200kHz or better, to achieve sufficiently flat time delay.
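The arithmetic behind that concern is just the RC corner formula, f(-3dB) = 1/(2*pi*R*C).  With hypothetical but plausible numbers, a 10k pot (whose worst-case mid-rotation source impedance is about R/4 = 2.5k) driving 500pF of interconnect cable already misses a 200kHz target:

```python
import math

def rc_corner_hz(r_ohms, c_farads):
    # -3dB corner of a simple RC lowpass: 1/(2*pi*R*C)
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

f3 = rc_corner_hz(2500, 500e-12)   # roughly 127kHz, short of 200kHz
```

The numbers here are illustrative, not measurements of any particular unit, but they show how easily an "inaudible" pot-plus-cable combination eats into the bandwidth margin.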

Anyway, never even having a big RC in the middle is good.  There is nothing magic about a potentiometer; pots are horrible, though some are less horrible.  That's just for starters.  People somehow believe that pure digital gain reduction (which, btw, is not what most preamps with digital volume do; even fairly ordinary home theater receivers have "direct" modes that bypass conversion of analog inputs to digital) loses resolution.  That need not be true when reducing no more than 48dB while attenuating a 16 bit digital source in a 24 bit digital pathway, as SPDIF automatically is--at least not in the digital domain and output.

But meanwhile, even a simple pair of resistors used to attenuate introduces lots of losses, among them, automatically, resolution.  No matter how you figure resolution, it is lost in the process of resistive attenuation.  Relative to noise, absolute volts, absolute quanta: every way you look at it, resolution is lost when you reduce level with a passive pair of resistors, even a fixed pair (other kinds, especially pots, add other problems).  Plus whatever distortion comes from the resistors themselves, which is always plenty, btw.  But even if the resistors were perfect, information is inherently lost (though you could say not enough is lost to make any difference, but then we are back in compromise land again, not the world of perfection that no-information-loss implies).  And this is especially true if you take the most hard headed Shannon's Laws approach, which considers noise a limiting factor.  In addition to adding noise, the resistor bridge reduces signal.  It's inherently a signal to noise ratio loser.  Plus it loses bandwidth, because the R added to the signal path must always be followed by some capacitance, the worst case being a long high capacitance cable to the power amplifier.

(Some would argue the pot or resistors also reduce noise, and that is true.  They reduce the incoming noise by exactly the same proportion as the signal is reduced.  But there is no increase in S/N from that.  On top of that, the resistors will always add a little noise of their own.  So overall, S/N is decreased, though possibly only by a small amount, depending on the series resistance of the pot being used.  It's worst, of course, with a large value pot.  Old timey tube equipment often used 250K and 500K pots.  Even if followed by gain circuitry, those have to be understood as information loss devices.  With something like 10K of overall resistance achieved with high quality resistors, as in a device like the Placette passive preamp, there is far less information loss.  But still there is loss.)

It is best, at least in principle, to avoid gain reduction.  If gain is being increased overall, it's pure stupidity to reduce the level first, then apply a fixed amount of gain.  And yet that's the standard analog preamplifier design, in pretty much all the classics I can think of, up to now.  Some do switch in varying amounts of gain during the attenuation, such as reducing gain by 6 and 12 dB when possible; thus, for settings down to -18dB the worst case loss through the attenuator is only 6dB, and the average less.  But that's usually only done at the very high end, like the top Mark Levinson.
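Here is the arithmetic on why attenuate-then-re-amplify is wasteful, using a hypothetical line stage noise figure (the 2uV input-referred noise is an assumption for illustration):

```python
# Same net -20dB two ways.  The classic preamp cuts 100x at the pot and
# re-amplifies 10x, which multiplies the line stage's own noise by 10.
e_n, v = 2e-6, 2.0              # hypothetical amp input noise, source level

sig_classic = v / 100 * 10      # pot /100, line amp x10 -> 0.2V out
noise_classic = e_n * 10        # amp noise exits at full 10x gain

sig_direct = v / 10             # attenuate only what's needed -> 0.2V out
noise_direct = e_n              # unity path: amp noise not multiplied
```

Both paths deliver the identical 0.2V, but the classic path carries ten times (20dB more) of the line stage's own noise, which is exactly the waste of cutting 100x only to re-amplify 10x.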

Another approach is to go variable gain all the way, sometimes varying it far negative.  That may be what some or all chips do.  Only practical limitations apply to this approach; in principle it could be the most perfect, and therefore lose the least information, though of course gain reduction loses information as all analog gain reductions do.  I think this is what many gain reduction chips do, though some may combine resistive switching with it, for performance or hype.

Anyway, I'm not quite sure what the Classe did; I think it basically used an audio attenuation chip, a kind of part widely available by 2000 or even 1995.  Some of the better ones are quite good now (though becoming harder to find).  Mark Levinson, in their first preamp with digital gain controls, the 38, from the mid 1990's, said they varied the reference level to a DAC, in effect making an analog amplifier with digital volume control from some of the industry's most refined parts.  But are DAC's really the way to do this, rather than custom engineered parts, engineered as volume controls but with very high quality?  I believe the top parts now are quite good, at least since the top, now discontinued, Burr Brown unit, which I think was available in 2006, then discontinued and reappeared as a Finnish part.

I concluded in a previous article that any weakness in the Emotiva XSP-1 (Gen 2) unbalanced outputs is probably related to my guess that it makes everything balanced and then unbalanced.  Nothing to do with the digital volume per se, which is clearly balanced and shows no weakness--no distortion down to invisible above a SOTA noise floor--in full balanced mode.  So a SOTA preamp has to have both balanced and unbalanced paths in parallel, depending on the nature of inputs and outputs, so as to go through as few conversions as necessary for a given kind of input and output.  I don't think Emotiva did that; I'm not even sure about Levinson et al.

Anyway, with all that irrelevant personal anecdote covered, I have to think deeply about digital volume controls themselves, and not just my background story...

First, there are two kinds of digital volume controls.  One type operates in the digital domain.  Almost always when this is done, the signal is already in digital form for some other reason, such as it was recorded and distributed that way.

When starting with 16 bits and reducing less than 48dB (the 8 bit difference between 16 and 24 bits), I have heard it said, and I believe, that no resolution is lost (with proper dithering, which turns out to be necessary in maintaining resolution) when outputting and inputting 24 bits.  And SPDIF is automatically 24 bit; it maintains that many bits whether they are used or not.
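A quick numeric check of that claim, using a pure multiply-and-requantize model (no dither shown, which slightly understates the real-world result):

```python
# Attenuate 16-bit samples by 40dB, quantize to 24 bits, and refer the
# quantization error back to the input: it stays below half a 16-bit LSB,
# i.e. nothing the 16-bit source could represent has been lost.
FS16, FS24 = 2 ** 15, 2 ** 23
gain = 10 ** (-40 / 20)                       # -40dB, inside 48dB headroom

worst_err = 0.0
for code in range(-FS16, FS16, 257):          # a sample of 16-bit codes
    x = code / FS16                           # original sample in [-1, 1)
    y = round(x * gain * FS24) / FS24         # attenuated, 24-bit quantized
    worst_err = max(worst_err, abs(y / gain - x))

# worst_err is bounded by 0.5/(FS24*gain), under the 16-bit step of 1/FS16
```

The bound scales with the attenuation: at exactly 48dB the 24-bit error referred back to the input just reaches the 16-bit quantization level, which is where the "no loss below 48dB" rule comes from.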

So when my Tact Audio RCS 2.0 reduces level in 0.1dB steps, it computes the needed reduction and applies dither at 48 bit resolution (or something like that; it's done in a DSP) and then outputs 24 bit SPDIF.  There is no loss until very deep reduction, way below anything I do, especially now in the Living Room where, thanks to low gain, I can use as little as no reduction at all.  So I can sometimes run right at 0dB and even wish for a bit more (which the Tact delivers, up to 6dB more, though I don't like the idea of using digital multiplication; technically it should be fine as long as you don't reach digital clipping, which can be seen and heard).

So, repeat after me: analog volume inherently loses information; it can't be avoided.  Digital volume loses no information down to reductions you never seriously use.  Plus digital volume is accurate and has no channel imbalance, so no endless worry with each volume adjustment; resolution to an imperceptible-difference 0.1dB is available in some equipment I have, and above average resolution in others.  Plus it does not inherently induce bandwidth loss, another kind of information loss.  Distortion is possible, but can be far below that of most other active circuits.

Ahah, you could say there's one catch.  Your DAC's aren't truly perfect to 24 bits.  Ahah!  But the output of the best DAC's, in my opinion, is far superior to most if not all preamps.  DAC's are the most perfected thing.  That's why I now drive my power amplifiers direct from DAC's (though it limits my level capabilities a tad; the 2.5V output of the Audio GD DAC doesn't come close to driving the Krell either to rated power or actual clipping, though, reasonably speaking, one would think it to be enough).

It turns out that in electronic circuits the hardest thing to do extremely well is drive a load.  So it is that analog to digital converters are the most perfect of all things, per dollar spent.  Digital to analog converters are next, with generally limited outputs of 2 volts into a high impedance load.  Preamps nowadays put out 8V or more, and can drive 100 ohm loads if not 10 ohm loads.  That is great, but makes low distortion a difficult thing to accomplish.

So in my view it makes sense, if you are starting with digital sources anyway, to do any attenuation in digital also, and then drive power amplifiers only with the outputs of DAC's.  That's sort of what I've been doing since 2006 or so, but the "DAC" has really been the output of a Behringer DCX2496, which isn't terrible at 2V output but isn't great above 3.5V.  I ran the Tact through a DEQ and then a DCX, all in digital mode.  So the signal never left the 24 bit digital domain until the DAC's.

But anyway, as I've said before, most audiophile gear that takes analog inputs and outputs and reduces volume (with the exception of a fully loaded Tact or something like it) does so without ever "sampling" the input into the digital domain.  There are fancy parts that do the equivalent of varying the gain of an amplifier, or precisely switching in resistive elements when needed.  And that's probably the way to go if you actually have analog inputs and want analog outputs.  I now eschew devices that don't give you at least 0.5dB resolution down to -30dB, and I think 0.1dB is best.  Most audiophile solutions based on using relays to switch fixed resistors, such as the early Krell preamps or the Placette passive or active preamps, don't have enough resolution in gain adjustment.  And it may well be that the chips actually outperform them, except in the audiophile imagination.  (Mind you, I've lusted over these relay and fixed resistor solutions for a long, long time.  They've either been unaffordable, or I've decided they weren't the best way to go, which is where I am now.)

As for me, I started using digital crossovers and room correction in 2006, and believing in their most important advantages (such as time alignment and room mode attenuation), I haven't looked back.  Digital gain is the way to go when you do that: when you're already in the digital domain, level adjustment can be done effectively without loss of information in most cases.

But I still need digitally controlled analog volume in limited applications, such as the preamp setting the gain for my Masterlink digital recorder.  That's the #1 job my Emotiva XSP-1 does now and might do for a long time, because it's hard to beat its balanced outputs, though it would be nicer to have the 0.1dB resolution, balance, and polarity of a Mark Levinson 38/38S/380/380S/32/320/etc, so if I ever get one of those it could replace the Emotiva.  In the end (until it died) that is what I had used the Classe CP-35 for, except I also ran the main outputs in parallel to the Masterlink and Sonos; I figured it important to send the same 2V output to both.  (I've subsequently found that generally to be unnecessary, particularly with the high gain output (!) of the Emotiva phono stage at the Processor Output.)

The #2 job of the Emotiva, though actually #1 in regularity of use, is as a selector switch for which analog device feeds Sonos, as well as the aforementioned preamp-to-Masterlink path.  This is fine and perhaps good enough.  Though I haven't seen measurements of the Emotiva Processor Output, I'd guess they are significantly better looking than the Unbalanced Main Outputs, which show tiny distortion spikes poking up above the very low noise floor.  If the Processor Output had those same distortions, or worse, I wouldn't want to keep the Emotiva in this selector-switch role.  And just because of the uncertainty, I imagine myself ultimately getting (if not the Levinson) simply an unbuffered high end selector switch.  I could make one out of my old passive pre (but it's too ugly by my modern standards).  So anyway, I imagine making or buying a nice unbalanced selector.  I use the upgraded (Teflon jack) version from dB Systems to select the inputs to the Lavry AD10, which allows me to run high quality analog into my digital system, and that wasn't expensive at all.  I did choose to re-work it with some damping material inside the chassis.  But that would be one acceptable solution, and there are no doubt others (and why wouldn't I use a higher end one than the dB Systems in the living room, and move the dB Systems to the bedroom?).

But as far as the phono stage itself goes, if I continue using the Emotiva in that role, there's not much advantage in bypassing the Emotiva-as-selector because, essentially, I can't bypass it.  I can't take anything other than the Processor Output as the input to that hypothetical switch box, and then run it to the box with more cables.  Right now the Processor Output, which I'd have to use anyway, goes through a short low capacitance cable to the Sonos Line Input.  It Could Not Be Better, assuming I were using the Emotiva phono stage.  The only improvement an external selector could provide would be for the other inputs, and then at the slight expense of the phono stage.

And as far as eliminating the Emotiva phono stage, that's easier said than done.  I could easily modify the dB Systems to have the optimal loading for my cartridge.  That's the easy part.  The hard part is that it doesn't have quite as much gain.  And then it would be uncomfortable to use at the same Sonos Line Input level setting.  The phono would always be too soft, and I'd have to boost the gain... with a line amplifier of some kind.  So then, is my conglomeration of phono stage and line amplifier still better than the Emotiva?  My initial impression is that the Emotiva may be overall better sounding than the dB, and significantly quieter, as well as having more gain.  So really, to replace the Emotiva I might be talking about something much higher end.  Probably the only way to get what I'd want here would be to make it myself, perhaps with OPA211's.  Short of that, and until I make it, the Emotiva may be good enough.  High end phono stages rapidly become very, very expensive these days, and are mostly not worth it in my opinion.  The Emotiva is probably as low in noise and distortion as most of them, lacking only in qualities perhaps not now measured, if they even exist.  Though mind you, I'd love to have such a thing if it wasn't too bad technically and sounded good, and there are a lot of lust worthy ones in the $20k range.  But most audiophile phono preamps likely have far more distortion at their outputs when providing sufficient gain to make LP and CD levels comparable, 2V peak.


A discussion of the legendary Burr Brown audio volume control and its successor.

A discussion of audiophile superstitions against digital level reduction.  (Digital domain control is the only kind mentioned; somehow even these geeky people don't seem aware of digitally controlled analog devices, which are the vast majority of digitally controlled preamps.  But otherwise their analysis is devastating to audiophile superstitions: basically everything lost by digital reduction is also lost by analog reduction, which then loses more, though analog-o-philes can't see that.  Of course it's an analysis that entirely presumes Shannon's Laws, as I don't, but similar principles still apply regardless of belief or disbelief in unmeasurable resolution.)

The successor to the Burr Brown audio control, made by a Finnish company.

Clueless megabuck high enders rant and rave.

Glossy gloss.  They still don't get that there are digitally controlled analog devices.

Sunday, November 15, 2015

Krell FPB 300 back, now with Emotiva phono stage

It was cool all day Sunday, and by late afternoon I started playing the Krell FPB 300.  All the harshness and glare disappeared, leaving beautiful music in 3D.  I should listen to the Krell always, but I have problems with my air conditioning when I do.  I need to get a remote thermostat in a different room than the living room, but my contractor has been reluctant to do that with my 12 year old system.

The Krell is 1.14dB louder than the Aragon 8008BB I use during the hotter months, so I increased the bass level from -11.7dB to -10.7dB and the super tweeters from -1 to 0dB.  Using the new Keithley 2015 meter for the first time, the readings seemed more stable than with the Fluke 8060.

Somehow the bass is more authoritative too.  But before I switched to the Krell, I had increased the left and right notches at 45Hz and 66Hz by another dB.  At 45Hz they're now -6 and -7.  Going back years, I had notched to -9 or even -11 at 45Hz, but while that makes the room less boomy, the sweet spot begins to sound thin.  In the most recent adjustments I had reduced the notches to -5 and -6, but that now seemed inadequate; playing Jessy J it needed this adjustment.  Funny, as I recall I had decreased the notches while listening to Jessy J also.

All Sunday I was playing the turntable with the Emotiva XSP-1 preamp I set up late Saturday night after going to the Symphony.  I decided it sounded best from the Processor Output into my Sonos uncompressed Line In, and thence to the living room or wherever.  From the unbalanced main outputs into the Sonos Line Input, it was very very transparent but also, I feared, slightly harsh.  Just one inconclusive A/B test was performed after I first thought I heard the harshness, but then I decided I didn't need the volume control for just feeding the Sonos Line In, so I went with the Processor Output, which is a more direct path anyway.  (The "slightly harsh" finding was at best ambiguous if not totally dubious, and most noticeable when taking signal from the Alesis Masterlink playing a hybrid RCA Living Presence CD, which was harsh sounding either way--just somewhat harsher, I thought, through the unbalanced Main Outputs than through the Processor Output.)

Meanwhile I felt the main balanced outputs were just fine, wonderful actually, and I continue to use those for driving the balanced inputs to the Masterlink digital recorder, which was really the only reason I needed a "preamp" (instead of mere source switcher).  I made a quick test recording and it was perfectly transparent.

This must be my imagination, because published measurements show the XSP-1 to have remarkably low distortion through both outputs, including the unbalanced outputs.  But I just had this feeling, and anyway it can hardly be argued that the Processor Outputs would be worse--they have less processing, after all.  While using the Processor Outputs I also run Direct (no Tone Amplifier) and sometimes also Proc On (no signal running in the output section) to get the best sound of all.  But I need Proc Off to get output to the balanced main outputs when doing a Masterlink recording.  So it meets all my requirements, if not as I had intended: the Processor Output drives the Sonos, and the balanced outputs drive the Masterlink.

I found the Sonos works OK with the Nakamichi and the XSP-1 phono straight into the Sonos line input at max level, sometimes boosted one notch for low level recordings when max straight-thru gain isn't loud enough.

The XSP-1 MC phono, loading my Dynavector 17D3 to 100 ohms, is giving me the best sound I've ever had with the Mitsubishi LT-30 table, and perhaps the best sound ever.  The 100 ohm loading soaks up all the remaining resonances, some of which still had a speed instability sound.  Now the sound is rock solid, perfect pace, and sounds clear and less sibilant, more fully balanced.  Under optimal circumstances, such as a heavier arm with electromagnetic damping, the Dynavector might shine better just above 100 ohms, say 150 or 200.  Unfortunately the Emotiva gives me only 3 choices--100, 500, and 1k--but 100 works fine under present circumstances, possibly even better than the oft recommended 200 ohms because of the too light arm and too little arm damping.  I'm loving the phono preamp, possibly the best part of the whole deal.  Previously my dB Systems preamp had a fixed 1k ohm load, I think.

Possibly that preamp could be improved with loading and better caps, and the Classe repaired and juiced up (how about relay inputs straight to the tape output, no buffering, pure?).  Then I might have a pairing that would beat the XSP-1.  But right now the XSP-1 works and is working for me.  Even the processor outputs are at least muted if not buffered, and the muting works without relays, as does the volume control.  I wonder if relays used for all these tasks wouldn't be better.  That's my audiophilia nervosa.

Archimago did detailed measurements of the Emotiva through balanced and unbalanced outputs.  The technical quality of the balanced outputs is stunningly superb.  The unbalanced outputs are just a tad worse, definitely not as good--there are lots of tiny spikes rising from the noise floor--but should still be good enough for audio purposes.  Too bad he didn't also check out the Processor Output I am using, which I believe is better.

But those little spikes in the unbalanced output are a bit worrisome, especially if you think they might be induced by chassis EMI and RFI.  Another little tweak I did was to pull up on the shielded portion of the Ethernet cable so that the unshielded last-foot jumper I use does not come close to touching the Emotiva chassis.  I used a velcro tie to hold the unshielded jumper cable several inches above the Emotiva.

John Johnson of Secrets of Home Theater and High Fidelity tested the XSP-1, and the measurements are stunning, but note that he tested ONLY balanced input to balanced output.

I chose the XSP-1 precisely because of the precise digital volume control, which, because of typical audiophile prejudices, tends NOT to be found on high end preamps.  Other than the Emotiva, the only others I can think of are Classe and Mark Levinson, at far higher prices.  And though I often scan eBay and Audiogon for Levinson preamps in the 38/38S/380/380S series, those are quite old products now, and even when they were new many audiophiles didn't like them.

While I was at first inclined to be suspicious of the volume control on the Emotiva, I've moved on to another fear.  I think it may be asking too much of such a preamp to have both balanced and unbalanced outputs.  Likely the unbalanced output is taken from the balanced path in the end.  So if you are running an unbalanced input to unbalanced outputs, you are going through these conversions:

unbalanced to balanced
balanced volume control
balanced line amplifier
balanced to unbalanced

So if you are really intending this for purely unbalanced use, you are getting two extra conversions which likely do no good.

A cost no-object way to do this correctly would be essentially to have both balanced and unbalanced preamplifiers working in the same box.  Then unbalanced would stay unbalanced all the way through, and likewise for the balanced.

Now I loved the sound of my Classe CP-35, which was in some ways similar to the Emotiva.  The CP-35 had digital volume with balanced and unbalanced inputs and outputs.  But here's the difference.  I measured approximately 0.05% distortion with the CP-35, which is much higher than the Emotiva, but looking at the spectrum it was all noise, not actually distortion.  There were no little spikes poking up through the noise floor, just a relatively high noise floor for a line amplifier.  As long as you don't actually hear the noise, noise is much more pleasant than distortion because it is uncorrelated with the original signal.  We are correlation detectors par excellence (or run amok).
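The distinction I'm drawing can be made mechanical: in the spectrum of a device driven by a single test tone, energy at exact multiples of the fundamental is harmonic distortion (correlated with the signal), and everything else is noise.  A toy sketch, assuming the spectrum has already been reduced to per-bin power (the frequencies and levels below are invented):

```python
def split_spectrum(spectrum, fundamental):
    """Partition spectrum power into harmonic distortion vs noise.

    spectrum maps frequency (Hz) to power.  Harmonics are the bins at
    exact integer multiples of the fundamental, excluding the
    fundamental (test tone) itself.
    """
    harmonics, noise = {}, {}
    for freq, power in spectrum.items():
        if freq == fundamental:
            continue  # the test tone itself is neither
        (harmonics if freq % fundamental == 0 else noise)[freq] = power
    return harmonics, noise

# Invented example: 1 kHz tone, spikes at 2k/3k, plus broadband hash.
spectrum = {1000: 1.0, 2000: 1e-6, 3000: 5e-7, 1234: 1e-8, 4321: 1e-8}
h, n = split_spectrum(spectrum, 1000)
print(sorted(h))  # [2000, 3000]
print(sorted(n))  # [1234, 4321]
```

A spectrum whose non-fundamental energy lands mostly in the "noise" bucket, like the CP-35's, is the benign case; spikes in the "harmonics" bucket are the correlated kind our ears latch onto.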

My current plan is to do my line source switching with an external switch box which will feed both Sonos Line Input and Emotiva.  Then just use the Emotiva to drive the balanced inputs of my Masterlink.  Balanced output is what Emotiva does best (and pretty much state of the art).  The unbalanced output should be good enough, but if you are perfectionistic or obsessive, "should be good enough" never is.

Sunday, November 8, 2015

iPhone audio output quality

It looks quite good, so long as you avoid 15 ohm loading.  At higher impedances, it looks to have lower distortion than the audio generator built in to my Keithley 2015 Generator/Analyzer.  The iPhone 6 is reported to have THD of 0.0012%.  The Keithley generator has THD of 0.02%, while the Keithley analyzer has THD of 0.004%.  So I could measure the superiority of the iPhone to the Keithley generator on the Keithley analyzer.
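To put those THD percentages on a common scale, converting them to dB below the fundamental makes the comparison obvious (a quick sketch; the figures are the ones quoted above):

```python
import math

def thd_percent_to_db(thd_percent):
    """Convert a THD figure in percent to dB relative to the fundamental."""
    return 20 * math.log10(thd_percent / 100)

for name, thd in [("iPhone 6", 0.0012),
                  ("Keithley generator", 0.02),
                  ("Keithley analyzer", 0.004)]:
    print(f"{name}: {thd_percent_to_db(thd):.1f} dB")
# iPhone 6: -98.4 dB
# Keithley generator: -74.0 dB
# Keithley analyzer: -88.0 dB
```

Since the analyzer's own residual (-88 dB) sits well below the generator's (-74 dB), the generator's distortion shows up clearly; the iPhone's -98 dB is below the analyzer floor, so all you can prove directly is that it's at least that much better than the generator.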

Other performance is excellent too, except crosstalk is a bit high (still better than any FM tuner).

Sunday, October 4, 2015

Splitting tracks on an Alesis Masterlink

I'm using an Alesis Masterlink to do my dubs from turntable and FM.  Actually FM radio is first recorded to my Nakamichi RX-505 because it has a useful "timer start" feature.  Then I dub the radio to the Masterlink to make CD's, which sound remarkably good (yes!) considering FM and cassette came first.  It is very worthwhile recording the broadcasts of the San Antonio Symphony on KPAC Saturday evenings at 7pm when I'm often not home.

I have two hardcopies of the Masterlink manual, but forget finding those.  Instead I bring up a PDF copy of the manual I got somewhere.  It doesn't seem to say anything about splitting tracks, which is the most useful operation.  Not only is splitting tracks a natural thing to do when you record long broadcasts from FM or sides of an LP, but splitting tracks is the only way to edit out junk in between stuff you want.  You split somewhere inside the junk, then crop off the junk from the preceding and/or following tracks using the Crop operation.

Searching online, I first found the Sweetwater hints for using the Masterlink.  They say you mark the beginning of the new track by holding Playlist Edit and pressing Track Start, then do the split by holding Playlist Edit and pressing Track Start again.  I tried that on my Masterlink, but the second time I pressed Track Start it went back to the actual start of the track, not where I had previously paused.

Actually, the way to do this is very intuitive.  To split a track, in Edit Mode you pause it where you want the new track to begin, and hold Playlist Edit and press New Track (!).  Then you confirm.  Done!

Despite its quirks, the Masterlink is easier to use than all the computer based DAW's I've tried, and doesn't require you to use a computer (which brings all sorts of endless trouble).  Computer timer recorders are aimed at recording digital streaming broadcasts, which are necessarily highly lossy compressed, rather than analog FM terrestrial broadcasts (analog stereo FM is a linear lossless system theoretically capable of zero distortion, with the best FM tuners reaching into 0.007% territory).

Analog timer recorders used to be a common thing; I had a Sony all-in-one which could timer record FM to cassette using its own built-in daily timer.  At the time I bought that unit, however, it was already a vanishing breed.  Very few analog-to-digital recorders have even been made; the Masterlink is basically unique as a high resolution hard drive recorder/DSP/editor/burner.  The Masterlink has no timer record feature.  I think there might have been an analog-to-CD recorder with that feature, but there weren't many of those.

Back in the day…all the Nakamichi cassette decks I know of had the timer record feature, such as my RX-505 and my 600.

Thursday, October 1, 2015

John Atkinson on MQA

Good, though I think I liked the angle in The Absolute Sound better, where it focused on the perceptual issues and the importance of time accuracy.

The pulse accuracy graphic therein was the most striking.  That is what I have always thought needed to be done!

Here…the discussion is interesting, and John Atkinson reveals more in the comments.

Could I imagine MQA end to end?  Even replacing my ladder dacs with something else (likely delta sigma) and replacing my input ADC with an MQA device, my level controls, EQ's, and digital crossovers would not work.  But I already have 24/96 end to end, which is pretty far right on the chart of diminishing returns.  There isn't much left above that which, perceptually, MQA could handle.  But if I start with MQA streaming (and, if I could dream, an analog-to-digital streaming connection like Sonos line in) converted to 24/96…voila, everything is now 24/96!  So I could, in principle, use a Sonos-Connect-like network based on MQA with 24/96 outputs and (dream) MQA inputs.  If Meridian makes it, I doubt I could afford it.  I have six nodes now and will need seven.

Sunday, September 27, 2015

iTunes Error Correction

I have always kept the iTunes "Use error correction" box set when importing CD's.  I have been unsure of how well this works, though I also use a TOTL external Plextor PlexWriter Premium which has the most advanced features including error correction--in this case meaning that a track is read over and over until the internal validity checks show no hard errors.  I believe this combination of iTunes on Mac with "Use error correction" and a drive capable of re-reads such as the Plextor gives you accurate reads, or at least tries hard to give you accurate reads.

Most discs read without a problem, especially recently.  I also manually read every disc twice and diff the files.  I have scripts which do this quickly for an entire album read twice.  People have asked: are you sure you aren't just getting the same garbage twice?  I believe in most cases the answer is no.  This "twice reading" is exactly what the computer drives themselves do, and what special high accuracy reading programs do too (though nowadays the most special programs also check against online databases).
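For anyone curious, the heart of such a script is tiny.  A sketch in Python (not my actual script; the .aiff extension and the two-directory layout are just assumptions for illustration):

```python
import hashlib
from pathlib import Path

def track_checksums(album_dir):
    """Map each track filename to a SHA-256 checksum of its contents."""
    return {f.name: hashlib.sha256(f.read_bytes()).hexdigest()
            for f in sorted(Path(album_dir).glob("*.aiff"))}

def diff_reads(first_read_dir, second_read_dir):
    """Return the names of tracks whose two reads differ."""
    a = track_checksums(first_read_dir)
    b = track_checksums(second_read_dir)
    return [name for name in sorted(a) if b.get(name) != a[name]]
```

If diff_reads comes back empty, the two reads produced identical bits.  A bad read that repeated exactly would fool it, but as argued above, bad reads rarely repeat exactly.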

Today I had some kind of near proof.  While reading in an old disc, the Plextor was buzzing more than usual, and it seems reads were not at the highest possible speed, only around 15x.  Finally, on the last track, the high speed scanning stopped near the end of the track.  From that point onward, it read very slowly, at something like single speed (during that time the speed was not reported).  It did finish the track, however, and I ultimately read in the CD twice completely.

Sure enough, I got differences between the last two readings of the last track.

This shows that iTunes error correction does not guarantee a correct read, though with the Plextor it did seem to try (if that is what it was doing).  In spite of best efforts, if a correct high speed read cannot be done, it goes with the best slower speed re-read it can do.  And I have one more datapoint that when this was obviously happening, my difference method caught it.  I have caught other differences like this before, and it seems whenever the drive is going through fits like this I get differences--which isn't proof that my method works, but it does seem like it does.

I read this disc many times, with and without error correction.  Not once did the diffs for the last track match according to the script.  When I disabled "Use Error Correction" it read the last track straight through without slowing down, all three times.  When I switched error correction back on, it continued reading the last track straight through for the next few reads, then went back to slowing down on the last track.

Doing this more than a dozen times, the last track checksum value did match twice, but only against the "second read" I had discarded before getting the next one, because my scripts work that way.  Really what the script should do is record the checksums and keep all reads until you get two having the same checksum.  But once I saw the matching checksum, I couldn't get that same matching checksum in the next 4 reads, and gave up.
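That improved strategy is easy to state as code: keep every read's checksum and stop as soon as any two agree.  A sketch (read_track here is a hypothetical stand-in for the actual rip-and-checksum step, which I'm not showing):

```python
def read_until_two_match(read_track, max_reads=12):
    """Re-read a track until any two reads yield the same checksum.

    read_track is a callable returning one read's checksum.
    Returns the agreed checksum, or None if max_reads is exhausted
    without any two reads matching.
    """
    seen = set()
    for _ in range(max_reads):
        checksum = read_track()
        if checksum in seen:
            return checksum  # two independent reads agree
        seen.add(checksum)
    return None
```

Unlike the read-twice-and-diff approach, this never discards a read, so an early good read can still be confirmed by a much later one.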

All this tends to confirm that my reading strategy of diff'ing files helps catch errors.  At least in some cases, the "fill-in" data after read errors doesn't repeat from read to read, so the diffs catch it.  It's still possible that with different kinds of errors the reads would always match, though I have no evidence for that.

MQA Reconsidered

Having read the introductory articles in The Absolute Sound, I like the look of the thinking behind MQA (Master Quality Authenticated).

In its full encoding and decoding, when possible from an analog source, it preserves fantastic temporal response, so that tiny pulses remain tiny pulses, without the pre and post echo.

It does this by using more than one sampling rate, with the higher sampling rates having fewer bits because the greater resolution is not needed.

I was thinking about that sort of thing in my previous "information" arguments against DSD and Delta Sigma in general.  Only a few times, if ever, have I noted my reservation that most of this "information" that DSD (and to a lesser degree, multibit delta sigma modulators) provides is at barely audible frequencies, if audible at all.  So maybe I've been making a mountain out of a molehill.  I haven't known how to account for this before, but Meridian discusses this very issue (in their terms), showing that the information needed follows a diagonal curve.

DSD's errors fall roughly in this range, but as I have noted before, the use of noise-shaping means the upper frequencies aren't really linear and accurate.

Meridian can achieve better quality, fully up to the required diagonal, in fewer bits than 44.1k uncompressed.  Better than HD quality such as 192/24, they say…since they have better pulse accuracy still.

It sounds possible.  It certainly sounds better to me than DSD, and likely my older secret decoder ring, HDCD.

I'm a big fan of HDCD and I've always sought out HDCD versions and HDCD compatible players.  I have a way of transcoding HDCD into 24/96 by analog resampling, which sounds better to me than digital conversion, and I've heard experts say the same.

While I was willing to pay the nickel for better sound, I wondered if others would balk at paying it (and, well, they did, in the sense that HDCD became a very small player in the market).  Then, if you didn't have HDCD capability, you'd get a version that not only had less resolution than the encoded version, it might have an altered dynamic envelope.  So, in effect, a low-fi version in the unquestionably audible way.  That in effect makes it a secret decoder ring you must have to get the real music, which can be seen as a kind of extortion.

Of course Dolby, Dbx, and other encoding schemes have been based upon those same principles.

And as regards compatible SACD's…there has been much speculation that the CD layer is a different mastering, or dumbed down from true 16 bit quality.

And this even gets to audiophile releases…in many cases the main reason why they may sound different is different mastering.  And so too with LP's.

So ALL these are using the same extortive system as Meridian would be.

It's hard to pull these things off.  Dolby dominated cassette, though I was not happy with most Dolby playback (except on Nakamichis).  Done properly, it gave a nicer recording, though sometimes I preferred making my own recordings without Dolby, especially if they were meant to be played on non-Nakamichi cassette decks.  I felt it was a useful but highly flawed system.  Actually, back in the heyday, I hated Dolby B.  Now I'm inclined to use it consistently on my Nakamichi RX-505.

Elsewhere, I don't think Dolby was as successful, but I have heard the claim that they put a lot of pressure on producers to use Dolby products when the producers didn't feel it improved the product, or preferred others, such as Dbx.

I was glad that Meridian won the rights for DVD-Audio with MLP; I've generally had more respect for their thinking than for Dolby's.  I heard that DVD-Audio was held up by a battle between Dolby and Meridian, and I'm glad Meridian won.

And I was glad, and very surprised, to hear that the "Dolby True HD" audio on Blu Ray is basically MLP.

So now the geniuses behind MLP have given us a really, truly HD format.  If it's anywhere near as good as it sounded in the pages of The Absolute Sound, I wish them luck, and I am interested in getting the magic.

Sadly, everything in my main systems must be converted to 44.1 to 96 kHz.  So I'm not sure I could even approach using MQA as intended, but I could get a version downsized to 24/96.  Or I can resample from the analog outputs, as I do with SACD, HDCD, and mostly DVD-Audio.

Tuesday, August 25, 2015

Two Too Noisy

Unfortunately my newest DVR, the Magnavox MDR-557, doesn't have a quieter chassis than the older MDR-537 I just started using recently.  It seems slightly noisier, and more importantly, more irritatingly noisy.  While the MDR-537 does produce a slightly tonal operating noise, a midrange whine, the 557 produces a tonal noise that warbles slightly, making it significantly more irritating despite being barely louder.  This was with just the HDD operating, as it often is overnight while I'm sleeping.  There is also a random clicking as the HDD stores the previous 6 hours of video.  I'd like to be able to turn that on and off, but it does not appear to be a user-controllable feature.

It's very hard to measure most chassis noise.  It's clearly audible in my bedroom at night, but isn't above the self-noise of the SPL meter built into my Android phone, as I quickly determined.  Even with purpose made SPL meters, in my previous experience, chassis noise is still hard to measure.  The noise itself may be only 15dBA or so but still stand out irritatingly in a quiet room because of the tonality.

WRT measuring, or at least more objective comparison, the best thing to do is make low noise recordings.  I have a studio quality 1" condenser microphone which has 5dBA self noise.  It would make a fine recording, which could be measured afterwards, or analyzed with a post processing RTA such as a program I once wrote called GFFT.  I had been thinking about measuring chassis noise for many years when I bought my RØDE NT2000, so I knew to look for a microphone with low self noise that is also good for general recording purposes.  About the only lower noise microphones are from Neumann.  Cheap SPL meters have microphones 1/4" or smaller, and these have many times more self noise.  Small microphones are good for high frequency response but bad for low self noise.

You may think it's curious to be concerned about chassis noise.  But it's a far more objectively important thing to be concerned about than whether source material is 16 bit or 24 bit.  The noise floor of a 16 bit recording is 96dB below peak.  At a recurring peak level of about 85dB, which is a typical "loud" listening level (louder than most non-audiophiles listen at), that puts the noise floor at -11dBA in the environment.  Meanwhile, if you have a disc player producing noise at 20dBA, it is 31dB, or about 35 times, louder.  And 20dBA would be a relatively average one, I think…the Magnavox could be higher (of course this depends a lot on how it's measured); my guesstimate is about 25dBA for the 537 with just the HDD operating, as it always is when the unit is on.
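Spelling out that arithmetic explicitly (same numbers as above, and ignoring, as the paragraph does, the difference between flat dB and A-weighted dBA):

```python
peak_spl = 85                  # "loud" listening level, dB
dynamic_range_16bit = 96       # 16 bits x ~6 dB per bit
recording_floor = peak_spl - dynamic_range_16bit   # -11 dB
player_noise = 20              # disc player chassis noise, dBA

excess_db = player_noise - recording_floor         # 31 dB
amplitude_ratio = 10 ** (excess_db / 20)           # ~35.5x

print(recording_floor, excess_db, round(amplitude_ratio))  # -11 31 35
```

So even a fairly quiet disc player buries the 16-bit noise floor by a wide margin, which is the whole point: the chassis, not the bit depth, sets the real noise floor in the room.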

Another thing I think is almost always overlooked is the fact that most equipment has internal resonances that react to ambient noise.  I'm less concerned about the effect this has on the signal path through transistorized electronics (which is minimal) than about direct radiation of the resonances into the acoustic environment.  Even with tube electronics, except for phono preamplifiers, the effect on the signal path is probably the smaller part of the story.  Damping direct resonances is one good thing about having "overbuilt" equipment in heavier gauge metal boxes than is absolutely necessary.  Of course the Magnavox players were built according to a different set of criteria, namely to be as low priced as possible.  This means they use thin gauge metal, which is not helpful in keeping the noise level down; in fact it may increase the noise level compared with having no metal casing at all.

My comparison so far is further corrupted by the fact that I have the 537 sitting on top of the 557.  Even when operating only one unit at a time, the other unit serves as a resonating chamber for the operating one.  Further, the position in the cabinet is slightly different, and it might matter which unit sits directly on the shelf.  Obviously a fair measurement would measure only one unit at a time, sitting directly on the shelf with nothing on top of it.

While some people might worry about the heat of having one unit on top of the other, that hasn't been a problem for me.  I've never noticed the case of either unit to get above ambient temperature, they have no vent holes on top, and are constantly fan cooled.  Also my house is continuously climate controlled to have ambient temperature no higher than 79F.

I plan to do the fair comparison, each unit on the shelf by itself.  Then I will compare the better unit operating by itself with having the other unit above or below it.  If it does really help to have only one unit sitting on the shelf, I will put the noisier unit somewhere else.  I will operate it less either way.  I will only operate both units at the same time if I need to, but mostly just the quieter one.  And I will use acoustical foam wedges and other measures external to the chassis to reduce the noise as much as I can easily do that way.  I have already carved a large piece of 6" thick Sonex foam that fills up the shelf space above the top unit, and blocks some or most of the fan noise coming from the back.

That's just the short run.  In the long run, I intend to make the 537 first, and later the 557, into the quietest unit I can.  I think the hard drive is the largest source of noise in the chassis, and the hard drive can be replaced, I hope, with a quieter one or even an SSD.  The fan can be externalized into a larger fan that runs slower.  I have done that sort of business with computer fans before, and it is not a walk in the park.  It took years to perfect the semi-external fan system I ultimately used with my Amiga 2000.  It featured a circuit to break the fan-sensor line in the computer when/if for any reason the external fan wasn't running.  It never failed in the two or so years I used it.  In fact, the nature of such "ultimate" tweaks is that shortly after you have finally reached the summit of a new invention, it becomes absolutely necessary to switch to a different unit, such as my subsequent Amiga 3000, for which all the old interventions are useless.

Then, for all its unsolved (though heavily tweaked, even fan replaced) noisiness, my Amiga 3000 is now unused, though it could be fired up again if I ever excavate the junk in the computer room far enough to set up the video distribution amplifier and hook it to the new home networking panels, bringing Amiga Vision back to life.

But for the last two years I've been busy with other matters, now an all new bedroom video server, which suddenly became necessary because the last one, a Sony DVP-995C 400 disc carousel from 2006, quit working last month, and nobody makes those for reasonable prices anymore.  Besides, a system based on a hard drive like the Magnavox is far neater to operate.  You can switch between titles quickly, and it even shows previews beforehand, all without doing any metadata editing.  But a constant whine is a big drawback, and the Sony was quieter, especially when no disc was playing.