Saturday, July 22, 2017

R2R Magic!

I've been struggling with trying to make Hot House (Arturo Sandoval) sound listenable.

I've been adjusting EQ.  I balanced the two supertweeter channels, connecting the DAC directly to the Parasound Power amp, not using the Harrison Labs -6dB attenuator (which isn't perfect and increases load to 2k) and instead using the Parasound attenuators.

None of that seemed to cure the problem.  But one more change seems to have magically changed everything, not expected at all, and it's probably all bias and measurement error.  But sure enough, what did it was swapping in my second Denon DVD-5000 in place of the Emotiva Stealth DC-1, and readjusting the digital gain a bit, since the DVD-5000 doesn't have gain control like the Emotiva (which I had set to -6dB after removing the attenuators).

I had to do a lot of rethinking of the digital delays, since now for the first time both panels and supertweeters are on the same kind of DAC, which wasn't true when I did the Tact setup.

Now, strangely, I have pretty flat response to 20kHz.  The added steeliness is gone.

And yet, there isn't really any audible output from the super tweeters any more, or not much anyway.  I can measure the 18-20k they are putting out, pretty much as before.  But now I hear nothing.

Some strange force leaks out of sigma delta dacs, a force below the actual audio signal they carry, a kind of low frequency force which makes other sound waves vary their phase, making things sound bad, even though it's not a measurable sound as such, just a magic force field that affects other sounds.  The feedback of the sigma delta is "correcting" everything else in the room in an undesirable way.

Or maybe the story is just that I should always be using the same kind of dac for super tweeters and panels.  That might not seem like a bad idea for a lot of reasons.  And possibly the bass too.

In fact I made this change not expecting it to make any difference, but after testing the new DVD-5000 to be the same as the other (excellent, the best of the dual differential 1704 DACs I have measured), I thought it would make things more convenient, because now I don't really need to bother readjusting the midrange delay for 44.1k vs 96k or whatever, since they are now both the same kind of DAC.  I don't know the sub delay even as close as the 0.44 msec difference, so it hardly matters to the bass.  But maybe having another DVD-5000 for the bass would be the final magic touch (and it just seems so much like my kind of all-antiques system).

Maybe in that case whatever the drift is, it stays the same, rather than the two constantly going out of phase with each other or something, even while the average delay has been compensated for.  There is still the matter of the supertweeter delay because of physical distance, and I've tried to re-figure that, but maybe it needs redetermination using something better than the Tact program (which is very weird, btw; I think it uses an essentially analog stimulus and shows you the uncorrected response of that stimulus rather than a corrected impulse, which many programs show).  But even though changing that seems to make a slight difference, it's not like the vast improvement that has occurred.

But I'm not sure it entirely matters either, as the long length of the Acoustats means there are different possible delays for every position, and if most sound you hear is reflected, the average matters almost as much as the direct path distance.



Tough Tracks

My system can be loafing along, playing classical guitar, and sounding beautiful and pure.

And then, I can be playing "Hot House," by Arturo Sandoval, and it can sound pretty rough.  Abrasive, screechy, harsh.  I don't think it was always this bad, say when using the Krell amplifier.  When using the Aragon I become more aware of these things, possibly because the Aragon is not quite as good, and possibly because I let myself hear problems more.

I played Hot House on Thursday and then Friday night.  Finally I decided to do something about it.

My first idea was to substitute the Emotiva DC-1 dac for the Denon DVD-5000.  After all, the Emotiva is the cleanest DAC I own, by measured THD and distortion spectrum (which looks almost perfect, and I won't say I disagree with the measurements either...it sounds like it's not there, only maybe a bit too much so, was my general feeling after a few minutes of critical midrange listening).

But the distortion I've seen generated by the Denon is around 0.003%.  I think it would be hard if not impossible to hear that, even if it were all 7th harmonic, and actually the Denon has mostly 3rd harmonic with a fair amount of 2nd, plus higher harmonics but only at even smaller levels.

The amplifier might be a bigger factor, but the Krell has not been sent in for repair yet.  (I accepted a freight shipping quote on Friday so it goes out next week.)  I have measured the Aragon at 0.02% distortion at moderate level.  It had been over 0.1% until I fixed the bias problem 2 years ago.  Still, this doesn't seem like the big factor I'm looking for.
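
For perspective, those percentages convert to dB below the fundamental with simple arithmetic (a quick Python sketch, nothing more):

```python
import math

def thd_percent_to_db(percent):
    """Convert a distortion figure in percent to dB relative to the fundamental."""
    return 20 * math.log10(percent / 100)

for pct in (0.003, 0.02, 0.1):
    print(f"{pct}% = {thd_percent_to_db(pct):.1f} dB")
# 0.003% is about -90 dB, 0.02% about -74 dB, and 0.1% is -60 dB
```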

So, instead, I decided to go after some of the lumps in the high frequency response of the Acoustats using my DEQ.  I have avoided using EQ on the Acoustats except for the crossover itself and my Gundry/Linkwitz/Peterson dip.  And also a tad of resonance control around 115 Hz.  It may very well need more than I've done (solving resonances by physically fixing things is better than correcting them with EQ, but I have zero idea how to fix these response bumps right now).

Deeply rolling off the treble seems to do no good.  I still hear the harshness in there, no matter how well rolled back.  And then it just gets boring also.

I am thinking that small peaks which result from tiny resonances in the speaker catch on the natural odd harmonics of the brass instrument as the instrument itself is sweeping through the spectrum.  However you imagine it, it does seem like rough looking response could create rough sound.

I experimented first with attenuating a slightly rough spot around 12kHz, just before the speaker begins to roll off somewhat in my off-axis position.  I can see this same spot regardless of angle with the speaker.  I'm using the 1/6 octave display of an app, so I know that "12 kHz" isn't exactly the spot, but pretty close.  Really, when tuning a parametric EQ you should use something even finer than 1/6 octave; in my opinion a hand tuned oscillator is best--then you can totally zero in on the resonant frequency.

Then it seemed also that there was an elevated sticky frequency on the RTA just below 12 kHz, so I made the bandwidth 1/3 octave and moved the center frequency down to 11.8 kHz.  That's where it is now.  Before going much wider than 1/3 octave, a good oscillator test is called for.  After measurements and listening I settled on an attenuation of -4dB.  I only weaken such peaks, never cancel them totally, because overcorrecting is worse than undercorrecting IMO.  But this did seem to eliminate any tendency to either peak or shelf at 12kHz (before plunging down above that), only now there's still a bit of a bulge left at 10kHz that wasn't visible by itself before, an indication that the tuning of the parametric correction is still a bit off.
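
For what it's worth, a correction like this can be modeled as a standard RBJ "cookbook" peaking biquad.  The sketch below (Python with scipy, assuming a 96 kHz rate purely for illustration; this is not what the Behringer actually runs internally, just the textbook filter) shows a -4 dB, 1/3 octave cut centered at 11.8 kHz:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, bw_octaves, fs):
    """RBJ 'cookbook' peaking biquad: center frequency, gain in dB, bandwidth in octaves."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) * np.sinh(np.log(2) / 2 * bw_octaves * w0 / np.sin(w0))
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# -4 dB cut, 1/3 octave wide, centered at 11.8 kHz
b, a = peaking_eq(11_800, -4.0, 1/3, 96_000)
w, h = freqz(b, a, worN=8192, fs=96_000)
i = np.argmin(np.abs(w - 11_800))
print(f"gain at 11.8 kHz: {20 * np.log10(abs(h[i])):.2f} dB")   # about -4 dB
```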

I similarly attacked a small peak around 638 Hz.  When I turned the parametric correction from off to "PARAMETRIC," a previous correction at 638 Hz was turned on (though, when such things are partly saved by the DEQ, the attenuation goes back to 0).  Now it's pretty much gone, though there's a similar peak around 500 Hz.

Despite my Gundry/Linkwitz/Peterson dip (centered at 3.8kHz) there is elevation at 6kHz, the worst frequency for there to be elevation at.  So I added a new 1/3 octave correction at 6kHz, to keep that down and blend better into the rest of the dip.

I had also noticed a large difference between the output of the left and right super tweeters.  The much more wrinkled looking (from a previous high power mistake) left ribbon showed a nearfield peak (which it needs, in order to have any impact at all compared with the giant Acoustats) which was much larger than the right.  I had always assumed that the ugly looking left ribbon had less output.  But in fact it has (or at least had) much higher output.  AND in this case, I decided to toss the Harrison Labs 6dB attenuators, dialing in 6dB of attenuation on the Stealth DC-1, and then also using the gain controls on the Parasound HCA-1000A amplifier driving the super tweeters, hand adjusting to about a 6dB (possibly inadequate) nearfield peak.  This may actually be higher than before on the right side, and now I'm worried whether there isn't some issue in the right channel, though it could also be that the crossover in the left is burned out.  Anyway, now both supertweeters are adjusted to the same reasonable level.

One damned thing about the DEQ's is they don't have per channel level--just a convoluted "Image" control.  I'm now appreciating the level controls on the Parasound amp.  The DEQ's should also have level adjustable to 0.1 dB, per channel polarity controls, and delay up to 10 sec (not 300 msec).

This set of changes did seem to improve the sound of Hot House.  I was able to listen to nearly the whole album again (I couldn't have done that before these changes) at a Tact level of 89 (approximately -3dB), which is quite loud.  Still, I'd say considerably more improvement is needed, and I'm thinking of doing more testing and possibly switching to graphic EQ as well.  But I'm thinking a good oscillator test of the 11-12kHz and 6 kHz resonances might be revealing...



Thursday, July 20, 2017

Are Synchronous DAC receivers reappearing?

Truth be told, of course, synchronous DAC receivers never really went away.  While some of the earlier ones are discontinued, the DIR 9001 continues, and is often the receiver chosen by DIY'ers, I have noticed.  I'm talking SPDIF/AES only; as you may already know, I despise everything else for practical/personal reasons, AND that's a pretty cut and dried case about which I need not comment anyway.

But from barely respectable on up in manufactured gear, asynchronous receivers have/had become the norm.  Few except for cranks were raising the old "puts your jitter into the data" arguments.  Some equipment was giving you a choice--that's fine.  But generally since the asynchronous receivers were showing better measured performance, anyone who cared was using them.  Except for the universe of contrarians.

I'm using one right now with my DVD-5000 dac, which has a CS8414 and dual differential 1704's.  The CS8414 might not have the best self-jitter; the DIR 9001 may be better in that regard.

But anyway, it seems to me that synchronous receivers are required for things like HDCD, aren't they?  You can't get away with just any sort of interpolation there, I would think, since there are meanings attached to certain exact sets of bits.

Then, I also think about MQA.  Once again, it seems if you are encoding any extra information into the audio, that isn't the kind of thing which may be interpolated or whatever.

So I noticed, in a review of the latest and greatest Meridian DAC (which is wonderful for sure), that mention is made of a FIFO buffer which gets jitter to below 0.5 Hz.

OK, that sounds like a synchronous receiver with a 1 second buffer, though I could be wrong about the buffer size.

Most of the receiver chips get away with what seem like tiny buffers, and then often spec jitter suppression only above 20kHz.  That is surely wrong; it should be spec'd down to at least 1 Hz.  With a long enough buffer, the DAC clock can vary slowly yet stay in line with whatever the source does, and therefore only subsonic jitter remains.  I understand that people are quite sensitive to FM, aka jitter or wow, in the usually subsonic 3-10 Hz range, so you have got to get below that.
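
A toy model of that idea (just a sketch; the update rate and loop constant are made-up illustration values, not anything from a real receiver chip): let the output clock track the incoming clock through a first-order loop with a roughly 0.5 Hz corner, so only the slow wander survives on the output, while the buffer absorbs the difference.

```python
import numpy as np

update_rate = 1000.0          # Hz, how often the clock estimate is updated (illustrative)
corner_hz = 0.5               # tracking-loop corner frequency
alpha = 2 * np.pi * corner_hz / update_rate

t = np.arange(0, 20, 1 / update_rate)
incoming = (1e-9 * np.sin(2 * np.pi * 0.05 * t)    # slow 0.05 Hz wander (gets tracked)
            + 1e-9 * np.sin(2 * np.pi * 60 * t))   # 60 Hz jitter (gets rejected)

output = np.zeros_like(incoming)                   # timing error of the output clock
for i in range(1, len(t)):
    output[i] = output[i-1] + alpha * (incoming[i] - output[i-1])   # first-order low-pass

# The output clock carries only the subsonic wander; the FIFO must absorb the rest
# (the timing difference, times the sample rate, is the buffer occupancy swing).
residual = incoming - output
print(f"peak timing difference the buffer absorbs: {np.max(np.abs(residual)) * 1e9:.2f} ns")
```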

Now I really do wonder what is going on inside the Denon DVD-9000 with its 330 msec latency compared to other DACs.  Is it an anti-jitter mechanism?  How well does that work?

I know there's no question I have to get beyond my so-far very limited jitter measurements and do a better investigation of all this, with the J-test and so on.

BTW, the J-test harmonics appear in most cases to be way way below -110dB, often in the -130dB range.  That's a worst case jitter situation, which maybe occurs for a few seconds in a lifetime of playing.  Mostly, the jitter sidebands must be way below that.  And that's so negligible it's a wonder we even think about it.  (Well, that's a long story of course.  And it also relates to the lack of controlled blind testing.  And people don't want to write off uncontrolled impressions they have had.  And with digital transmission there can only be two things, data and time.  And the data can be checked and shown to be perfect.  So the only thing left is time.  SO time MUST explain all!!!)

This seems way below the importance of more easily measured things, like harmonic (and therefore IM) distortion, which often runs high in audiophile designs.  Often distortion sidebands reach -60dB or higher, possibly 3,000 times or more larger than the sidebands being caused by jitter, even through Toslink, etc.
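
The 3,000 figure is just the dB arithmetic:

```python
# amplitude ratio between -60 dB distortion sidebands and -130 dB jitter sidebands
ratio = 10 ** ((130 - 60) / 20)
print(round(ratio))   # about 3162, i.e. "3,000 times or more"
```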


DAC low level pictures

Marvey at SuperBestAudioFriends has posted pictures of low level resolution of 3 DAC's: the Schiit Yggdrasil, the MSB Analog DAC (which has long been on my want list), and the NAD M51.

At -90.31 dBFS (basically near the limits of 16 bit audio) the Schiit is showing a trifle of notchiness around the zero crossing point, otherwise a fairly smooth, recognizable sine wave.

The MSB is showing notching all over the wave, and looks a lot worse.  (And Marvey comments that John Atkinson didn't give this DAC any crap for only achieving 18 bits of resolution, etc.)

The NAD M51 (which uses a very high frequency PWM conversion--the very kind I find the least intellectually acceptable) is showing an almost perfect looking sine wave, just a slight bit of lumpiness at the extremes.  This may be the best looking sine I've ever seen at -90dBFS.
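
For reference, -90.31 dBFS is simply a sine whose peak amplitude is one 16-bit LSB, which is why it gets treated as the floor of 16-bit resolution.  A quick sketch (undithered for simplicity, though the published test signals are normally dithered):

```python
import numpy as np

full_scale = 32768            # 16-bit peak value
amplitude = 1                 # one LSB
print(f"{20 * np.log10(amplitude / full_scale):.2f} dBFS")   # -90.31 dBFS

fs, f = 44_100, 997
t = np.arange(fs) / fs
# Without dither, the quantized signal only contains the values -1, 0, +1.
sig = np.round(amplitude * np.sin(2 * np.pi * f * t)).astype(np.int16)
```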



Sigma Delta "Impulsivity"

One thing I've noticed recently in both 1-bit and multibit sigma delta DACs is something I call "impulsivity."  Peaks of loudness seem to get peakier, like tiny highlight spots of glare.

Now, firstly this could be just my imagination, I'm not going to even try to prove otherwise.

Secondly, sigma delta DACs, at least the better ones, have for quite some time had better S/N and dynamic range specs than even the best, yes, even my sacred 1704's.  So you could really say black is blacker, and therefore things rise up from the black with more contrast.  More contrast means "peakier."

I'm suggesting something different.  I'm suggesting that when the sigma delta DAC has to output its peak it has to "work harder."  The way sigma delta dacs work is by a feedback loop, which drives the narrow converter to push the output into the correct signal.  When there is an actual peak in the signal, the overload is higher and the feedback has to work "harder," or at least more consistently in one direction.  This is precisely the kind of thing our neural networks are designed to detect: correlation and causality, and therefore intensity of effort.  We feel when things are struggling, or just loafing along.
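
To make the picture concrete, here is an extremely simplified first-order, 1-bit modulator (a sketch for intuition only; real multibit sigma delta DACs use much higher-order loops and noise shaping far beyond this):

```python
import numpy as np

def first_order_sigma_delta(x):
    """Minimal 1-bit, first-order sigma-delta loop: the feedback forces the
    running average of the output bits to follow the input signal."""
    integrator = 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        out = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        integrator += sample - out               # accumulate the error (feedback)
        bits[i] = out
    return bits

t = np.arange(4096)
x = 0.9 * np.sin(2 * np.pi * t / 4096)   # near-full-scale input, one cycle
bits = first_order_sigma_delta(x)
# Near the waveform peaks the bit stream runs almost entirely in one direction;
# near the zero crossings it alternates.  That one-sided "effort" at peaks is
# the behavior described above.
```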

The PCM dac, on the other hand, never "struggles."  It has the large jumps pre-fabricated and ready to go when needed.  It simply assembles the pieces for any given sample, and that is that.  It is therefore imperturbable (just loafing along).

Now I'm not suggesting some sort of ESP being involved, so how could we sense such things?  The answer is: it's below the noise floor, and the ultimate timing of things.


Sunday, July 16, 2017

Time Alignment using Tact

It's pretty easy to set the supertweeter time alignment using the Tact as measurement tool, using the mid way Behringer DEQ to set the delay, because the supertweeters are about 6 inches back.  The Tact stimulus is a high frequency rich "snap", and it's easy to see where the high frequency wiggles of the supertweeter line up with the initial impulse from the panels.  (The initial impulse from the panels appears to go down in the Tact display, apparently because of some weirdness in the Tact stimulus and display.)  I use SpeakerPop to set polarity.

I've determined the correct setting to require 0.2 ms delay in the panels.  However, since Tact uses a 48kHz signal, at which the DVD-5000 delays the panel approximately an additional 0.35 ms, the total delay is about 0.55 ms, or about what it physically looks to be.  I actually set the relative delay at 0.55 because I am "optimizing" for 96kHz.  If I am seriously listening to 44.1 I can subtract 0.35 ms of delay in the mid way DEQ to compensate for the added latency in the DVD-5000 at 44.1.
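
The bookkeeping, spelled out (values in ms; the sign convention, that the DVD-5000 has about 0.35 ms more latency at 44.1/48k than at 96k, is my reading of the measurements above):

```python
measured_at_48k = 0.20          # panel delay that lined things up in the Tact measurement (48 kHz)
extra_latency_low_rate = 0.35   # additional DVD-5000 latency at 44.1/48k vs. 96k (approximate)

panel_delay_96k = measured_at_48k + extra_latency_low_rate   # 0.55 ms, the setting used for 96k
panel_delay_441 = panel_delay_96k - extra_latency_low_rate   # back to 0.20 ms when playing 44.1k
print(panel_delay_96k, panel_delay_441)
```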

Setting the time alignment for the subs is as clear as mud.  The Tact stimulus doesn't produce very much bass energy, and if you play the system normally the "bass" is just endless LF ringing.

What I have historically done, only it isn't working as well anymore (and never worked great), is to separate the channels.  To set the right-side time alignment, I disconnect the left panel and sub and reverse the panel channels.  Then both Tact channels play the right side, with the left panel channel actually playing the right channel.  This gives a display with the sub in one channel and the panel in the other channel, and you can sort of see how they line up.

But it's pretty hard to tell, because the sub is only making very small and slow waves in the Tact display, and it's not always exactly clear where the "beginning" is.

The Tact doesn't make this any easier.  It does not save the previous measurement(s) so you can see how things changed.  Because the snap stimulus has very little low frequency information, the subs are barely even audible.  This lack of information means that each plot is going to differ from the previous one, randomly, even if you didn't make any changes (unless you do endless averaging, I suppose; I do only 10-trial averaging, which already requires long waits).

Anyway, after hours of futzing with this, I finally came to the idea that about 5.4 ms was the correct panel delay to time align the subs.  (Because the subs are further back, the panels need to be delayed to compensate, by 5.4 ms?)  This makes no sense, because the subs are not that far back.  I made that judgement on the left side, for which the subs may be as much as 3 feet back (they're only as much as 28 inches back on the other side).
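
The sanity check is just speed-of-sound arithmetic (roughly 343 m/s, a little over a foot per millisecond):

```python
SPEED_OF_SOUND = 343.0   # m/s
FT_PER_M = 3.281

def ms_to_feet(ms):
    return ms / 1000 * SPEED_OF_SOUND * FT_PER_M

def feet_to_ms(feet):
    return feet / FT_PER_M / SPEED_OF_SOUND * 1000

print(f"5.4 ms ~ {ms_to_feet(5.4):.1f} ft")     # about 6.1 ft
print(f"3 ft   ~ {feet_to_ms(3.0):.1f} ms")     # about 2.7 ms
print(f"28 in  ~ {feet_to_ms(28/12):.1f} ms")   # about 2.1 ms
```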

Anyway, on the other side, things were even less clear.  The number could have been as low as 2 ms (about what it looks) or as high as 5 ms.

Finally I came up with a clever trick in programming the DEQ's.  I do indeed delay the subs (even though they are the farthest back), by 4 ms.  Then I delay the panels by 7.5 ms and the super tweeters by 6.95 ms, and I can leave those two alone.  Then I can adjust the sub delay simply by turning one knob.
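
The arithmetic behind the trick, as a small sketch (only the sub knob moves, and the relative alignment follows):

```python
PANEL_DELAY = 7.5          # ms, fixed
SUPERTWEETER_DELAY = 6.95  # ms, fixed (0.55 ms ahead of the panels, as set earlier)

def subs_trail_panels_by(sub_delay_ms):
    """How far the subs trail the panels; negative means the subs lead."""
    return PANEL_DELAY - sub_delay_ms

for sub in (0.0, 3.5, 4.0, 7.5):
    print(f"sub delay {sub:4.1f} ms -> subs trail by {subs_trail_panels_by(sub):5.2f} ms")
```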

And this control is very nice.  Now it is very clear how the sub delay affects the sound, and it's exactly the opposite of what I had thought.  At the "minimum" measured sub delay of 3.5 ms the sound is a trifle boomy.  If I dial in more sub delay, it tightens up.  I had figured you always want the panels to "lead" and therefore provide the initial, more perfect sound.  But it appears what is really important is to not have the subs lag too far behind.

Anyway, now I have a control and I can just keep tuning by ear.  By turning the sub delay up to 7.5 I could in effect make the relative delay 0, or by turning the sub delay down to 0 I can make the relative delay 7.5.  So I have all the range I need in one control.

(Sadly there appears to be no way to add relative delay to the subs.)

With this control, so far I have gravitated to 3.6 ms, which would mean the panels are delayed 7.5 - 3.6, or 3.9 ms, relative to the subs (maybe some different numbers would make this easier).  I had previously been using a 3.3 ms delay, as handed down mainly from endless re-thinkings.

I went ahead and put the panels at 10 ms delay.  That puts the super tweeters at 9.46 ms (only 0.02 ms adjustment steps are available) and the subs at 6.10 ms for the same relative delays as above, and gives me full adjustability, from a relative panel delay of up to 10 ms down to negative relative delays, just by turning the sub delay knob, which turns out to have a very interesting subjective effect.

Then I discovered that running the Tact test with the subs at precisely 6.25 ms (relative to 10.00 ms in the panels) shows the right channel going down (the sub leading) over hundreds of ms, and the left channel going up (the sub lagging).  This has to be the perfect in-between compromise.  At 6.00 ms and 6.5 ms both channels go either up (one more than the other) or down.  So I've settled on 6.25 as my best guesstimate (equal to a 3.75 ms difference from the panels, which are delayed by 10 ms for convenience now).

Tact Impulse Response at 6.25 ms sub delay

At 6.50 ms both channels "rising"

At 6.0 ms both channels "falling"


When I play 96kHz, I merely dial up the mid way delay from 10.00 (calibrated at Tact's 48 kHz) to 10.38.  To play 44.1kHz I should dial down the mid way delay to 9.94 (if I care).  To change the bass alignment I can just adjust the bass delay.

Actually I'm finding that a 6.50 msec sub delay, relative to 10.38 (for 96kHz sampling) or whatever I've set the panels to, works better than 6.25.  6.50 subjectively lets the bass be bass, lets it hang a bit.  6.25, what I called the "compromise" position, is too dry; the reverse angles taken by the two sides cancel or something and it sounds dry.

Each time I do these alignments via Tact, some trick like this arises and gives me an "angle" for making an objective/subjective judgement.  Who knows, I could be off by more than 1 ms.