Living Room System

Friday, September 30, 2016

Yin and Yang

I'm beginning to fear that the wonderful-sounding Pioneer PD-75, the best-sounding digital I've ever heard in my home, is adding something.  It's as if it's adding extra energy, a fuzz which makes attacks hotter, and so on.  It seems to be making my Krell work harder too.  Below 20kHz this barely registers; there it is far, far quieter than the DVP-9000ES playing SACD, where the top-octave noise always seems to set new heights (is it still -50dB, or -40dB peak, peaking right at the 20kHz edge of the meter?) while rising above -90dB starting as low as 6kHz, a huge swath of visible noise below 20kHz.  Meanwhile the Pioneer shows nothing above -90dB below 15kHz.  However, I fear a bit that the rise I see starting at 15kHz continues upward for a while, but my Behringer display stops at 20kHz, so I'm not sure how far it goes.  However...the noise is unlikely to go as far as with DSD, precisely because it can't: the aliases would become too big.  So, in the end, some kind of digital filtering must be going on, and in the case of a 16MHz bitstream PWM, an interesting kind for sure.  Julian Hirsch measured no more than 0.004% distortion from 20Hz-20kHz, so the distortion isn't huge, though digital distortion is especially undesirable because it becomes proportionately larger at low levels.  Meanwhile Sony was getting 0.0015% distortion with similar technology, without sacrificing much pulse coherency.  Now 0.004% is comparable to many modern time-coherent slow filters, while the culty NOS gets you above 1%.

So, that's Yang, extra positive energy radiating out beyond the envelope for the Pioneer, what's with the Yin Sony?

The Sony SACD as implemented in the DVP-9000ES uses the same Sony bitstream chips as the SCD-1 (these may have been the last bitstream models, since bitstream designs generate a lot of heat and noise which must be managed).  It generates quiet, better-than-16-bit quieting in the midrange, out of 50% noise (a 1-bit 2.8MHz sigma delta operating at no more than a 50% duty cycle) by creating holes: holes in time in the stream of pulses, holes and holes made of holes.  That's all it can do; the pulses can't get any larger.

It can't get more obvious than that, can it?

Does the "holeyness" of DSD get better in the highly oversampled and overquantized sigma delta implementations of nowadays?  Yes, buuut...the holeyness is still there, it's made of holeyness, and all the smoke can't make it go away.

Or you could just say the Pioneer adds distortion (extra energy), while DSD has perfect linearity in principle but ginormous HF noise, which obscures because it's random.

Thursday, September 29, 2016

You Can't Handle The Truth

The Truth is that CD players can correct small reading errors perfectly (these are in categories C1 and C2).  Larger errors (category CU), they conceal by interpolation from the surrounding good data.  Finally, only the largest errors cause the player to skip or stop completely.
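To illustrate what concealment amounts to, here is a conceptual sketch only, not the actual CIRC strategy of any real player (which works on interleaved frames in hardware); `conceal` is a hypothetical helper name:

```python
# Conceptual sketch of concealment: when a sample is flagged
# uncorrectable (CU), fill it in by linear interpolation between the
# nearest good neighbors; mute if there is nothing to interpolate from.
def conceal(samples, bad_indices):
    bad = set(bad_indices)
    out = list(samples)
    for i in sorted(bad):
        lo = i - 1                      # nearest good sample to the left
        while lo in bad:
            lo -= 1
        hi = i + 1                      # nearest good sample to the right
        while hi in bad:
            hi += 1
        if lo < 0 and hi >= len(samples):
            out[i] = 0                  # nothing to go on: mute
        elif lo < 0:
            out[i] = samples[hi]        # hold the next good value
        elif hi >= len(samples):
            out[i] = samples[lo]        # hold the last good value
        else:
            frac = (i - lo) / (hi - lo)
            out[i] = round(samples[lo] + frac * (samples[hi] - samples[lo]))
    return out
```

The point is visible in a tiny example: a burst of bad samples comes back as a smooth ramp between the good neighbors, plausible but pure guesswork.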

The only way I know of to see counts of C1, C2, and CU errors is to have a Plextor CD-ROM drive and the software that came with it.  The Plextor software had a feature to scan the disc for these errors.  Of course the counts also reflect limitations of the CD-ROM drive hardware to some degree, but if you have a Plextor drive it's probably about the best drive anyway.

Outside of Plextor, no software or device I know of can do this.

What's especially troublesome to me is this "conceal" possibility (and, btw, "conceal" is exactly what it is technically called, I discovered after reading many documents).

I'd like to know whenever my player is "concealing" read errors.  In that case, I'm not getting the "perfect sound" of 16 bits.  I'm getting something just a bit better than garbage, at least for a tiny moment in time.  Listening to a CD, especially an old/dirty/hybrid/poorly-written CD which may be very important to me, I have no way of knowing if a large portion of what I'm listening to is fill-in.

At that time, it might be a good time to:

a) trash the disc (or return it) and buy a new one
b) try to clean the disc
c) try to clean the laser with a laser cleaning disc (I've seen the turbo ones with teeny fins recommended, and not the brush kind, and never use liquids)
d) service the player, such as by a technician who will disassemble the player, clean the laser with alcohol and check the electronics
e) buy another player, perhaps newer than the antiques I have
f) switch to playing downloads
g) decide never to listen to recorded music again

Some early CD players, notably those using Philips transports and ICs, had a Disc Fault light which would possibly light under CU error conditions.  Or possibly it would take something worse, similar to what would make the player skip or stop or not play at all (such as if you put the disc in upside down).  I have looked at the manuals of a few CD player models having these lights, and they were not at all clear.  None discussed CD error correction (always perfect) or concealment (always some kind of educated guess).  Such players include:

Cambridge CD3
Marantz CD74

Now it's a big bother to have to watch the screen where the Disc Fault light is during an entire CD.  It would be nicer to have a counter.  Then you could just look when you were taking the disc out and see how many errors there were on THAT disc.  That's what I think all large or reference CD players should have.

Perhaps, however, various companies involved would rather not even have you think about such things.  Perhaps, legitimately perhaps, there would be endless complaints (this machine gives me a CU when my old one didn't) about hardware and software.  Hardware companies would certainly rather not have complaints about their hardware.  And some hardware companies are also major software (in this case, actual factory recorded CD's) owners or sellers, who would get complaints both ways.

And, perhaps legitimately, even CU's are no big deal.  You could probably blind test a disc with 1000 CU's and nobody could tell the difference.  So why am I even worried, when I can "hear" the difference between my old bitstream, my old PCM, and my newer sigma delta?

Well, because I can't be sure of anything I hear, and having an unknown number of errors, any of which might be ameliorated by some corrective action I could take...well, it bothers me as much as anything.  It may not be clearly audible, but the other things I think I hear probably aren't there either.

BTW, after years of cursing DSD/SACD I am delighting in the sound of my newly acquired minty DVP-9000ES playing SACDs, and I was never even aware of how many SACDs I actually have (such as the old RCA Red Seal and Mercury Living Presence hybrids).  Sadly it is one (just one so far) of the Mercury hybrids which won't play on my 9000ES despite a new laser.  Which then makes you wonder: on the SACDs that do play, how many errors are being concealed?  It is said that the SACD format is automatically concealing because of the nature of sigma delta; any bit lost is worth no more than any other, and no concealment scheme can do much better than that.  I'm thinking there may be some dispersion on the disc also, as there is with CD, so the loss from a large gap has only a slight effect spread over a larger area.

Perhaps the format is even more loss proof, like the CDROM and DVD formats, which use an additional layer of correction.

Anyway, I still love CD, concealment and all.  I think people mostly aren't hearing the real faults caused by the CD system-as-a-whole when they complain about how a CD sounds.  The system is nearly perfect, extremely transparent, even at 44.1/16.  There are only slight differences you can make while still being faithful to high-fidelity reproduction rather than intentional sound doctoring (which is OK too, if that works for you, as it does even for Mark Levinson with his virtual Cello Palette, or did for Peter Walker, by all accounts one of the greatest audio engineers ever, who designed the Quad speakers and amplifiers), which somehow I never do.

I recall that when I used potentiometers to set volume I always had to readjust the balance for every recording.  It never occurred to me that almost all of the balance twiddling I had to do resulted from the inaccurate tracking of the preamp's pot.  That was with my otherwise beloved Aragon 28k.  And I had been way ahead of this as early as 1979, when I had a sealed stepped attenuator, the same as was used in the GAS Thaedra; every step was perfectly accurate.  I continued to use that in various systems until the 1990's, then began to realize that it was not necessarily the best design, having a 25k impedance, for driving cables.  But somehow that didn't sink in, and I continued using the volume control of a Citation One preamp, which had even higher impedance.  Then came the Aragon, and even that, though it measured about the lowest distortion I've ever seen (essentially perfect, using RMAA and my Juli@ card), I always thought sounded dark.  And the volume and balance were never right!  I think this one must not have had the P&G pot some think it did.

Anyway, I have loved digital controls from then on, starting with the Classe CP-35, which I strongly and strangely preferred to the Aragon (but unarguably the digital controls had perfect balance tracking, much appreciated), on to the digital controls of the Tact, which I have used since 2006 mostly as just a digital preamp.  That is still what I feel is one of the most useful things (a big digital selector with volume trim and more), yet hardly anyone makes one.  It has 0.1dB volume settings, balance (which is never needed), and polarity, though it's a bit inconvenient to switch polarity back and forth since they added the parametric EQ feature, which I can't use unless I'm using Tact Room Correction, which I mostly haven't because I don't like it messing with the highs, a behavior they never fixed in my 2.0 version.

Anyway, it could be better, and I need to think about making something like what is needed, but the 2 Tact units I got during the Tact 2.0 factory blowout have served me very well, almost irreplaceably so, for 10 years.

Anyway, LPs are far worse than that.  But most of the time, with CDs, you just put one on and it sounds perfect, or if not exactly perfect, it would not be provably discernible to anyone.

But, again, that's assuming there isn't lots and lots of filling-in going on.  And you never know, because there is no indicator on all but a tiny proportion of players, and even on those there isn't such a convenience as a counter.  Best of all would be a counter not just for CU but for C1 and C2 also, though possibly most systems don't make that information available.  Along the same lines that audiophiles want a bigger power supply than absolutely necessary, or think having a larger motor drive the CD spindle is better (that's far out, but of course part of the appeal of getting a PD-75), knowing about C1 and C2 would help you see patterns and trends, and know whether a cleaning might be good.  And if they even knew the numbers, audiophiles would keep those numbers down.

Instead, in at least some cases, they applied CD tweaks which caused more errors or even breakdowns.

But there you go, does the tweak manufacturer want you to know that?  Certainly not, and he is going to badmouth the manufacturer that lets it happen, so a conspiracy of silence ensues.

This company makes a test disc that permits testing of CD player performance in correcting errors.

A good resource on SPDIF is:


I got my Pioneer PD-75, hooked it up, and it sounded so good I'm not sure I care about the truth anymore.  This is the best sound source I've ever had.  On CD, it totally blows the Sony DVP-9000ES away, perhaps no surprise.  On hybrid SACDs it depends: where greater low-end authority helps, such as in rock or organ music that's not too abrasive, the Pioneer still wins.  The Sony characteristically has an ethereal sound, which I consider the sonic signature of 1-bit systems, including DSD.  Yin.  The Pioneer (which uses bitstream PCM and minimal filtering) has a Yang sound, perhaps even more than slightly macho at times, reveling in shaking the walls.  It does that, and is still the #2 sweetest-sounding source in my collection, following the Sony.  It can create sounds I've never heard before that still sound correct.

Tuesday, September 27, 2016


Jitter is probably blamed for bad sound more than it deserves, both when it isn't to blame and when there isn't actually bad sound.  Jitter as high as 10nS has been shown to be inaudible.

Anyway, I've always wanted to measure it, and now, with my Sencore DA795, I can.

Unfortunately the Sencore doesn't show numerical values below about 200pS, it just says "Low".  Most of what I'm reporting now is just from watching the logarithmic scale meter, which has markings at 100pS and 200pS.  So I'm reporting highly approximate values.

For the Pioneer PD-75, I see about 150-180pS jitter when playing a CD, which falls to 110pS when stopped.  Looks like some room for improvement in the power supply.  However even 180pS is typical for the best equipment ever made.

For my newest Sonos Connect, I measure 180-220pS jitter, clearly a bit higher than the PD-75 and more variable too, measured at coax spdif output.

(BTW, these are the RMS measurements.  The Peak measurements are over 500pS and very unstable, but the Pioneer is about 40pS better there too.  Also these are all unweighted measurements.  I have no idea how the Sencore measurements would differ from those using J-Test for example, but for sure the Sencore is completely insensitive to jitter in the recording.  It is only looking at the clock that can be recovered from the data, not the data itself.  I suspect John Atkinson uses a J-Test derived weighted peak number.  It's beginning to look like my unweighted RMS numbers are lower, but only about 50%.)
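To pin down what the RMS vs. peak distinction amounts to, here is a sketch of how the two numbers could be computed from measured clock-edge times.  This is an illustration, not the Sencore's actual algorithm, and `jitter_stats` is my own name:

```python
import math

# Given measured clock-edge times, jitter is the deviation of each
# period from the ideal period.  RMS averages the squared deviations;
# peak takes the single worst one, which is why peak readings run
# higher and are less stable.
def jitter_stats(edge_times_s, ideal_period_s):
    devs = [(t1 - t0) - ideal_period_s
            for t0, t1 in zip(edge_times_s, edge_times_s[1:])]
    rms = math.sqrt(sum(d * d for d in devs) / len(devs))
    peak = max(abs(d) for d in devs)
    return rms, peak
```

With three periods of 1.0s, 1.001s, and 0.999s against an ideal of 1.0s, peak is 1ms while RMS is about 0.82ms, showing how a single outlier dominates the peak number but gets diluted in the RMS.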

In both cases I used 7 foot Belden RG6 with Canare RCA's as interconnect to the meter.

In their original report, Stereophile reported 388pS jitter for the ZP80, which John Atkinson characterized as Excellent.  I thought that they showed 220pS in a later test, but I have been unable to find that.  In the 388pS test he's using J-Test on the analog output, you would think that to be comparable to the digital outputs.

I'm surprised that a heavy old mechanical device like the PD-75 actually beats Sonos on jitter.  But both numbers are excellent actually.  I would begin to feel worried with jitter above 500pS.  Some of the very best audio devices have jitter around 150pS, and often higher.  I'm not sure if I've ever seen jitter below 100pS for an audio device as such, though external clocks can have jitter spec down to 1pS or below.

Well, even assuming my meter to be perfect, there are 3 sources of jitter measured in the timing of the digital output:

1) The clock jitter of the Pioneer, and everything that may ultimately affect the clocking out of data at the SPDIF output.  Audiophiles obsess over things like lack of power supply regulation for the transport section (which may be on a different transformer in this unit anyway) affecting the regulation of the clock.  And so on.

2) The cable losses and smearing (I proved this to be a negligible factor by seeing no difference with a far longer cable connected).

3) The jitter made inevitable by the coding of the spdif signal and practical transmitters and receivers.  I do not believe the clock can be perfectly recovered due to the nature of SPDIF data.  That is why the meter specified range only goes to 150pS with SPDIF input.  It goes to 35pS with a pure clock signal at the clock input.

3a) This varies depending on the actual data.  The Dunn test is the worst case for 44.1/16; most music is less.  Continuous silence would probably produce the least jitter, and for that reason I may record a test case.  I love being able to make my own test recordings, btw.  Is this possible at low expense with DSD?  I can't make an SACD for the 9000ES, but I could apparently make a test disc of DSD files, or something like that, for my BDP-95.

3b) The actual properties of the Pioneer SPDIF transmitter circuit may also be a factor, say compared to a "perfect" implementation, perhaps an over-implementation.  I already set the baseline at "practical," and presumably the Pioneer isn't perfectly at that level; it may not be as good as something could practically get (say, without an Apollo Project, which would be impractical here) in impedance or risetime, for example.  I imagine some obsess over such things (and casual observers, seeing me write about them, may think I obsess over them too; I merely point them out, and unlike, say, Lampizator, I'm only unsure of how important they are in the overall picture).  But I'd expect a Pioneer Reference player like the PD-75 to already do a pretty good job of powering the SPDIF output, and so it seems.  Then again, who knows, it might be 2pS better with a bigger FET running ten times more current at the output, on its own separate transformer of course, blah blah; modifiers believe they can make everything better.

What is that "minimum" jitter for SPDIF?  I need to read Dunn's articles.
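For reference while reading Dunn: as I understand it, the J-Test signal is an Fs/4 tone (exactly representable as the 4-sample pattern 0, +A, 0, -A) plus a square wave at Fs/192 that toggles the LSB, the toggle being what exercises the worst-case data-dependent jitter of the SPDIF code.  A sketch, where the amplitude fraction is my assumption rather than Dunn's exact level, and `j_test` is a hypothetical name:

```python
# Generate n samples of a J-Test-style signal: an Fs/4 tone with the
# LSB overwritten by a square wave of period lsb_period samples.
def j_test(n, bits=16, lsb_period=192, amp_frac=0.5):
    amp = int(((1 << (bits - 1)) - 1) * amp_frac)   # 16383 for 16 bits
    tone = [0, amp, 0, -amp]                         # exact Fs/4 pattern
    out = []
    for i in range(n):
        lsb = (i // (lsb_period // 2)) % 2           # toggles every 96 samples
        out.append((tone[i % 4] & ~1) | lsb)         # overwrite the LSB
    return out
```

The test file this produces could then be burned or streamed to put a worst-case data pattern through the SPDIF transmitter.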

Friday, September 23, 2016

More on DSD vs PCM

Here is the most helpful and informative page I have ever read about DSD, by Charles Hansen of Ayre Acoustics.  DSD ("Direct Stream Digital") is simply a meaningless trademark term which Sony has in this case defined as 1-bit delta sigma modulation at 2.8224MHz with 7th-order noise shaping.  They own the trademark, so they can say it means anything they like.  Any time you deviate from 1-bit, as is essential for any kind of mixing or mastering or even level setting, you are forced out of the delta sigma domain into the PCM domain.  The Sonoma so-called DSD workstation is really a PCM workstation that happens to operate at 2.8224MHz with 8-bit data.  DSD and PCM are interpreted by the same delta sigma DACs, just with different digital filter algorithms.  The difference in filters explains everything people hear; it has to, because there are no other differences.  Any superiority comes from the loss of the need for brick wall filters in high speed systems.  Now that we have 4x PCM, we don't need brick wall filters in PCM any more either, so we can achieve the same benefits with PCM, which is far easier to work with, but few have ventured into this new landscape yet (except Ayre of course, whose QA-9 A/D converter has no brick wall filter; instead it uses a "moving average" filter which has no time smear or ringing).  The only "pure" DSD recordings are ones that were all analog then converted to DSD, or live performances, and there are just a very small number of those.  [And btw, Charles Hansen is the greatest!!!  After reading this hugely informative yet no-nonsense post, I'm a fan.]
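Since so much of this turns on delta sigma being a feedback loop, here is a minimal sketch of the principle.  This is a toy first-order loop only (real DSD uses roughly 7th-order noise shaping at 2.8224MHz), and `delta_sigma_1bit` is my own name:

```python
# Minimal first-order 1-bit delta sigma modulator.  The integrator
# accumulates the error between the input and the fed-back 1-bit
# output; the quantizer emits +1/-1.  The feedback pushes the
# quantization error up in frequency, so the running average of the
# pulse stream tracks the input.
def delta_sigma_1bit(samples):
    integ = 0.0
    fb = 0.0
    out = []
    for x in samples:          # samples assumed in [-1, 1]
        integ += x - fb        # subtract the previous pulse (feedback)
        bit = 1.0 if integ >= 0 else -1.0
        out.append(bit)
        fb = bit               # the 1-bit DAC inside the loop
    return out
```

Feed it a steady 0.5 and the stream averages 0.5 by duty cycle alone, which is the "50% noise" picture above: the signal lives in the density of the pulses, and the holes between them.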

Here's a long discussion at Steve Hoffman, which features fans on both sides, and reasonable civility.

Lotsa people think DSD and HighRes PCM are pretty equivalent.  I think that's a reasonable view, though one I don't exactly agree with (I still favor PCM), with the equivalence being DSD is about the same as 88.2/20 or 96/20.

Of course, DSD fanboys have always claimed that DSD is some special magic, that NO PCM could equal.  (Don't tell them about the feedback that delta sigma systems rely on.  That might collapse the magic.)

Many if not most think 44.1kHz/16-bit "perfect sound forever" is still perfectly fine, and I know a number of people who think CD-quality PCM is superior to DSD, especially in the highs, with a common thread that the highs in DSD sound fake, for which there is a tiny bit of technical justification (more noise and noise shaping is going on, while there is less brick-wall filtering, so you could also take this the other way).

Famously much SACD and DSD content is made from PCM sources, often defeating one of the long running claims (which probably most serious audio engineers would regard as hype) that DSD bypasses several phases of processing used in PCM.

Although unleashing DSD onto the world, Sony supported it poorly according to many industry insiders.  AFAIK, and in contrast to many earlier format releases, Sony did not sell any DSD mastering equipment.  Instead, they gave it away to specific "partners."  If you were not one of the handful of chosen, you were out of luck; you would have to do your mastering in high-rez PCM and convert to DSD.  Even the equipment Sony gave away may not have been fully featured: apparently Sony designed a fully featured DSD mixing system with a European partner, then never actually bothered to make it.  It's not impossible to make such a thing, and I believe that there are now, 18 years after launch, fully DSD mixing and EQ systems available from companies other than Sony.

Speaking of how DSD allegedly bypasses the decimation and integration phases (the hype which some believe as the magic of DSD that makes it inherently better), there are a bunch of problems with the argument (in addition to the one that PCM processing is nearly always used anyway).  Even if you had pure DSD mastering and playback (almost never the case) the claim would be inaccurate because:

1) First, it assumes that 1-bit DAC's are being used at the DSD sampling rate.  This is almost never the case anymore.  Almost all DAC's used for DSD now are delta sigma DAC's.  It's still considered DSD if you use multi-bit delta sigma DAC's at the DSD frequency or higher, which requires a lot of complex mathematics to do optimally.

The last 1-bit DAC's were used in devices like my 2001 DVP-9000ES.  Those were Sony DACs which actually operated at 70MHz if I understand correctly, which would be something like 24x DSD.  Sony was doing some kind of extreme upsampling to increase dynamic range.  So it was never as simple as the cute block diagram Sony used to make DSD look simpler.

(Interestingly enough, it does not appear that the spec sheets for the Sony converter chips used in the DVP-9000ES and back to the CDP-707ES have ever been made public.  But Sony did advertise these as 70MHz 1-bit converters.  I wonder if Sony made these at the long-closed Sony Semiconductor factory in San Antonio, Texas.  Sony subsequently found it cheaper to buy off-the-shelf multibit sigma delta converters from the likes of Burr Brown.)

2) Second, it assumes that 1-bit sigma delta ADC's are used.  I haven't found much discussion about this, but I believe that in the early days of digital audio, sigma delta ADC's were considered too noisy.  Noise shaping is required when you use a sigma delta ADC, along with very high oversampling.  I believe some if not all of the earliest ADC's were actually SAR (successive approximation) converters, which is still one of the most widely used approaches for analog to digital conversion.  Even now, when sigma delta ADC's are used, they are multi-bit converters with high oversampling.
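To make the contrast with sigma delta concrete, successive approximation is a binary search against a comparator, one bit per step, with no feedback loop and no noise shaping.  A sketch assuming an ideal comparator and DAC; `sar_adc` is a hypothetical name:

```python
# SAR conversion: try setting each bit from MSB down; keep it if the
# internal DAC's trial voltage is still at or below the input.
def sar_adc(vin, vref=1.0, bits=8):
    """Convert 0 <= vin < vref to an unsigned integer code."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        # the internal DAC outputs trial/2^bits * vref; the comparator
        # keeps the bit if the input is at or above that level
        if vin >= (trial / (1 << bits)) * vref:
            code = trial
    return code
```

Each conversion is self-contained: the result for one sample depends on nothing before or after it, unlike a sigma delta stream.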

Even if Delta Sigma ADC's are used, there's a lot more going on than you might think.  Quoting from the above linked article:

These are usually very-high-order sigma-delta modulators (for example, 4th-order or higher), incorporating a multibit ADC and multibit feedback DAC. 

Sigma Delta systems are inherently approximate (aka noisy) systems which almost always require feedback to operate correctly.  This is something NEVER mentioned.  This is one reason why I've personally moved back to PCM as much as possible.  PCM does not require feedback to work correctly.  The dirty word "feedback" would destroy the claimed "magic" of simplicity.

Of course it is also because of feedback that delta sigma systems can get near-perfect linearity without requiring the extensive trimming that PCM systems do.  Either you do the fine tuning beforehand, when you can only guess, or you do it after the fact, via feedback, which can always be perfect.

Now this also is probably a non-issue.  While the feedback used in Delta Sigma would smear the highs, Delta Sigma ADC's and DAC's generally operate at such high frequencies that high frequency information might even be better preserved as compared with slower PCM systems.  It's actually quite hard to know without extensive analysis and/or testing which system preserves the high frequency integrity better.

However, one can also just look at the measured performance.  DSD does quite well compared to 16 bit systems in the midrange, but has much more noise in the upper octave 10-20kHz.  That greater high frequency noise means that by definition high frequency information is NOT being preserved as well.  OTOH, there is ultimately response to an even higher frequency, and there may be less phase shift in the upper audible octave.  So it looks like a toss up.

Listening Tests

The best published investigation of audible differences between PCM and DSD was done in Germany using some of the very best megabuck PCM and DSD equipment.  (IIRC the PCM was either 88.2kHz/20 or 96kHz/20, so as to have comparable bandwidth and bit depth.)  Monitoring was done with Stax headphones (you can't get more transparent than that).  And the result was: there is no audible difference!  Not only was the null hypothesis not rejected but most identification was no better than random for nearly all people.

I believe this is basically correct.  DSD is simply an inefficient high resolution system which takes more bits to achieve 88.2kHz/20bit fidelity than PCM does, and PCM is more easily worked with in many ways, including incrementally increasing fidelity with just a few more bits.  The very idea of 2xDSD and 8xDSD is monstrous--a monstrous waste of bits.
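The "inefficient" claim is easy to put numbers on; a back-of-envelope, per-channel raw bit-rate comparison (framing and error-correction overhead ignored):

```python
# Raw bits per second, per channel.
dsd64_bps = 64 * 44_100 * 1   # 1-bit stream at 64x the CD rate = 2,822,400
pcm_bps = 88_200 * 20         # 88.2 kHz x 20 bits = 1,764,000

# DSD64 spends about 60% more raw bits than the hirez PCM it is often
# judged roughly equivalent to.
ratio = dsd64_bps / pcm_bps   # 1.6
```

And the "monstrous" part: 2xDSD and 8xDSD multiply that raw rate again, while PCM could buy the same headroom with just a few more bits per sample.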

I've argued that DSD operates a bit as if it has an infinitely varying digital filter.  Varying the digital filters in 44.1/16 can make a slight audible difference (or larger if you throw out the book with NOS, which is not high fidelity IMO).  Once you get to modern apodizing reconstruction filters using ordinary PCM, it's not clear from published research that better can be achieved or is necessary, but an end-to-end apodizing system like MQA promises to be would be a step better.  That is, a step better than DSD-in-principle.

To DSD or not to DSD

If only Sony had marketed DSD on a fairly straightforward technical basis, I might have signed on in 1999 and never looked back.  Forget the simplicity crapola; the real technical advantage of DSD compared to plain vanilla PCM is the superb impulse response.

[Update: after the nth revision of this post, I discovered that Charles Hansen had already debunked the above graph in great detail.  It's a pack of lies from beginning to end!  It's no wonder that Sony didn't plaster this on everything, more people might have called them out.  This is not to say that you couldn't come up with a relatively more honest graph to make the point that DSD has better impulse response that the usual 44.1kHz plus brick wall filtering usually used, but in that case there would be competing hirez PCM systems that could do as well.  The way the graph is shown no real systems can produce those results at all.  BTW, I'm now a little bit concerned that the MQA impulse response graph shown in TAS is also inaccurate, though in showing more rather than less time smearing with standard PCM.]

Now, PCM defenders will argue, and they have got to be at least mostly correct, that this difference, which is caused by high frequency phase response in the anti-aliasing and reconstruction filters, is not audible.  But it sure looks like it would be important.

I had hunted all around for a clear picture like the above, and found it posted by Hiro, a senior member of ComputerAudiophile, at this 64 page mega argument about DSD, which looks to be one of the better discussions on the topic.

Hiro starts by calling most of John Siau's arguments wrong or misleading, then makes a pretty interesting (and misleading) argument himself.  He claims that Archimago measured the same noise from DSD64 as from 192/24 on the Teac interface.  So I had to go and read Archimago's blog.

Hiro is plain WRONG!

In this page of measurements, Archimago shows the astoundingly high noise level of plain DSD in a scope trace of a 1kHz sine wave.  Then, later on, in the 6th graphic on the page he shows the noise level of the Teac under several conditions, DSD with FIR1-4, and 24/192 with sharp filter.  These could hardly be more different above 20k.  The DSD with all 4 possible FIR filters rises from -145dB to about -75dB around 90kHz, reaching -100dB at 40kHz.  Meanwhile, 24/192 rises from -145dB at 20khz to -115dB at 90kHz, reaching only -140dB at 40kHz.  At 40kHz, the point where Hiro claimed that the noise levels were still the same, there is actually a 40dB difference.

Now, in the previous page of measurements, Archimago shows that 24/192 is noisier than 24/96 in the Teac and this is typical of all DAC's (and part of the reason why Dan Lavry and some others recommend 24/96 instead of 24/192 for the highest fidelity).  Perhaps it is not surprising that Hiro takes the worst case for PCM noise, 24/192 as his basis of comparison with DSD.  But even there he is wrong, as I just reported.

Even if you take the worst case for PCM noise, 24/192, and then combine it with No Oversampling (NOS), which as I always argue isn't really high fidelity or standard PCM, you do get the noise to rise a bit closer to DSD.  But the DSD noise is still higher.  Archimago doesn't show the NOS and DSD noise spectra on the same graph, or even the same page, but I can compare them and they are still quite different.  At 40kHz, 24/192 with NOS reaches -113dB.  Meanwhile, DSD64 has reached -100dB, which is 13dB worse.

Now Hiro was wrong in what he said.  But perhaps he merely misspoke.  Perhaps he meant to say that DSD128 is comparable to 24/192 in noise level.  And for that comparison, Archimago does show both on one graph.  At 45kHz, actually pretty much everywhere, the noise level for 24/192 is lower, but just barely until 50kHz and above.  At 40kHz it breaks down like this:

24/192/sharp   -139dB
DSD/FIR2       -137dB
DSD/FIR1       -131dB

I still wouldn't say "they are the same below 45kHz," but close.  But I'm not even sure why we are doing THIS comparison.  Well, Hiro also mentions that DSD64 can very simply be upsampled to DSD128.  Now here we have an interesting case, however.  Upsampling will push the digital noise upwards.  But it seems to me very much unlike "noise shaping" in one critical way: as a purely one-way process, upsampling cannot possibly restore lost information.  The information loss from the original DSD64 encoding cannot be undone.  So while the noise will be reduced, the lost information cannot be restored, and I'd predict a kind of dark sound, the same thing you get in clunkier fashion with noise gating.

Meanwhile, I would have been (and was) rightfully turned off by a large number of things about DSD right from the start:

1) DSD recorders have been almost unobtainable (there were no consumer DSD recorders until 2007 or so, right now one is available for $999).
2) SACD discs are impossible for most people to make, they require a manufactured watermark (some old machines will accept a fake DVD/SACD, and the newest ones will read DSD files).
3) DSD does not lend itself to simple DSP for crossover and room correction functions--so conversion to and from PCM is required anyway, so the best approach is high rez PCM end-to-end.

I'm less bothered by (3) than I was years ago, for an interesting reason.  The reason is that conversion to and from PCM is extremely transparent.  It's so transparent that I find I often prefer taking the analog outputs of digital devices and resampling to digital at 24/96 than just letting the 44.1/16 pass through all the way.  So, if I'm fine with resampling analog to PCM, why not DSD to PCM, or even DSD to Analog to PCM?  I see now I can fit DSD into my system as a perfectly fine music delivery system, though not as a final digital conversion approach.

Of course, as many have pointed out, SACD was an attempt to impose DRM on an industry.  If Sony could have led everyone to abandon PCM, we'd be locked into their new system, with a DRM scheme that still has not been broken.  Of course, we know in retrospect this was never going to happen.

But from the beginning, there was no consumer recording of the new formats, and, very curiously, the first generation SACD machines had problems dealing with user-recorded media that had already become well established by that time.  As if to send a message to the industry.  Well it was too late.

Now I just said that PCM conversion is very transparent, as was demonstrated by the Meyer/Moran experiments in 2006: up to 10 generations of PCM conversion/deconversion was still found to be audibly transparent.  Very little noise is added; however, there is an increasing amount of high frequency phase shift.  This doesn't look good in measurements but has never been proven to be audible.

Meanwhile, DSD is not amenable to multiple generations because of high frequency noise that keeps on growing until you get overloading in the highs.

However, DSD128 is looking like it might have reasonably low noise levels in the near supersonic, and still of course give you the natural (noise shaping aka feedback driven) impulse response.  DSD64 is so noisy you can easily see the HF noise on high bandwidth oscilloscope traces of sine waves, as Archimago shows.  DSD128 looks just like analog on the scope.

I'm not sure we've seen the end of this, since now filter designers are showing how perfect impulse response can be obtained with PCM, slightly higher sampling rates, and end-to-end mathematical apodizing.  This retains the advantages of PCM in relative compactness and mathematical tractability--it can easily be worked with in DSP.

DSD confounds mathematics not because of sigma delta itself--that's the trivial part that had me fooled for the longest time.  Equally fundamental to DSD is noise shaping, based on continuous high-level feedback.  This means, in effect, that the pulses are NOT equal.  Each pulse sits in the context of everything before and after it, and that context actually determines what it means.  This context dependence makes the mathematics effectively infinite.  You can't just "add things up" to make a mixer, etc.
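To make the context dependence concrete, here is a minimal first-order 1-bit sigma-delta loop in Python.  This is an illustrative sketch of the general technique, not Sony's or anyone's actual modulator (real DSD uses much higher-order shaping).  The integrator carries the accumulated error forward, so each output pulse depends on the entire history of the stream, yet the local average of the pulses still tracks the input:

```python
def sigma_delta_1bit(samples):
    """Minimal first-order 1-bit sigma-delta modulator (illustrative only).
    The integrator state accumulates the running error between input and
    fed-back output, so every output pulse depends on all pulses before it."""
    integrator = 0.0
    feedback = -1.0
    out = []
    for x in samples:                # input assumed in [-1, +1]
        integrator += x - feedback   # accumulate input minus last output
        feedback = 1.0 if integrator >= 0.0 else -1.0
        out.append(feedback)
    return out

# A constant input of 0.5 yields a pulse stream whose average tracks 0.5:
bits = sigma_delta_1bit([0.5] * 1000)
print(abs(sum(bits) / len(bits) - 0.5) < 0.01)  # True
```

This is why you can't "just add things up": the meaning of any individual pulse is only defined relative to the integrator state, i.e. relative to every pulse before it.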

Meanwhile, PCM is reborn every few years with some interesting innovations, though I consider apodizing important but little else.  IMO, by the time we get to CD players like the Pioneer PD-75 around 1991 we're in the modern era of high sound quality, thanks to high linearity, low jitter, high stability, and flat but closer-to-linear-phase digital filters: plain old 44.1/16 done fairly well is incredibly good!  For the longest time, the best published research in JAES was that it was perfectly transparent.  It may never have been perfectly transparent, but it's obviously quite close.  It has only barely been established in the AES literature that it isn't perfectly transparent--that significantly improved apodizing can be slightly audibly better, demonstrated in DBT (published by Meridian).  This has not been scientifically established for DSD; in fact the reverse has been demonstrated in the most recent and best-done experiment--it is indistinguishable from comparable PCM.  Most talk to the contrary has not been well founded.

DSD stays alive simply by slowly making what used to be impossible less so.  And I'm happy to play with it as I can without huge expense.  I will never have full DSD end-to-end because that would require me to give up DSP.  But I can accept DSD inputs, converted through analog resampling to 96/24.

Which, in a way, is not surprising.  Recall that DSD was originally invented not as a mixing or mastering format, but as an archival format.  Now I'm not sure it's especially good at that either, because of the noise, but for an archival format there isn't much concern regarding mixing and mastering, or even distribution and playback.  Also, DSD56 was invented specifically for the mastering of 44.1 and 48kHz, the two popular rates of the time, but not for the high rez PCM formats of today.

Now certainly someone as astute as Ted Smith would understand the practical and mathematical difficulties of DSD.  Nevertheless, he built a DSD DAC.  Maybe he has some answers to the other problems too.  I find his progression from first time electronic builder to advanced DAC builder unbelievable.  In this story line, it all happens in a few months in 2010, while he's apparently also listening to Johnny Cash.

Archimago does usefully propose combining DSD128 with lossless compression.  If it can indeed be compressed to the same size as 192/24, perhaps it's not that bad.  But we have no reason to believe this complexity is needed.  As far as we know now, 24/96 PCM is as high definition as is needed, and it is far easier to work with than DSD128.
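The raw size comparison is simple arithmetic, worth writing out (per channel, before any lossless compression):

```python
# Raw per-channel bit rates, before any lossless compression:
dsd128_bps = 1 * 128 * 44_100    # 1 bit at 128 x 44.1 kHz = 5,644,800 bits/s
pcm_bps = 24 * 192_000           # 24 bits at 192 kHz      = 4,608,000 bits/s
print(dsd128_bps, pcm_bps)
print(dsd128_bps / pcm_bps)      # ~1.225: DSD128 starts out ~23% larger
```

So DSD128 begins at a modest size disadvantage against 24/192 even before compression efficiency is considered.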

Tuesday, September 20, 2016

Clocks, clocks

We can tell that the Pioneer PD-75 and PD-95 clocks are about the same by reading the modification instructions from Octave, who make clock upgrades.  The replacement instructions regarding the parts changes are identical for both players.

So the PD-75 has the same clock circuit as the PD-95.  And BTW it's not the cheapest kind of circuit, it's the step-up kind.  In the cheapest circuit a crystal is simply connected to the relevant chip with suitable passive parts.  In the step-up circuit, the crystal and passive parts are buffered--in the PD-75/95 by two layers of opamp buffers: an initial buffer which then drives 3 separate buffers to send clocks to different parts of the player.

That's not to say there aren't more sophisticated designs than this first step up; there are a bunch.

But I think it's clear that these were intended to be in the elite upper category of CD clock performance (there were two categories established by the Red Book).  So we could expect jitter of perhaps less than 300pS, allegedly 1/30 or less of what would be audible.
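As a sanity check on that 300pS figure: the slope of a full-scale sine A·sin(2πft) peaks at 2πfA, so a timing error of t_j seconds misplaces a sample by at most 2πf·t_j of full scale.  A quick back-of-envelope sketch (my own, not a Red Book number):

```python
import math

def jitter_error_db(freq_hz, jitter_s):
    """Worst-case sample error (dB re: full-scale sine) from a clock timing
    error of jitter_s seconds at signal frequency freq_hz.  The waveform's
    maximum slope is 2*pi*f per unit amplitude, so the worst-case amplitude
    error is 2*pi*f*t_j."""
    return 20 * math.log10(2 * math.pi * freq_hz * jitter_s)

# 300 ps of jitter against a full-scale 20 kHz sine:
print(round(jitter_error_db(20_000, 300e-12), 1))  # -88.5
```

Worst-case error around -88dB at the top of the audio band is consistent with the claim that 300pS sits comfortably below audibility, and the error shrinks further at lower signal frequencies.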

The PD-95 might benefit even in jitter performance simply by having a better power supply.  That has a huge effect on simple oscillator clocks.  Good clocks are designed to be immune, though nothing sharing the same electronics is ever entirely immune from influence.

Meanwhile it's bugging me that nobody talks about simply replacing the Sonos Connect (formerly known as Zoneplayer ZP80 and ZP90) clock the way one might do for a CD player.  (BTW, in some accounts the clock is essentially the only thing that ultimately determines player-added jitter at the output terminals.  The clock controls the clocking-out of bits, and it doesn't have to vary that rate one iota, because servos earlier in the system keep buffers filled with enough data.  If not, there's a skip, which rarely happens anymore with good players and discs.  In other circles, everything counts.)  Instead, it seems the major recommended modification now adds a whole circuit board which does reclocking on the output.  I would rather avoid reclocking because it only smooths the underlying jitter.  It's not really eliminating the variation, because it can't; it's shifting it to a lower frequency, so jitter becomes very low frequency wow.  And not only reclocking, but ASRC to 96kHz, or your frequency of choice.  That might actually be good for me, but I think I'd rather do analog resampling than ASRC.  And I've had a longstanding bias against ASRC as simply "burying the jitter in the data, where it cannot be removed."  So why not just replace the clock?  The blurbs all say "multiple lousy clocks."  I'd like to see that for myself; I still think fixing the actual clock is better than any band-aid ASRC.  If it's true there are multiple clocks, why not replace the one that counts most, or all of them, or something?

I suspect that the difference between the existing PD-75 clock and something better would not be audible to anyone in an ABX DBT.  But of course, I gotta have the best clock anyway, just to be absolutely sure.

I've asked Kingwa if he still makes his Audio G_D jclock.

Monday, September 19, 2016

Replacement for Sonos

Something like this would be a start, the RoonReady Sonore Sonicorbiter SE.

Article also mentions a bunch of network audio frameworks:

HQ Player NAA

Sonos recently has made it hard to select 0dB on a Connect so you can properly adjust the level on the actual system.  (Sonos is more and more thinking of connecting to all their own devices, rather than serving as a transport layer for music for all devices.  A slippery slope going the wrong way IMO.)

Sonos has never supported Hi Rez digital, or DSD, and probably never will.

Sonos gives no ability to adjust buffering size (and conversely latency) when sharing analog sources over the network.  It seems to be limited due to small buffering to a maximum of 5 uncompressed analog source connections on a Sonos network.

The one good thing that Sonos does that nobody else does is allow analog connections across the network in the first place.  I cannot switch to any other system until it provides similar or superior analog source connectivity.

You might notice this little Sonore device has no analog inputs.

Article comments also mention some other devices:

Raspberry Pi Audio

In this article about a modded Connect, yet more things are listed:

Denon's Heos Link
AURALiC Aries/Mini

BTW, I don't think re-clocking and upsampling Sonos output to 96kHz is necessarily the way to go.

Reclocking in general is a band-aid; all that re-clocking can ever do is low pass filter the timing variations.

What is needed is much simpler: add a word clock input so external clock can be used!  (OR, bolt high quality clock inside.)

Sunday, September 18, 2016


I'm very much enjoying the sound of my new DVP-9000ES playing SACDs.  I'm feeling that, indeed, Sony did something special, at least in these early SACD players, that I have not heard in non-Sony branded players.

Something different anyway, perhaps not all to the good I'm wondering.  Nowadays even Sony uses highly oversampled multibit DAC chips, sometimes even from the likes of Burr Brown, to implement SACD, just like everyone else.  Back when the first SACD players were introduced, Sony was using their Pulse Converter chips, CXA8042AS.  Those pulse converter chips are used in both the SCD-1/777 and the DVP-9000ES.  Outside of that, and the use of one OPA213, the output circuits are quite different between the super high end and merely upscale, with the super high end showing far more additional stuff, and discrete circuits in the output.

A leading audio engineer, Stanley Lipshitz, along with several others, revealed inherent faults of 1-bit delta sigma in AES papers in 2000, just after the public release of the SCD-1.  The principal fault with 1-bit delta sigma is that the background noise isn't gaussian, it's tonal, with idle tones.

Sony then denied, to John Atkinson, that they were using 1-bit conversion--in response to a paper by David Rich, published (of all places) in Stereophile, describing the concerns of Lipshitz and others.

I'd always wondered if that denial was with regard to future SACD machines, not past ones.  The past machinery, as in all the machines I just mentioned, still for sale but designed years earlier, might have used 1-bit, but that was now water under the bridge.

I don't know enough about these CXA8042AS chips to actually confirm my version of this: that Sony abandoned true 1-bit when it became apparent that it didn't scale down well, or they hit a brick wall in making further improvements, or were covering up faults all along, or were embarrassed by Lipshitz, or something like that.  They had been able to achieve brilliant sound in the 3 early players by perhaps mixing things up a bit (especially in the SCD-1/777ES) so that the 1-bit fundamental character was somewhat obscured.  I think they may have been further oversampling the 1-bit; IIRC 70MHz was claimed for an earlier Sony ES CD player.  But in this regard, the 9000ES is actually pretty straightforward in the analog section, aka simple, and in that regard very different from the SCD-1/777ES, but nevertheless (or alternately) similarly good sounding.

The alternative is that they never did anything like true 1-bit all along, but that seems to be rewriting history.  1-bit arrived on the scene, I think, starting with Pioneer machines, and perhaps others, in the late 1980's with great fanfare.  One trademark name was Bitstream, another was MASH.  It may have been Sony, who had been plodding along with Philips chips, that was the follower, coming up with their own 1-bit chips to compete with Pioneer in pushing linearity beyond -80dB down to -100dB and maybe even -110dB and beyond.  So the 1-bit race kept on during the 90's, with Sony ultimately achieving within 1dB of the theoretically possible CD THD+N performance with their 997ES and possibly 707ES, and pushing linearity out to -110dB.  Pioneer missed this considerably in their PD-75, but I'm not sure about the higher end ones.

But already, by the time of, say, the PD-S06, Pioneer seems to have been questioning 1-bit.  That machine, and an increasing number of later machines, switched to using full width multibit PCM chips, especially the PCM56 and PCM63.  Or at least they were letting customers have a choice, as early as the PD-93--which used the PCM63--but I think Pioneer had completely abandoned 1-bit by the time of the Lipshitz papers.

But Sony had already tied its hands to 1-bit distribution (if not implementation) with SACD, and kept promoting 1-bit right up until the launch of SACD, which featured 1-bit based players, while quietly walking back the principal superiority of SACD from being 1-bit to being, well, whatever the marketing agency could think of.

But why all this fuss and bother if the first SACD players sounded just fine?  Well perhaps it's not just about making things good, it's about making them better and better year after year.  It was too hard to keep making 1-bit better (through various trickery and overengineering?).

Anyway, let me also say that DSD is perhaps not as bad as I've written heretofore.  While a pure 2.8MHz 1-bit delta sigma system would be horribly information-lossy, and correspondingly noisy, that isn't at all what DSD is.  DSD achieves low audible noise, AND low information inefficiency, by noise shaping.  Noise shaping is not just an add-on; it is fundamentally what makes SACD possible, and that may be what I was not thinking about properly.  It's improper perhaps to call DSD a delta sigma system; it's a delta sigma noiseshaping system.  Noiseshaping takes the place of the structuring imposed in PCM by coding.  That's the difference here: noiseshaping vs coding.

Noiseshaping compensates for the information loss below 20kHz of a comparable plain 1-bit delta sigma system, and somewhat more.  If it weren't for noiseshaping, the noise would be the clear sign of information loss.  But actually DSD achieves better noise performance than CD in the midband.  Therefore, it is transmitting more information there.
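A quick numerical illustration of that compensation (my own toy sketch; first-order error feedback rather than the high-order shaping real DSD uses): quantize a sine to 1 bit with and without the quantization error fed back, and compare how much error lands in the low bins of the spectrum:

```python
import cmath
import math

def bin_mag(x, k):
    """Magnitude of DFT bin k, normalized so a sine of amplitude A reads A/2."""
    n = len(x)
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / n)
                   for i, v in enumerate(x))) / n

n = 4096
sig = [0.5 * math.sin(2 * math.pi * 10 * i / n) for i in range(n)]

plain = [1.0 if v >= 0 else -1.0 for v in sig]  # bare 1-bit quantizer

shaped, err = [], 0.0
for v in sig:                       # first-order error feedback:
    u = v + err                     # add back the previous quantization error,
    q = 1.0 if u >= 0 else -1.0     # quantize to 1 bit,
    err = u - q                     # remember the new error for next sample
    shaped.append(q)

def lowband_err(y):
    """Total error power in bins 1..100 (the 'audible band' of this toy)."""
    return sum(bin_mag([a - b for a, b in zip(y, sig)], k) ** 2
               for k in range(1, 101))

le_plain = lowband_err(plain)
le_shaped = lowband_err(shaped)
print(le_shaped < le_plain / 10)    # True: the feedback pushes error upward
```

Same 1-bit pulses, same quantizer; the feedback alone moves the error out of the low band, which is the "compensation" in action.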

And here, I'm not sure how to make the full comparison.  One way would be to consider the full band 20-20k noise performance as showing the "information loss" of the system.  And the other way would be to consider something like the A-weighted noise performance as showing the "effective information loss" of the system.  By the latter measure for sure, and I'm uncertain about the former, DSD arguably achieves not an information loss compared to 16-bit 44.1kHz PCM, but an information increase.
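One crude way to put numbers on "effective information" is to convert a measured in-band SNR back into equivalent PCM bits via the standard relation SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit quantizer.  (The 120dB figure below is a nominal assumption for DSD's midband noise floor, not a measurement of mine.)

```python
import math

def effective_bits(snr_db):
    """Effective resolution implied by a measured SNR, from the standard
    ideal-quantizer relation SNR = 6.02*N + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

print(round(effective_bits(96), 1))   # 15.7 -- ideal 16-bit CD territory
print(round(effective_bits(120), 1))  # 19.6 -- an assumed DSD midband SNR
```

By this yardstick, a midband noise floor some 20dB below CD's would amount to roughly four extra effective bits there, which is the sense in which DSD transmits more information in the midband.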

There's no question still that DSD is an inefficient system in transmitting information, PCM is far more efficient.  But it could be said to be satisfactory in total information transmission, having reached CD quality and somewhat besting it, and having different character in the details, in ways that could be audibly pleasing.

Anyway, I'm thinking that the noise performance is a good indication of information performance, and one can just look at the noise curve of DSD and say it has more information than 44.1/16 in the midband.  I'm only not sure to what extent the increasing noise in the upper octave cancels that advantage; it might not cancel all of it.  But anyway, the noiseshaping component means I can't simply apply my previous simplistic calculations.  I have to account for the effect of the noise shaping in shifting available information bandwidth from super high frequencies down to useable ones (in the reverse direction to the noise).  So another term might be information shifting.

And I wouldn't worry about gaussian noise as much as tonal or correlated noise, and noise whose proportion increases or changes character at lower levels.

One way to explore "the details" in the sound would be to make a PCM or DSD recording at artificially low recorded level, say -60dB.  Then playback with lots of gain, and see which sounds better.  (This is actually more complicated than it sounds.  Should the gain used be PCM-like or analog, for example.)
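A toy version of that experiment in Python (an illustrative sketch; the 997Hz tone and the 0.4-LSB level, well below -60dB, are arbitrary choices of mine): quantize a tone far below one 16-bit step, with and without TPDF dither.  Undithered, the tone vanishes entirely; dithered, it survives as a noisy but real signal:

```python
import math
import random

def quantize(x_lsb, dither=False):
    """Round a sample (in LSB units) to the nearest quantizer step,
    optionally adding +/-1 LSB TPDF dither first."""
    if dither:
        x_lsb += random.random() - random.random()  # triangular PDF dither
    return float(round(x_lsb))

random.seed(0)
n = 48_000
sig = [0.4 * math.sin(2 * math.pi * 997 * i / n) for i in range(n)]  # 0.4 LSB peak

plain = [quantize(s) for s in sig]
dith = [quantize(s, dither=True) for s in sig]

print(any(plain))  # False: without dither the tone quantizes to pure silence

# With dither the tone survives: its correlation with the input is clearly nonzero.
num = sum(a * b for a, b in zip(sig, dith))
den = math.sqrt(sum(a * a for a in sig) * sum(b * b for b in dith))
print(num / den > 0.2)  # True
```

This is the "digital distortion grows proportionately at low levels" problem in its starkest form: undithered quantization doesn't just distort the low-level signal, it can erase it.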

Of course we don't really have to be cynical to see that the big win for Sony with SACD/DSD would have been IP, and the big industry selling point was DRM.  Meanwhile I don't doubt that the lack of openness was a big hindrance in the end.

I don't think it's possible for consumers to make SACDs the way they could make CD-Rs and CD-RWs, and that's part of the plan.  Interesting that the earliest SACD machines from Sony may not even have supported CD-R and/or CD-RW, though most CD machines of the time did.

The lack of CD-R/RW capability may have been a subtle hint to industry.  We own this, and we're not going to let consumers destroy your profit margins by permitting consumers to copy.

Anyway, WRT Sony, they visibly dropped any major concern for SACD and DSD sometime around 2006, when the Blu-ray vs HD DVD war was heating up.  SACD had not gone the right way, they might have figured, but lessons had been learned to win the next corporate battle, which Sony did.