Tuesday, December 17, 2024

Update on Kenwood 600T

I'm finally about to remove the Kenwood 600T from my kitchen audio system, where the tuner sits by the side of my chair and so gets most of my "tuning" across different stations.  I had only moved the 600T into the kitchen to test my antennas, but ended up keeping it there for 5 years because I never got around to testing the antennas until now.  It sounds fairly decent (I never have hopes THAT high for FM anymore anyway) when the Mode is switched to Filter.  That rolls off the highs just enough to remove the high frequency brightness and glare that is sadly characteristic of Kenwood tuners until they got into analog multiplier MPX circuits, as in the famous L-02T and the later, lesser known but still stellar unit I have, the L-1000T.

I had sold my KT-917 in the 1980's way too cheap because I never liked its sound.  Much later I hoped the 600T would be sonically nicer, but it wasn't.  They are somewhat similar in concept and build but differ in circuit details, with the KT-917 being much more famous.  I think the KT-917 also had a filter position, as well as a separate blend control, but I didn't pay attention to those things then.

Anyway, I don't think either tuner has the there-ness of many other high end tuners, because the primitive Pulse Count Detector used by both the 600T and KT-917 is fundamentally inadequate, something Kenwood themselves were aware of and fixed in the L-02T and later high end tuners with their PLL Detector, aka "linear" detector.  This should not be confused with PLL multiplex, which was already a common feature in the late 1970's.  Basically all tuners with chip MPX have PLL multiplex, and that is not even as big a deal as getting the detector right.  Before the PLL detectors, which had a slow rollout in the most high end models of the 1980's, most tuners used old fashioned methods such as ratio and quadrature detectors, which have known non-linearities.

Anyway, the slight lack of ultimate transparency in the Pulse Count Detector is of little consequence in practice, since few take FM broadcasting seriously enough to utilize the full potential of the carrier.  Later similar pulse counting detectors were made with much higher resolution (such as the Accuphase tuners of the 1990's and beyond).  Interesting that Accuphase itself was created by former Kenwood engineers in the mid 1970's, and their original tuners which competed with the top Kenwood Pulse Count Detector models used conventional detectors.  It's almost as if they left because they felt it wasn't any good, and they wanted to prove it.

But we'll see how I feel after I switch in some other tuners, notably a Pioneer which was one of their "greats" from the late 1970's, the Pioneer ST-9500 MkII.  I think it has a commonly used quadrature detector, well perfected by that point.  When I used the Kenwood L-1000T there was always an incredible transparency which no other tuner I'd tried possessed; it will be interesting to see how the 9500 MkII compares to it.

Anyway, I'd long denigrated the DX'ing ability of both the KT-917 and 600T.  I've described the 3 position IF bandwidth switch (which is labeled Narrow, Normal, and Wide) as "Wide, Wider, and Widest."

But now I can say with absolute certainty that while the Narrow on the 600T is nothing at all like the Super Narrow on the McIntosh MR 78, it still has some use.  I've found several stations, typically low power stations not that far away, that simply cannot be tuned in until you select the Narrow IF band.  (The narrow band lets the tuner be more selective, by better blocking stations on either side.  It doesn't help with capture issues, however, for which Wide may work better.)

Stations are still listenable nearly down to 10dBf (on the meter), which is pretty amazing.

An attached scope is the best way to adjust tuning and/or antenna.  The multipath meter is nearly useless.  The signal strength meter can be used for tuning only if you very carefully dial in the very highest peak; for quite a ways around the peak it changes very little, so you have to look for very tiny deflections of the meter.  That correlates exactly with the picture on the scope.  As expected, the now quite out-of-alignment center tune meter does not, though it seems like on every tuner I've ever used, aging has caused the correct tune position to drift to the far right of the central mark on the center tune meter.  This is usually just before the stereo light goes out from tuning too much higher.







Saturday, December 14, 2024

My two FM Antennas Compared

 


I have two outdoor FM antennas that were installed by electricians 8 years ago.*  One is normally connected to the Living Room FM tuner and the other is normally connected to my kitchen FM tuner.  For almost 20 years I've kept the Living Room tuner set to 88.3 KPAC for background music listening, and I use the kitchen tuner (conveniently next to my kitchen chair) for searching for and tuning in other stations, especially 90.1 KSYM and 91.7 KRTU, two college radio stations.

(*This was a very expensive project because of the grounding requirements.  Each antenna has its own grounding rod, and all grounding rods are connected to the main house grounding rod by a thick wire that runs 3 feet underground across the back yard.)

Five years ago I decided I wanted to measure and compare the two antennas on the same tuner.  So for this purpose I moved my classic Kenwood 600T FM tuner to the kitchen.  I was going to use the 600T's marvelous 10dB calibrated signal strength meters, multipath meter, and scope outputs (together with a scope) to compare my two FM antennas.  Otherwise, the 600T is not my favorite sounding tuner.  I planned to do the test in a few days and then replace the 600T with one of my other tuners that sounds better.

But after setting it up in the kitchen, I discovered the 600T sounds fairly good if the MPX Filter position on the Mode control is selected.  (Otherwise it sounds bright and tinny.)   And because of that and other things, you know how it goes, I never got around to doing the tests until now (last week).


One antenna is the famous Magnum Dynalab ST-2.  The other is the less well known Godar FM DXR 1000.  They are both whip antennas, not because that is best but because that is easiest to install, since they can simply be screwed to the side of the house rather than mounted atop a mast.  I've long dreamed of having an antenna mast but it would be a very complicated outdoor project, too complicated for me so far.  Whip antennas also have the advantage of being omnidirectional, so they never need to be rotated.  This can also be a disadvantage if you want to block some strong unwanted signal.

(Most of the differences here probably have little to do with the whip antennas themselves, which should have fairly comparable performance in the FM band, and more to do with the differing heights of the antennas as installed, and the fact that one of them--the Godar--is now bent.  Strangely, however, the bent antenna actually worked better on some stations in this test.)

The ST-2 is mounted as near as it could get to the peak of the roof, as high as I can go while just mounting the antenna to the side of the house.  I couldn't mount it right at the peak because it would interfere with my locked attic door.

The Godar is mounted about 4 feet away, and situated about two feet lower because of the roof slope.  The Godar is tuned to what I determined to be best by SWR in the low FM band, probably 88.3.  The ST-2 has similar fixed tuning, best at the low FM band (which is where all my stations of interest are anyway).

Sadly, tree branches messed with the Godar last year, and it is now bent at an angle less than halfway up.  So how well is the Godar still working, I wanted to know.  Do I need to buy a new antenna?

The Godar antenna, because it has greater RF bandwidth* than the ST-2, is also used for my Uniden SDS200 scanner--whose importance I'm beginning to question now that police bands here are 100% encrypted, which scanners can't legally break even if they could do it technically.  Supposedly I can still get fire and EMS communications, though I need to update the programming in my scanner for the newest systems, which changed about 2 years after I bought my scanner.

*I long figured the Godar has greater specified RF bandwidth, up to 800 MHz, because of having a lower inductance choke.  However it might also be because of the thicker whip made out of aluminum rather than steel, and because you can make the antenna much shorter.  I have it extended to nearly the maximum length, so maybe I am not getting the top RF bandwidth after all, but it did work OK with my scanner at frequencies around 850 MHz when the scanner was still programmed correctly for local services.
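To put numbers on the length issue (a back-of-the-envelope free-space calculation, ignoring the whip's loading coil), a whip wants to be roughly a quarter wavelength long:

```python
# Quarter-wavelength check (idealized free-space figures, ignoring the whip's
# loading coil): a whip resonates near a quarter wavelength, so a length that
# suits the FM band is far too long to be resonant at 850 MHz.
C = 299_792_458  # speed of light, m/s

for mhz in (88.3, 850.0):
    quarter_wave_cm = C / (mhz * 1e6) / 4 * 100
    print(f"{mhz:6.1f} MHz: quarter wave is about {quarter_wave_cm:.0f} cm")
# 88.3 MHz -> ~85 cm; 850 MHz -> ~9 cm
```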

So in between the Godar and the living room FM tuner there is a splitter, which reduces the signal sent to the FM tuner by 3dB.  I tried an FM/TV splitter, which could theoretically have zero loss in the FM band, but decided that it curiously caused non-linearities in the FM band which were even worse than the plain splitter's loss (or so it seemed).  So I went back to a plain vanilla, nice quality 1000 MHz bandwidth splitter.  The signal then passes through permanently wired RG-6 panels in the kitchen and living room to reach my living room tuner, a Pioneer F-26, one of the best FM tuners ever (like many of the other FM tuners I have collected), from the late 1970's.

I was motivated to do the test now both because of the Godar antenna being bent and because I really really want to start playing my other tuners, including the Pioneer ST-9500 MkII that I finally got down from the top shelf in my climate controlled storage building a few months ago.  A friend believes that to be one of the best tuners ever, possibly even better than the F-26.

It just happens that I haven't been listening to KRTU much on the kitchen tuner recently, but I continue listening to 2 hours of weekly space music on KSYM.

Now finally testing the Godar on the 600T playing KSYM, I was shocked to find that the signal level was 13 dB lower, and not inconsequentially so (26dB vs 39dB).  But over time I became aware that it was actually sounding better.  This became more and more clear as listening went on for a few hours.

At first I was thinking the noisier sound with the ST-2 was because of some kind of distortion, possibly caused by a ground loop isolator.  Over the years, I'd used a number of ground loop isolators with the kitchen tuner because otherwise I'd get a serious hum on the kitchen audio system.  Some of those isolators seemed to cause notable distortion.  But it appears that I've removed all the isolators, and now I simply run the ST-2 signal through the RF surge suppressor built into a Monster brand AV power strip.  This also ensures it is firmly grounded to the kitchen audio system ground (or vice versa).  For a moment I was thinking of bypassing the Monster surge protection.  But it would be a big hassle because of where that power strip is located.

So what else could be causing this problem that makes the ST-2 sound worse on KSYM, I thought to myself, despite having much higher signal strength.  The Multipath meter hardly ever even moves, so it didn't look like it would be "multipath."

It was necessary to hook up the scope to find out.  And it turns out the scope is far more revealing of "multipath" distortion than the multipath meter.  In fact, the 600T multipath meter is basically useless.  You need a scope to see what is actually going on.  

Mitch Cotter, who had done a study on FM tuners for Consumers Union, convinced Saul Marantz of this, and the result was the legendary Marantz 10B tuner, introduced in the early 1960's with a built-in scope, followed by many successors with built-in scopes made by Marantz and other companies.  After the Marantz 10B was introduced, most top tuners from other companies, even if they didn't include an actual scope, generally included scope output jacks that could be connected to external scopes.  McIntosh top tuners had the scope outputs, and McIntosh themselves made a purpose-built scope for consumers, whose last incarnation was the Maximum Performance Indicator MPI-4 (leave it to McIntosh to give it such a fancy name).  Kenwood made various scopes primarily for amateur radio (I have one; it's too big and clunky to mess with much, and doesn't show a good picture except with Kenwood FM tuners) but also a few for consumer audio purposes.

I've normally used an aging and increasingly dysfunctional Tektronix CRT scope for this purpose, but this time, for the first time ever, I used my relatively easily moved and situated Rigol DS1102E digital oscilloscope.  I had never figured out how to use it as an X-Y vectorscope, which is what is needed for seeing FM multipath.  It turns out there is an X-Y option in the Horizontal menu for this.


KSYM using Godar DXR 1000

Hooked up to the Godar antenna, the scope shows a notable V pattern.  The thickness of the V is no more than 2 screen divisions.   That "thickness" is what indicates the degree of "multipath" distortion (which technically means unwanted in-band signals which can be caused by many things including multipath, adjacent stations, and other in-band stations).
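Here's a minimal sketch of why an X-Y display reveals this kind of trouble (idealized complex-baseband math with made-up numbers, nothing to do with the 600T's actual scope-output circuitry): multipath turns a constant-envelope FM signal into one whose envelope wobbles with the program, and plotting envelope against the detected signal shows that wobble as trace thickness.

```python
# Minimal sketch (made-up numbers, not the 600T's actual scope circuitry):
# a clean FM signal has a constant envelope, so an envelope-vs-detected-audio
# plot stays thin; a delayed echo (multipath) makes the envelope wobble with
# the modulation, thickening the X-Y trace.
import numpy as np
import matplotlib.pyplot as plt

fs = 1_000_000                                 # sample rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)
audio = np.sin(2 * np.pi * 1000 * t)           # 1 kHz test modulation
dev = 75_000                                   # 75 kHz peak deviation
clean = np.exp(1j * 2 * np.pi * dev * np.cumsum(audio) / fs)  # baseband FM

echo = 0.3 * np.roll(clean, 40)                # 30% reflection, 40 us later
received = clean + echo

envelope = np.abs(received)                    # what the scope's Y axis sees
detected = np.diff(np.unwrap(np.angle(received))) * fs / (2 * np.pi * dev)

plt.plot(detected, envelope[1:], ",")          # X-Y (vectorscope) style plot
plt.xlabel("detected signal (X)")
plt.ylabel("carrier envelope (Y)")
plt.show()
```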

KSYM using Magnum Dynalab ST-2 antenna


Hooked up to the ST-2 antenna, the pattern might best be described as a "blob," but more importantly the middle doesn't condense down to just one screen division.  You see a smattering of dots going nearly up to the top of the screen.  It's not much of the total pattern, but it's enough to make the FM demodulation much noisier.  The multipath meter wasn't showing the total spread, just some kind of average which loses these details.

I believe what the V pattern is indicating is the presence of an on-band but weaker signal, which is being rejected by the FM tuner's "capture" characteristic, which gives the upper hand to a slightly stronger signal at all modulation levels.  This is specified as an FM tuner's Capture Ratio, which is very good for the 600T, 0.8dB in wide mode, which is almost a miracle except that a few tuners, like the legendary Sansui TU-X1, do even better.

(And tellingly, the Wide IF band selection always sounds better on KSYM, even though there are strong alternate channels on both sides.  This alone suggests the interference is in-band.  And there are several candidate stations possibly causing this in-band interference, including one about 100 miles away at a higher elevation--KTXI.)

What seems to be happening with the ST-2 is that since it rises higher, it's picking up more of the offending in-band interference signals, which are coming from farther away.  So it's not so much that the channel I want is getting stronger, as shown by the signal strength meter; it's that the other channel I don't want is getting stronger even faster, just enough to start overwhelming FM capture at the fringes, enough to add some very annoying noise.
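For what it's worth, the capture effect itself is easy to demonstrate in a toy simulation (an idealized complex-baseband model with made-up levels, nothing specific to the 600T or these stations):

```python
# Toy demonstration of FM capture: sum two co-channel FM signals and the
# demodulated output locks onto the stronger one.
import numpy as np

fs = 1_000_000
t = np.arange(0, 0.02, 1 / fs)
dev = 75_000                                       # peak deviation, Hz

def fm(audio):                                     # complex-baseband FM modulator
    return np.exp(1j * 2 * np.pi * dev * np.cumsum(audio) / fs)

wanted = np.sin(2 * np.pi * 400 * t)               # the station I want
interferer = np.sin(2 * np.pi * 700 * t)           # weaker co-channel signal
rx = fm(wanted) + 0.5 * fm(interferer)             # interferer 6 dB down

detected = np.diff(np.unwrap(np.angle(rx))) * fs / (2 * np.pi * dev)
for name, ref in (("wanted", wanted), ("interferer", interferer)):
    print(name, round(np.corrcoef(detected, ref[1:])[0, 1], 2))
# the wanted program dominates the output; move the 0.5 toward 1.0
# and capture falls apart
```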

I can just as well listen to KSYM on the living room system with the F-26/Godar for space music as background music from now on, and take advantage of the superior sound of KSYM on the Godar.  In the kitchen, where I'm usually sitting in front of computer screens, the sound can be too in-your-face for background music purposes, especially during the announcements.

And it turns out that the Godar is also working better on KPAC, the station I usually use it for.

On KPAC the Godar measures about 10dB weaker (only 3dB probably caused by the splitter, 7dB by the lower antenna height), but the multipath trace is about half the thickness of the ST-2's.  Despite the lower signal strength, the sound is just a bit cleaner with the Godar antenna, as suggested by the multipath traces on the scope.

KPAC using Godar DXR 1000

KPAC using Magnum Dynalab ST-2

Both antennas show a V outline similar to the Godar antenna on KSYM.  I think this is another capture situation, with the higher antenna picking up more of an interfering station.  However, it's not as bad: there are no random dots reaching to the top of the screen.  The main strong signal (at about 60dB on the better sounding Godar as compared with 72dB on the ST-2) is sufficient to suppress the interfering stations much better, even though it's 10dB weaker.  So the overall situation is similar to KSYM on the Godar antenna, just somewhat less so.  The meter says it's worse, but the scope and the actual results say it is better.  Still, the sound and visible multipath on the scope are very good with either antenna; it's hard to tell the difference.

I see it may actually help for KSYM and KPAC that the Godar antenna is now bent, since this might add some directivity, suppressing other directions.  Fortuitously, it's bent in the direction of the transmitters of KSYM and KPAC.  The bend would tend to block stations coming from the sides FWIW.  Since it's working so well I don't need or want to touch it.  Perhaps a brand new antenna would be higher and therefore worse just like the ST-2.  If the Godar finally breaks off I'll probably have to replace it then.

So it appears that both the signal strength and the "multipath" meters are useless in situations like this, and probably many others.  You simply need to have a scope to get a feel for what's going on.

The best FM antenna is a big multielement antenna on a rotor on a mast, which has long been my dream, but I've hardly been able to contemplate setting it up, and basically nobody does that kind of thing for hire (at any price a mortal can afford, anyway).  I called "antenna installers" and all they wanted to do was tack up a small VHF antenna for digital TV to the side of the house, just as I have done with the whip antennas.  (I have two such brand new unused antennas in my junkpile, a legendary APS-10 and a classic Radio Shack FM-6.  How to put them up has always been the problem, as well as the grounding stuff I got sorted out for my whips.)

I also dug up a quick RF disconnect for one of my antenna cables and if I install another one on the other cable, I can swap antenna cables as needed (if I want to listen to KSYM on the kitchen tuner, for example).

KRTU still measures and sounds better on the ST-2, and by no small margin.  It has 10dB more signal strength AND less "multipath" visible on the scope.  Even less than on KPAC with the Godar.  It's obviously best to listen to KRTU on the kitchen tuner with the ST-2, which is what I always do.

KRTU on Godar DXR 1000

Despite having less apparent in-band interference, the multipath display for KRTU on the ST-2 still shows the V shape--and perhaps even finer than on the stronger KPAC.  So I tuned around the dial a bit, and it appears all the stronger stations show that "V" characteristic, even with signals as high as 75dB.  So it appears that the V shape is just what the 600T tends to show on its scope outputs, given the way it's designed to produce them.  That actually varies a bit from one tuner to another.  I would have expected the ideal display to be a flat line.  But "idealness" for the 600T appears to be judged by how thick the trace of the V is, or whether you can't even see the V at all because it's so blurry.  It's also great for tuning, as the tiniest off-tuning results in a detectable tilt one way or the other at the bottom, which is flat right at the center.  The meters are more difficult to read or interpret.  For precise fine tuning, the center tune meter is fairly useless, as the correct tuning point is at the far right of the inner bubble (a thorough tuner alignment would fix this).  The signal strength meter is not biased like that, but around the center tuning position there are only very small changes.


Saturday, November 2, 2024

"Scoping" cables?

A friend wants to know if it would be useful to examine high end cables on one of my oscilloscopes.  I tell him my equipment is inadequate for such purposes; it would be much better to have a 500 MHz or 1 GHz scope.  He incredulously asks me why such bandwidth is needed for audio frequencies.  It's hard to answer that question in a phone text, so I'm doing it here.

There are a number of different aspects to this, so I may seem to go all over the place.

1.  Macro vs Micro parameters

Oscilloscopes and voltmeters measure what we could call macro parameters of audio transmission.  Such parameters could include DC voltage and risetime.  Basically things having to do with the large outline of the signal transmission: bandwidth and phase response.

Micro parameters like noise and distortion are not so easily measured with oscilloscopes and voltmeters.  The best thing to do is store the signal and apply an algorithm to compute them, such as with the program RMAA (but you'd probably need something much better for audio cables...*)

Noise can come from many sources: internally generated and picked up.  How well audio cables accept or reject noise from EMI and RFI is in fact very important but often ignored.

2.  How do audio cables differ?

Decent audio cables don't generally differ that much in their macro parameters.  The bandwidth of a typical RCA terminated coax cable is around 10 MHz or greater.  Generally you can use such cables to transmit composite video, which has sidebands up to 6 MHz (though good video cables will often have a bandwidth 10-100 times greater than that for the best performance).  I have often used regular audio cables for both video and digital audio and it nearly always has worked (though I generally like to get the proper cables, etc).

It's not surprising, therefore, that many engineers and audio scientists believe "audiophile" cables are not necessary.  There's no way, they might say, that you need more than the 10 MHz bandwidth of typical interconnects.

But they aren't generally considering micro parameters.

3.  But I've seen scope comparisons of cables and they showed differences.  How is that?

I've seen such photographs too, though I can't remember the exact details.  I remember once when I really checked it out, they were using a very high bandwidth scope to resolve differences at very small timescales.  That means high bandwidths, and they were using 500 MHz or 1 GHz scopes.

Now it should be remembered that you generally need scope bandwidth about 10 times the frequency of the signal you are trying to view.  If you view a 10 MHz square wave on a 10 MHz scope you may see a sine wave attenuated by about 3dB, because that's how bandwidth is defined.  You see none of the harmonics that make the wave square.

The square sides of a square wave are created by a series of harmonics which in principle go up to infinity.  That's why they needed very high bandwidth scopes to see details of very small duration (representing high frequency effects) in "audio frequency" squarewaves. 
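Here's a little sketch of that argument (an idealized one-pole scope response, not any particular instrument): sum the odd harmonics of a square wave through a 10 MHz bandwidth and the displayed wave comes out looking like a slightly shrunken sine.

```python
# Sketch of the bandwidth argument (idealized one-pole scope response, not any
# particular instrument): a square wave is its fundamental plus odd harmonics.
# A scope whose -3dB bandwidth equals the fundamental passes the fundamental
# at -3dB and strongly attenuates the harmonics, so you see a rounded,
# slightly shrunken sine instead of a square wave.
import numpy as np

f0 = 10e6                    # square-wave fundamental, 10 MHz
bw = 10e6                    # scope -3dB bandwidth, 10 MHz
t = np.linspace(0, 2 / f0, 2000, endpoint=False)

displayed = np.zeros_like(t)
for n in range(1, 100, 2):                          # odd harmonics only
    amplitude = 4 / (np.pi * n)                     # ideal square-wave series
    gain = 1 / np.sqrt(1 + (n * f0 / bw) ** 2)      # one-pole low-pass
    displayed += amplitude * gain * np.sin(2 * np.pi * n * f0 * t)

print("displayed peak:", round(displayed.max(), 2))  # vs 1.0 for a true square wave
```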

4.  But I'm sure I don't always get 10 MHz bandwidth.  I can hear the rolloff.

Yes, but such issues are likely related to the input and output impedances being applied, in combination with either the capacitance (for an interconnect) or the inductance (for a speaker cable).  These macro parameters can form a low pass filter.

That isn't really a fault of the cable, it's a mismatched system.  Now audio cables are too short to show what are called Transmission Line Effects at anything like audio frequencies.   Such effects can be shown for audio cables running many miles, but not a few feet.

So audio cables operate as a lumped element system.  You can easily calculate what the response would be if you accurately know the output impedance, input impedance, and cable capacitance and/or inductance.
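For example, here's the back-of-the-envelope version of that calculation, using typical (assumed, not measured) values:

```python
# Back-of-the-envelope lumped-element calculation (typical assumed values, not
# measurements of any particular cable): the cable's capacitance against the
# source's output impedance forms a simple RC low-pass.
import math

r_out = 1_000          # source output impedance, ohms (assumed; tube gear can be higher)
pf_per_ft = 30         # typical interconnect capacitance, pF per foot
length_ft = 6
c = pf_per_ft * length_ft * 1e-12              # total capacitance, farads

f3 = 1 / (2 * math.pi * r_out * c)             # -3 dB corner frequency
print(f"-3 dB point: {f3 / 1e6:.2f} MHz")      # ~0.88 MHz here, far above audio
```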

So to "match" cables, for example, you would select two having identical capacitance or inductance.

Just by doing that, you've matched the macro parameters.  No need to use a scope.

You can do that on a scope, too, but what you are showing, strictly speaking, is not the performance of the cable itself but of the system with its input and output impedances.  It would have to be measured connected to the intended input and output equipment of interest.

I've measured many 'plain vanilla' interconnect cables and they tend to have between 11 and 60 pF of capacitance per foot, with 30 pF being typical.  That is not going to cause any issues for most audio equipment, but sometimes audiophiles have equipment with unusually high output impedances or unusually high input impedances (in some tube equipment for example) where there might be issues.

With speaker cables, the impedances are all more critical, and I don't have any inductance meters.  A scope would be the way to see the macro parameters.  But with reasonably well matched impedances, you aren't likely to see any differences among cables at audio frequencies.  The way that speaker cables interact with amplifier and speaker loads is through very tiny effects.  Say, the cable impedance may be 0.05 ohms at one frequency and 0.04 ohms at some other frequency.  Against an 8 ohm load, that would cause a very subtle change in timbre, but not something you are going to easily see on a scope.  It's a minuscule, but potentially audible, effect.
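A rough illustration of how tiny (again with made-up but plausible numbers):

```python
# Treat the speaker cable as a small series impedance in a voltage divider
# with an 8 ohm load, and compare the level at two frequencies where the
# cable impedance differs slightly.
import math

load = 8.0                 # nominal speaker impedance, ohms
z_a = 0.05                 # cable impedance at one frequency, ohms (assumed)
z_b = 0.04                 # cable impedance at another frequency, ohms (assumed)

level_a = 20 * math.log10(load / (load + z_a))
level_b = 20 * math.log10(load / (load + z_b))
print(f"frequency response difference: {abs(level_a - level_b):.3f} dB")  # ~0.01 dB
```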

*5.  You're not trying to tell me you don't believe in cable differences?

I'm trying to remain open minded and non-judgemental as to what you or other people can hear, in writing this post.

Frankly I don't believe there are significant audible differences among decent audio cables.  You could probably get by with plain vanilla Radio Shack cables in most cases (and I still have boxes of them**).

If there are significant differences, they are almost certain to be found among the micro parameters and not the macro parameters, because in the macro parameters such differences just aren't there at audio frequencies.  You can't measure any such differences easily, which is why many engineers and audio scientists believe that decent cables are all the same.

**Nowadays I try to go with what is objectively best anyway, just in case it might be important.  So I have PTFE coated speaker wires, because PTFE has the lowest Dielectric Absorption of all solid insulating materials, among other considerations.  I have also chosen my speaker cable gauge deliberately (according to some principles that are too complicated to explain here, but which end up making 12 gauge the optimum for a twisted pair).  But are such things necessary?  I doubt it.  For most interconnects, I use BJC-1 from Blue Jeans because it has the lowest capacitance and best shielding.  It also uses polyethylene dielectric, which is nearly as good as PTFE, but that might not even be important.

Dielectric absorption, in which an insulator acts like a battery, is largely a low frequency and low level phenomenon, something you are generally not going to see on a scope unless you test with DC levels and very high resolution measurements.  There is very little difference among the 3 currently popular materials: PTFE, FEP, and polyethylene, because they are all so good.  They are largely unaffected by electromagnetic fields at the molecular level.  Vinyl is a crappy dielectric, but probably adequate for audio purposes; I used vinyl speaker cables until around 15 years ago.

But one of the worst things about old speaker cables is copper wire corrosion, which most frequently occurs with vinyl cables.  Sufficient corrosion can cause distortion, potentially audible, but again not measurable on a scope.  (And maybe you'd even like the effect if you liked a "fuller" sound.)

6.  Haven't you heard cable differences?  Here I can show you right now...

Yes, I have sometimes heard 'differences.'  However, not under circumstances that I would consider a valid experiment.

A valid audio experiment includes these details:

1) Level matching of all sources (etc) to 0.1dB.  You must have perfectly repeatable audio levels, such as with digital controls or stepped attenuators, rather than continuously variable controls.  Tiny level differences (as low as 0.25dB in my experience) can change the quality of a sound without changing its apparent loudness.

2) Listening position and environment must not change.

3) Double Blind testing should be done, where neither the experimenter nor the subject knows which is which, along with a suitable protocol such as ABX.  This is especially true if any amount of time passes between one audition and the next, because then you are relying on a fading memory for comparison.  Repeat trials are necessary until the null hypothesis can be rejected at p < 0.05, i.e. results that good would occur by guessing less than one time in twenty.  (A rough sketch of that arithmetic follows below.)
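Here's the arithmetic I mean by "one time in twenty," with hypothetical trial counts rather than results from any actual test:

```python
# The "one time in twenty" criterion for an ABX run, with hypothetical trial
# counts (not results from any actual test): the p-value is the chance of
# scoring at least this well by pure guessing (p = 0.5 per trial).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided probability of getting >= `correct` hits out of `trials` by chance."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # ~0.038 -> under 0.05, the null hypothesis is rejected
print(abx_p_value(10, 16))   # ~0.227 -> could easily be guessing
```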

Without these controls, I don't consider any listening test to be definitive, just a guess.  Peter Moncrieff coined the term Golden Eared Subjective Reviewer (acronym GESR) to describe many well known audiophile reviewers (not that he didn't have his own faults).

Listening tests done without such controls can be fun if people don't take them very seriously, as they should not.


Music Distribution Services

I was elated to be able to finish two new music albums and upload them for distribution in October.  First time since 2022.

But now it appears that the "free" distributor Routenote has disapproved the first album I uploaded, without any message to me.  The album simply disappeared about 1 week after submission, without any email, explanation, or anything.  I have to wonder what went wrong.  I think it's likely the second will meet a similar fate, as it's not any better.

Routenote has been successfully distributing my 'Mythic Rocks' album, though the earnings so far ($1.36 on 1500 plays) are pitiful.

Pitiful earnings are likely anywhere, but I need to go ahead with some kind of distribution, because my music is not something I can play at street corners or bars.  I can't 'develop' without some kind of audience.

A paid distributor like Tunecore or Anti-Joy might work better.  Routenote has become notorious for their unhelpful rejections.  I released my very first album on Tunecore in 2006 but quit paying for distribution long ago, thinking I could do just as well with free distribution.  Free distribution worked OK for Mythic Rocks anyway.  (It didn't work with a second album I tried to release in 2022, because I was recycling some older songs I had previously distributed as if they were new songs, under a new artist name, and I should have known Routenote would figure that out; even if a song is no longer being distributed, the metadata persists.  But I didn't make that mistake this time; everything was all new in the albums I attempted to release in 2024.)

Now it looks like the charges at Tunecore are reasonable and anti-Joy might be even better.

Here's a (slightly outdated) survey from about a year ago.  The author covers 10 services and doesn't even mention Routenote (which figures, I guess; they've become so notorious--not just for me).

Here's another survey with very colorful user reports (many are fed up with most or all distributors).  Routenote does come up with one poster initially saying it's great but then calling it awful after all 10 of his submissions were rejected.

Here's another 'complete' review of "all" the services, but lacking the colorful comments by users about their experiences.  They only mark Routenote down as not having the most complete service (and say nothing about its infamous review process, "customer service," etc).  This is a useful resource*, but limited, and it seems a few services highly praised in the earlier reports are missing (notably anti-Joy, which seems to be the favorite at the first link I posted).  Actually, what we really want to know is which distributors are assholes.  Personal reviews call them out, though problematically every service is called an asshole by someone.  The linked detailed reports are more useful and may give some personal experience, but limited to that of the author, who is an accomplished musician and knows how to deal with these people.

It appears now that a submission to Routenote is more like an audition, where you get a phone call three weeks later if you've been chosen.  They aren't a 'distribution service' but more like a record producer (in fact, they are a record producer).  They want a reputation among streaming services as a consistent provider of Good Music, which has its perks, like getting to the front of the line.  They also don't want to bother with music that isn't likely to pay them enough in commissions to justify their costs.  IOW, they're likely looking for albums that will gross around $75/year or more, of which 15% would be theirs, or about $10.  Or for an artist who hasn't broken through yet, but just might.  Though they accepted my Mythic Rocks album when I submitted it, and it's only earned about $1.36 in two years, of which they have pocketed everything for now because it doesn't meet their $50 or $100 minimum, they may have fine tuned their algorithms since then to reject such freaky stuff in the future, because it's just not worth the costs for them.  And then there's the fear of bots, etc.

One musician described Routenote's opaquely applied standards as Fascism.

Routenote has never been the top distributor, and often they aren't even included in lists at first.  The 'free' model got them a lot of attention early on, but it's all cooled now as people find out that free means they're very picky, slow, and opaque, and people who are good enough to pass muster can probably afford a better service with faster updates, more info, more customer service, etc.  I only found them because they advertised a high definition capability, and I figured that would mean I could upload 'high definition' files.  But, as it turned out, you have to be a major artist for that capability to even be available.  I have always sent, and they accepted, 24 bit FLAC files, but streamers will only see 16 bits, because that's the way the system works: High Definition is a "premium" for "top albums," and everyone else gets 16 bits.

If they'd give me a list of what's actually wrong that would be helpful.

While I originally thought the commission model would give them some incentive to help me along, in the final analysis the opposite is true.  The commissions they expect to make from me aren't worth their effort at all, they figure, so I get no help at all, whereas I might at least get modest help from a company that hopes to keep pocketing my annual fees.

It appears I'm just going to have to go back to a paid service.  I'm leaning towards Tunecore again.  Everything is compared to Tunecore, the pioneer, and they've now redesigned their pricing tiers to do better than the major upstart alternatives.  Annual fees cover unlimited albums, and most authorities say you have to release about one a month to make money now.  They're widely reported to at least have reasonable customer service.

*The overall review section is useful, but devoid of actual experiences, and therefore reminds me of the reviews I used to read in Stereo Review magazine in the 1970's.  Everything is good, but some things have more good features than others.  That sort of industry-submissive attitude was undermined by audiophiles who wanted more dirt, so Stereophile and The Absolute Sound were born and Stereo Review went bust because nobody wanted to bother with their pablum.  Audio magazine was a bit more informative than Stereo Review, and Audio went bust anyway.  Actually I think Stereophile and The Absolute Sound were and still are too cranky, but sometimes fun, and I don't miss Stereo Review.

Tuesday, October 15, 2024

Optimal Surround locations

For Dolby Atmos, it's 1 foot above and 6" in front of the ear, according to a UK magazine:

https://www.audioadvice.com/blogs/expert-advice/home-theater-speaker-layout-options

That's a bit less than 90 degrees.
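Just to sanity-check that estimate (the sideways distance to the speaker is my assumption; the article only gives the forward and height offsets):

```python
# Sanity check of "a bit less than 90 degrees".  Azimuth is measured from
# straight ahead; only the forward and height offsets come from the article.
import math

forward = 0.5      # 6 inches in front of the ear plane, in feet
sideways = 4.0     # assumed ear-to-speaker distance along the side wall, feet
up = 1.0           # 1 foot above ear height

azimuth = math.degrees(math.atan2(sideways, forward))
elevation = math.degrees(math.atan2(up, math.hypot(forward, sideways)))
print(f"azimuth ~{azimuth:.0f} degrees, elevation ~{elevation:.0f} degrees")
# ~83 degrees azimuth, ~14 degrees elevation with these assumptions
```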

Google AI suggested 90-110 degrees from center, from some other source, for both 5.1 and 7.1 "surround."  That's what the official Dolby Atmos website shows for 7.1.  But for 5.1, Dolby Atmos shows 110-120 degrees in their diagram, while the picture shows 90 degree direct side placement.

Crutchfield says "to the side pointed at listener" for 5.1, without giving any specific number of degrees, but their picture shows the 5.1 surrounds directly to the side; it's the Dolby picture (but not the Dolby diagram).

Discussion at AudioScienceReview quoted the ITU-R international standard, which specifies 100-120 degrees.  They also clipped a bit from Floyd Toole's book arguing that as you get closer to 90 or 120 degrees the effect diminishes.  (So, in this telling, 90 and 120 are equivalent, and it seems to me the range should be 100-110 or 90-120.)

But Floyd Toole doesn't actually show 100 or 110, just 90 and 120.  And the 90 and 120 graphs (which are not entirely comparable, as the 90 degree one has no center channel) have virtually identical scores!  

Toole also shows a surround setup with speakers at 30 degrees and 60 degrees, and that seems to have the best score of all!  But I haven't seen any discussion of that.  Yamaha famously had a 7.1 or 9.1 system with wide front Presence speakers as one option.  I have one of those receivers still in my kitchen, which I use mostly for stereo, but it will do some semblance of 5.1 and 7.1 either direct from the Oppo or synthesized by the receiver with my own manual fine tuning.  I have never done wide front 'presence' speakers, though my front speakers are at 30 degrees (I myself prefer 45 degrees, as in my living room system, for the widest possible stereo, which also means the most information).

Here's what leading Harman scientist (under Floyd Toole) Dr Sean Olive said about surround layout (in 2018):

 For example, if you’re setting up a stereo system around a TV or project screen, you’ll want to try and get the right and left front speakers flanking the screen at ±30 degrees angles relative to your seat in front of the screen to ensure a best-class sound experience. For surround sound the left and right side and rear speakers should be at angles of ±60-100 degrees, and ±135-150 degrees, respectively. After setting up the surround sound, it’s critical that consumers leverage the test sequence in the receiver, which will help ensure that every speaker is set at the proper level and time arrival.

Perhaps Harman was not on the same page as Dolby and others.  The Harman ideas were based on their own objective psychoacoustic research in their laboratory in California.  (However, I need to get more detail on Dr. Olive's reasoning than provided here.)  Perhaps it's in Toole's book, which I think I have.

But also note that Dr Olive is implicitly assuming both side and rear speakers!  So it's not really answering the question here (what's best for 5.1) but rather 7.1, and for 7.1 nearly all authorities either include 90 degrees in their range (often showing it in pictures) or specify it exactly.  But for a room of listeners, only those lucky enough to be on axis with the side speakers will get 90 degrees, and those in the back will get...well, 60 degrees if they're lucky (a point seldom discussed).

A random commenter at AVSForum suggested 90 degrees for 7.1, and somewhere in between that and the rear for 5.1.

I could easily do 90 degrees in my living room for listeners on the back couch which is right on the back wall.

Some don't think that is right even for 7.1.

Then what are you supposed to do for a couch on the back wall?  Every other side position is farther forward.

Friday, September 20, 2024

"New" sub polarity is non-inverting

I strongly believe now that the reversal in sub polarities I did during the Janus 1.0 system tuning was correcting a number of earlier mistakes, and that the polarity of the subs is now non-inverted, which is what I intended them to be.

I most recently replaced the plate amplifier in the right sub.  That was a couple years ago, and I decided I'd do it right this time for sure.  So before hooking up the plate amplifier, I connected an extension wire to the sub driver so I could stand in front of it and watch the woofer go in or out as I connected a battery to it.  I had never done that test before.  Once the plate amplifier is connected, you can no longer do such an easy test.  If you try to use a polarity matching app as I have done, you quickly find it is not reliable and even probably wrong at subwoofer frequencies.  This is because room reflections become part of the mix before you've even had one "cycle" at low frequencies.

That right sub has a polarity menu item, and it is now set to "Normal" (it had been "Inverted" for some unknown previous amount of time, perhaps since I did the plate amplifier replacement 2 years ago).  Right now I don't remember why I set it to Inverted or perhaps it was a mistake.

The left sub, OTOH, has its polarity inverted because (so it seems) I installed the replacement plate amplifier incorrectly.  I knew this was the case the first time I replaced the plate amplifier, so I ordered a polarity reversing XLR cable to connect it with, rather than having to take the plate amplifier out all over again (which requires moving a lot of stuff around).  That was about 12 years ago.  I used the polarity reversing cable on the left for a while, but at some point (while I was still using Behringer DCX 2496 as crossovers, and they have polarity control) I decided on a trick: I'd reverse the polarity on the left and then reverse the polarity to both subs, making them both correct.

In 2013 I started using Behringer DEQ 2496 as crossovers (because they have a digital output option, allowing me to use high end DACs) but they do not have any polarity control (though it seems now that older ones are polarity inverting), and then in 2021 I started using miniDSP's as crossovers.  Somehow in all these changes, I lost track of the fact that the left sub polarity needed to be inverted (as with the inverting cable) to be in non-inverting polarity.  I also replaced the left sub plate amp a second time, but I think I may have made the same mistake as the first time.

The problem I faced recently had nothing to do with absolute polarity (which, for subs, I now think isn't even an issue at all) but with the relative polarity between the panels and the subs.

With the 24dB/octave filter, the top or bottom polarities can be reversed without having any effect on the frequency response.  Linkwitz pointed out that the "inverted" phase connection with the 24dB/octave filter has the lowest group delay, but I've always believed in having everything in non-inverted polarity simply because it's possible and more intuitively nicer.

(Although it's possible that at some point I decided to use the inverted sub polarity instead.  It should have had no effect on frequency response.)

I was thinking the 48dB/octave linear phase Linkwitz-Riley crossover I am using now has that same property, that you can connect the drivers either way and get the same frequency response.  However, I now know that is not true: it makes some difference which way you do it, and you get some cancellation at the crossover frequency doing it incorrectly.  So it actually seems that the linear phase FIR filters I am using don't have the same property in this regard as the normal Minimum Phase LR filters.  Perhaps it's because they don't have any phase shift at all, so the phases are either matching or not; there is no in-between.

Mind you, the crossover is so steep that the "cancellation" (which isn't perfect either) is confined to a very narrow band around the crossover frequency, and can mostly be masked with EQ (as I had been doing without knowing it).
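Here's a minimal sketch of that polarity behavior using a generic complementary linear phase FIR pair (not my miniDSP's actual filters): in correct polarity the two legs sum flat, but invert one leg and you get a cancellation notch centered on the crossover frequency.

```python
# Generic complementary linear-phase FIR pair (not my miniDSP's actual
# filters): in correct polarity the two legs sum flat; invert one leg and
# a cancellation notch appears at the crossover.
import numpy as np
from scipy.signal import firwin, freqz

fs = 48_000
fc = 100                                  # crossover frequency, Hz (assumed)
taps = 2047                               # odd length -> exactly linear phase
lp = firwin(taps, fc, fs=fs)              # linear-phase low-pass FIR
hp = -lp
hp[taps // 2] += 1.0                      # complementary high-pass: delta minus low-pass

w, h_lp = freqz(lp, worN=8192, fs=fs)
_, h_hp = freqz(hp, worN=8192, fs=fs)

for label, total in (("in phase", h_lp + h_hp), ("one leg inverted", h_lp - h_hp)):
    db_at_fc = np.interp(fc, w, 20 * np.log10(np.abs(total) + 1e-12))
    print(f"{label}: {db_at_fc:+.1f} dB at {fc} Hz")
# in phase: ~0 dB (flat); one leg inverted: a deep notch at the crossover
```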

Sunday, September 15, 2024

Janus 1.1

The stereo was sounding great on Saturday night, and I was doing some rare serious listening.  But in the morning the weird looking left channel response was bugging me, compared to the right.  And it had looked much better, so it seemed, in the last post, just before I added the last 182 Hz boost.  Then the lower midrange somehow got all messed up.  I had to solve this riddle.  And by Sunday afternoon my knees were no longer hurting from earlier days of adjusting (even using my 3" foam kneeling pad).

First I dialed out that boost at 182 Hz, which was seeming to be the culprit.  But it wasn't.  With no boost at 182 Hz it looked much worse overall, though the region 500-1000 Hz was perhaps slightly better, mysteriously.  Very slightly.  Not worth it.  Only then did I do a good re-measurement of the 182 Hz boosted version for comparison, and it was mysteriously looking quite a bit better than a few days ago just by chance (not EQ), and obviously better with the boost.

Left, 182 Hz Boost removed

Left boost restored, "Janus 1.0", maybe not so bad

I tried half as much boost.  That was neither here nor there.

So then I just tried fixing the issues in the midrange while leaving the boost in place.

There was a depression centered around 400 Hz (it was a deep depression in the measurement I made a few days ago; today it was a fairly mild depression).  And there was also a -3.5dB notch at 437 Hz.  So I eliminated that notch, and the depression looked much better.  I left another -2.5dB notch at 366 Hz untouched, because 300 and 400 were about the same level with the 437 Hz cut removed.

Now that I'd given up the 437 Hz notch altogether, I had an available PEQ I could use for a notch at 833, which had been used in earlier smoother iterations, but (mistakenly it seems now) given up at one point as unnecessary.  I might have intended to remove the 437 and ended up removing the 833 instead.

I moved the 903 Hz boost up to 1013 Hz where it had been, but that left 900 Hz depressed and added to the peak following 1 kHz.  So I found a happy medium where 900 and 1000 Hz are about the same, tuning the boost to 957 Hz.  It's a very narrow boost so it doesn't raise the 800 Hz region (I didn't change that).  For the smoothest response above and below 1 kHz I increased the boost from 4dB to 5dB, which gave the flattest response around 1 kHz.
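To illustrate how a narrow boost stays put (a generic RBJ cookbook peaking biquad; the Q value is my assumption, not the actual setting in my miniDSP):

```python
# A generic RBJ-cookbook peaking biquad at 957 Hz, +5 dB (the Q value here is
# my assumption for "very narrow", not the actual miniDSP setting), showing
# how little a narrow boost lifts 800 Hz.
import numpy as np
from scipy.signal import freqz

fs, f0, gain_db, q = 48_000, 957.0, 5.0, 12.0

a_lin = 10 ** (gain_db / 40)                 # cookbook amplitude parameter
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)
b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])

for f in (800, 900, 957, 1000):
    _, h = freqz(b / a[0], a / a[0], worN=[2 * np.pi * f / fs])
    print(f"{f:5d} Hz: {20 * np.log10(abs(h[0])):+.2f} dB")
# roughly +0.3 dB at 800 Hz, +1.6 at 900, +5.0 at 957, +2.4 at 1000
```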

With that, I needed to increase the depth of the notch at 654 Hz, because that was bubbling up again.  It had been at precisely that depth before, but for some reason I'd then rolled it back.

And that was it, I got a smoothness from 500-2000 Hz which was similar to if not better than the pre-182-Hz-boost response I'd measured a few days ago, but without sacrificing the 180-300 Hz region which that boost fixed.  Actually this is about the smoothest I recall ever seeing the Left Channel.  Still not quite as smooth as the Right but pretty close.  It's now good enough that I think I can relax on the EQ'ing for a while.

Left Channel, "Janus 1.1" EQ

Janus 1.1 PEQ's (only left panel changed)

One notch was a restoration of a notch I had figured out by sweeping and was using earlier, the notch at 833 Hz.  I basically just added that back and it helped so I'm thinking removing it was a mistake.

The boost was moved to a frequency based on the RTA effect and not sweeping this time.  That is what I actually do a lot, but not what I claim to be my method, because it leaves the ultimate hinge frequency to a broader-than-necessary analytical technique.  But I've actually done that in many cases like this before.  Sweeping the mids and highs is not easy at all, because the distances from micro peaks to micro valleys (which you would see in a very slow sweep graph; it would be "thick" because of all the ups and downs) become a smaller and smaller portion of an octave, and the bands you can choose for the PEQ's get relatively wider and wider, so there's not much you can do to get really "in there."  It is possible it might be better if I moved the boost a tick or two higher or lower based on sweeping or, gasp, sound.  But it could not get any flatter on the RTA, since that was being optimized.  I recall when I was sweeping that there was a dip at 903 Hz and also around 960 or so.  When you get up to 1 kHz, it doesn't sound like it needs boosting at all.  So the 957 may actually be just right, and it's certainly pretty close to what I would have chosen by sweeping.