
Re: Same problem all along

"but unfortunately for him, no one else would agree that his amp was "perfect"."

That's not quite true. Some would not agree, certainly, but not "no one". Agreement or disagreement without evidence proves nothing either way. And what you are neglecting is that the test was not about comparing a dB number, but about listening to a sound. Someone listening to the test need not know or care how many "dB down" the null was; he was listening to the *actual difference, using his ears*. If the difference is inaudible when listened to all by itself, at the same level it would have in actual use, then how does it become audible when it is no longer all by itself?
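
To make the procedure concrete, here is a minimal sketch of the subtraction step in Python, using numpy and the soundfile package. The file names are made up, and it assumes the two captures are already sample-aligned and level-matched:

    # Minimal null-test sketch.  File names are hypothetical; assumes the
    # two captures are already sample-aligned and level-matched.
    import numpy as np
    import soundfile as sf

    a, rate = sf.read("capture_A.wav")
    b, _ = sf.read("capture_B.wav")

    n = min(len(a), len(b))
    diff = a[:n] - b[:n]                       # whatever did NOT cancel

    def rms_db(x):
        # rms level in dB re full scale (float WAV full scale = 1.0)
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-30)

    print(f"program : {rms_db(a[:n]):6.1f} dBFS rms")
    print(f"residual: {rms_db(diff):6.1f} dBFS rms")

    # Write the residual out so anyone can play it and *listen*,
    # at the same playback gain used for the originals.
    sf.write("residual.wav", diff, rate)

The point is the last line: residual.wav is the thing you actually listen to, at the same gain as the originals.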

"Finally, based on my experience in the recording studio, error signals need to be BELOW -90 dB in order to even start to be considered as being in the realm of inaudible, and that it may be necessary for certain types of signal abberations to be below -100 to -110 dB .."

That is going to take some more explaining: -90 dB, -100 dB, -110 dB relative to what?

The test is being done with music; cancellation is happening on a music signal, not on some "full scale digital" or "0 dB analog" tone. It is the entire program that is reduced, not just the peaks. So, unless the music is highly compressed pop junk, it will have a crest factor of 20 dB, probably quite a bit more. So, per your experience, do aberrations have to be at minimum 110 dB (or 120, or 130 dB) below full scale to be inaudible? That is about 40 dB *below* the noise floor of exceptionally clean vinyl, and 14 dB below the ideal-case noise floor of CD! I know of no actual recording system (microphones, electronics, plus cutters or converters) that can achieve anything like a 110 dB signal-to-noise ratio (about 3 parts per million).
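
For anyone who wants to check the arithmetic, here it is spelled out, using assumed reference figures only (roughly -70 dB noise floor for exceptionally clean vinyl, -96 dB for the ideal 16-bit CD case):

    # The paragraph's arithmetic, spelled out.
    def db_to_ratio(db):
        return 10 ** (db / 20)

    print(db_to_ratio(-110))      # ~3.2e-06, i.e. about 3 parts per million

    vinyl_floor = -70             # exceptionally clean vinyl, assumed
    cd_floor    = -96             # ideal 16-bit case, assumed
    print(-110 - vinyl_floor)     # -40: about 40 dB below the vinyl floor
    print(-110 - cd_floor)        # -14: about 14 dB below the CD floor

    # With a 20 dB crest factor, average program sits near -20 dBFS, so a
    # threshold quoted relative to average program level lands another
    # 20 dB lower when restated relative to full scale.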

Tones can be detected, or even heard, somewhat below the noise floor in both dithered digital and analog playback (probably not nearly that far down, though!), but that applies only to continuous tones. It is a phenomenon similar to narrow-band filtering or coherent averaging in signal processing: you can detect a waveform if you narrow down what you are looking for and it is present long enough for you to track it. It does not work with transient, constantly varying program material like music, and I think it is generally agreed that no one is listening to steady tones on their systems.
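
A toy illustration of the averaging point, with made-up numbers: a steady tone parked about 23 dB below a broadband noise floor is easily pulled out by averaging synchronized blocks, precisely because it repeats; a one-shot musical transient gets no such benefit.

    # Coherent averaging toy: a steady 1 kHz tone ~23 dB below the noise
    # floor emerges after averaging 1000 synchronized blocks, because the
    # uncorrelated noise averages toward zero while the repeating tone
    # does not.  A non-repeating transient gets no such benefit.
    import numpy as np

    rate, block, repeats = 48000, 4800, 1000
    t = np.arange(block) / rate
    tone = 1e-4 * np.sin(2 * np.pi * 1000 * t)   # about -83 dBFS rms

    rng = np.random.default_rng(0)
    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    one = tone + rng.normal(0, 1e-3, block)      # noise ~ -60 dBFS rms
    avg = sum(tone + rng.normal(0, 1e-3, block)
              for _ in range(repeats)) / repeats

    print(f"one block    : {rms_db(one):6.1f} dB rms (tone buried)")
    print(f"1000 averages: {rms_db(avg):6.1f} dB rms (tone dominant)")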

"Additionally, most sound cards are NOT rated for any sort of jitter performance, "

I don't see how that is relevant. If jitter caused an error, it would cause a larger difference result, not a smaller one. Masking is a phenomenon of hearing; it is not relevant in subtraction, where the presence of other stuff, or distortion of it, can only result in more difference signal, not less.
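
To put a number on that, here is a synthetic example (arbitrary signal and delay): even a one-microsecond timing error between otherwise identical captures shows up as a larger residual. It cannot make the residual smaller.

    # A jitter-like timing error only ADDS to the residual.  Synthetic
    # example: a 440 Hz tone versus a copy of itself delayed 1 microsecond.
    import numpy as np

    rate = 48000
    t = np.arange(rate) / rate
    sig = np.sin(2 * np.pi * 440 * t)
    late = np.interp(t - 1e-6, t, sig)           # crude 1 us delay

    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-30)
    print(rms_db(sig - sig))    # perfect null (numerical floor)
    print(rms_db(sig - late))   # ~ -54 dB: the timing error shows up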

"The problem now is that if you look at what most computer based sound cards systems can do, merely altering how the input/output cables are placed, how long they are, exactly where they are at any given time, can cause changes in the I/O error detection at these levels and higher. Part of this seems to be due to the huge amount of hash generated by most computer systems, and leaked into the sound card IO.

Some of it seems related to a particular computer system's software details, leading me to speculate that different operating systems: DOS versus Windows, as well as different methods of accessing the sound system - ASIO versus WinCrappyXXX, etc. all affect the timing, repeatability and cleanliness of the overall performance of the sound card system."

Same comments as before. All of these potential problems will cause a LARGER difference, never a smaller one. If the difference track is still inaudible, then such things are not causing a problem and the point is moot. If the differences are audible, though, then you COULD argue that the sound card might be causing them: the test might show artifacts that are not actual differences, or differences that would normally be inaudible. But noises, distortions, etc. are not going to make two different recordings turn into identical ones. When you subtract signals, things only go away if they are the SAME.
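
The same check works for additive junk (again a synthetic example): injecting noise into one capture chain produces a residual exactly as large as the injected noise. It cannot cancel a real difference or fake a null.

    # Additive hash or noise in one capture chain can only RAISE the
    # residual; it cannot make two different signals null.  Synthetic check:
    import numpy as np

    rng = np.random.default_rng(1)
    music = rng.normal(0, 0.1, 48000)            # stand-in for program

    noisy = music + rng.normal(0, 1e-3, music.size)

    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-30)
    print(rms_db(music - music))   # numerical floor: identical signals null
    print(rms_db(noisy - music))   # ~ -60 dB: exactly the injected noise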

"Of course, it would be a mistake, but that person would not be very receptive to the actual facts or truth of the matter, nor would they be likely to take into account not using simple purely resitive load resistors instead of actual audio components, nor would they be likely to be exposing the DUT to actual sound playback levels comnesurate with those in the home (or profesional) environment, thus a lack of typical operational vibration stimulus would be missing, and so on and so on."

Sure, bad tests can be done under meaningless conditions. Typical listening evaluations can be done under meaningless conditions, too (in a noisy room, with a headache, intoxicated...). And it is always possible that any one test run will miss some unique condition that makes a difference occur, one that might not show up otherwise. But there is nothing stopping us from monitoring the signal in a real system while it is being used normally, vibrations, magnetic fields and all, in a situation where we are told that some change of equipment is making a very audible difference with the particular recording and associated equipment.

From the objections listed, you seem to imply that differences will NOT be found. Why that suspicion? The comparisons have not been done yet, and the results are not yet available. Why assume that NO audible, isolated differences will be found with changed cables or other things, in a test that has not yet been run? The opposite might happen: a difference might be found that has no conventional explanation. Will you discount the result if that happens?

"and we are back to the banality of a THD measurement number faux pas."

No, we certainly aren't. We are not getting a number or a graph, nor are we looking for some particular one-dimensional error phenomenon.

We are looking for ANY ERROR that MIGHT be audible in a signal during ACTUAL use with a musical signal, and we obtain something that is *listened to*, i.e., we are directly in the realm of "what can be heard". The evaluation of the result is still subjective, but each person can judge its audibility or severity for himself rather than having to rely on an "expert's" judgment. The result is just made less subtle, more available (we can all download, play and listen to WAV files), and more relevant (first-person experience of sound rather than numbers, graphs or testimonials).

The question is, are we to "trust our ears" or are we not? Or should we just say that "everything isn't always ideal in every possible way, so no test could ever mean anything"?


