I’m sure it was a mistake, but there’s a difference between bit rate and sample rate. The bit rate for DCC is always 384 kbps (though MP1 allows other bit rates), and the supported sample rates for DCC (and MP1) are 32000, 44100 and 48000 samples per second, usually written as 32, 44.1 and 48 kHz.
The bit depth for DCC was 16 bits (per sample) for the first- and second-generation recorders. For third-generation recorders, the bit depth was 18 bits, and some Japanese recorders feature 20-bit encoders and decoders. I think I already mentioned that an increased bit depth basically means that the recorder has more dynamic range: 16 bits represents a dynamic range of about 97 dB, 18 bits about 108 dB, and 20 bits about 120 dB. See Audio bit depth - Wikipedia.
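As a quick back-of-the-envelope check (my own sketch, not from any DCC documentation; the exact figure depends on which convention you use, which is why you’ll see anything from 96 to 98 dB quoted for 16 bits), each extra bit buys roughly 6 dB of dynamic range:

```python
import math

def dynamic_range_db(bits):
    """Rough dynamic range of an ideal n-bit quantizer, using the
    common ~6.02 dB-per-bit rule of thumb (20 * log10(2) per bit)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 18, 20):
    print(f"{bits} bits: about {dynamic_range_db(bits):.0f} dB")
```

That prints roughly 96, 108 and 120 dB, matching the numbers above.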
If you look at the specifications of your amplifier, you will probably notice that its noise floor is above −97 dB, unless you have a really expensive one. Many high-quality amplifiers don’t do any better than −80 dB or so. So most users probably won’t hear the extra bits simply because their equipment isn’t good enough, and that’s before we even talk about how the frequency range of your hearing shrinks as you get older.
The point of using the 96 kHz sample frequency has to do with filtering. The Nyquist theorem says that an analog waveform can be represented by digital samples as long as the sample rate is at least twice the maximum frequency in the original analog waveform. Even a waveform with frequencies really close to half the sample rate can be accurately represented because (according to the math, which I admit I don’t fully understand) there is only one band-limited analog waveform the decoder can generate from those samples. However, for this to work, the decoder has to have a filter that completely passes all frequencies up to the Nyquist frequency and completely cuts off everything above that frequency. Without that filter, the output will have distortion in the high frequencies. Such a filter is impossible to make, because the sharper the cutoff curve is, the more phase distortion it introduces.
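To make the “everything above Nyquist is trouble” point concrete, here’s a tiny sketch (my own illustration, not from any DCC spec): at a 48 kHz sample rate, a 25 kHz tone produces exactly the same samples as an inverted 23 kHz tone, so once it’s been sampled, the decoder can’t tell them apart. That ambiguity is exactly what the reconstruction filter has to prevent.

```python
import math

FS = 48_000  # sample rate in Hz; the Nyquist frequency is FS / 2 = 24 kHz

def sample_sine(freq_hz, n):
    """Value of the n-th sample of a sine wave at freq_hz, sampled at FS."""
    return math.sin(2 * math.pi * freq_hz * n / FS)

# A 25 kHz tone (1 kHz above Nyquist) aliases onto 23 kHz (1 kHz below),
# with inverted polarity: the sampled values are indistinguishable.
for n in range(8):
    print(n, round(sample_sine(25_000, n), 6), round(-sample_sine(23_000, n), 6))
```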
So the idea behind the 96 kHz sample rate is to push the Nyquist frequency so high that it doesn’t matter if the filter isn’t good (or doesn’t even exist), because any distortion will be at a frequency so high that nobody can hear it anyway. This is also how (and why) oversampling works: by artificially inserting extra samples during decoding, the Nyquist frequency is raised, and the analog filter can be simpler or can be left out completely.
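As a toy illustration of that “insert extra samples” idea (my own sketch; real oversampling DACs use proper digital interpolation filters, not simple averaging), here’s a 2x oversampler that doubles the sample rate, and with it the Nyquist frequency:

```python
def oversample_2x(samples):
    """Double the sample rate by inserting one linearly interpolated
    sample between each pair of original samples. This is a crude
    interpolation filter; real players use multi-tap FIR interpolators."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2.0)  # the artificially inserted sample
    out.append(samples[-1])
    return out

print(oversample_2x([0, 2, 4]))  # [0, 1.0, 2, 3.0, 4]
```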
If a source is converted from 44.1 to 48 kHz or vice versa, there’s a little bit of quality loss because of interpolation and rounding errors: the conversion has to guess what each output sample should be, based on the surrounding source samples. But for 96 kHz to 48 kHz, there is no need for guessing (interpolation), because there are exactly two input samples for each output sample. So if the DCC Museum ever wants to release a commercial tape based on a 96 kHz master, converting to 48 kHz and then encoding in PASC is the best option.
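The 96-to-48 case really is that simple. Assuming the master has no content above 24 kHz (otherwise you’d low-pass filter it first), a sketch of the 2:1 conversion:

```python
def downsample_2to1(samples):
    """Convert 96 kHz samples to 48 kHz by keeping every second sample.
    No interpolation (guessing) is needed, because the rates are in an
    exact 2:1 ratio. A real converter would apply an anti-aliasing
    low-pass filter first, in case the source has content above 24 kHz."""
    return samples[::2]

print(downsample_2to1([10, 11, 12, 13, 14, 15]))  # [10, 12, 14]
```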
=== Jac