Wait a minute.
If the DCC's PASC doesn't need 3/4 of the sound, does that mean the amplifier and the speakers only do about a fourth of the work compared to playing the equivalent CD?
You are referring to the 4:1 data compression of PASC. No, that’s not how it works.
On a CD, the music data is stored as a waveform: 44100 times per second, the level of the audio signal is described as a 16-bit value (a number between -32768 and 32767) for each channel. All those numbers are necessary to reproduce the CD in its original quality.
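That's where the 4:1 figure comes from: the raw CD data rate versus PASC's fixed rate. A quick back-of-the-envelope sketch (variable names are mine):

```python
# Rough arithmetic behind the 4:1 compression figure.
SAMPLE_RATE = 44100       # samples per second, per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2              # CD audio is stereo

cd_bitrate = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS   # 1,411,200 bits/s
pasc_bitrate = 384_000                                  # PASC's fixed data rate

ratio = cd_bitrate / pasc_bitrate                       # about 3.7, i.e. roughly 4:1
```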
When you record a CD to DCC, the PASC compression analyzes the stream of samples in little chunks of about 8.7 ms (in the case of a CD). It does what's called a Fourier transformation to figure out which frequencies are present in each 8.7 ms chunk of music, and divides the data up into 32 frequency bands.
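The 8.7 ms follows from the MPEG-1 Layer I frame layout that PASC shares: one frame covers 384 samples, split into 32 subbands of 12 samples each. The arithmetic, sketched in Python (names are mine):

```python
# Frame arithmetic for an MPEG-1 Layer I / PASC frame at CD sample rate.
SAMPLES_PER_FRAME = 384        # one frame = 32 subbands x 12 samples each
SUBBANDS = 32
SAMPLE_RATE = 44100            # CD sample rate

frame_ms = SAMPLES_PER_FRAME / SAMPLE_RATE * 1000   # about 8.7 ms per chunk
samples_per_band = SAMPLES_PER_FRAME // SUBBANDS    # 12 samples per band
```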
At this point, all the data of the original signal is still there, and could in theory be used to reproduce the music essentially exactly as it was, by synthesizing it from the 32 frequency bands.
However, the next thing PASC does is scale the data and apply an acoustic model to reduce the accuracy.
Scaling works similarly to how calculators use an exponent to represent large numbers: if you type the number 123456789 into a calculator, it might convert it to 1.23456 x 10^8. This makes it possible to do calculations with very large or small numbers while keeping as much accuracy as can be represented. But because the numbers in the music data stream aren't intended for humans, PASC uses a scaling method that's based on powers of 2 instead of 10.
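Python's standard library happens to have a function that does exactly this kind of base-2 split, which makes for a handy illustration (this is just the general idea, not the actual PASC format):

```python
import math

# math.frexp splits a float into a mantissa in [0.5, 1) and a power-of-2
# exponent -- the same idea as scientific notation, but in base 2.
mantissa, exponent = math.frexp(123456789.0)
# mantissa is about 0.9198, exponent is 27

# mantissa * 2**exponent reconstructs the original value exactly
```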
The acoustic model helps PASC decide which of the frequency bands need to be reproduced as accurately as possible, and which ones can be scaled back in accuracy. For example, our ears aren't very good at hearing details in low frequencies, so the algorithm might decide to scale a number back (basically rounding it off). It might even decide to scale a frequency band back to 0 significant bits, when two adjacent frequency bands make that band inaudible (a phenomenon called masking) during the current 8.7 ms of audio.
In the end, this yields a block of data that's about 1/4 of the original data on the CD, which (if the acoustic model correctly describes how our ears work) sounds exactly like the original when you synthesize it back to a waveform. That's what the PASC decoder does: it generates a waveform based on 32 sine waves with strengths described by the scaled numbers in the PASC data.
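To make "reduced accuracy" concrete, here's a toy sketch (my own simplification, not the real PASC bit allocation): each band stores one scale factor plus coarsely rounded sample values, and the decoder rebuilds approximate values from those.

```python
def requantize(samples, bits):
    """Toy model of per-band quantization (NOT the actual PASC format):
    store one scale factor for the band, plus each sample rounded to
    only `bits` bits of accuracy."""
    scale = max(abs(s) for s in samples) or 1.0   # per-band scale factor
    levels = 2 ** (bits - 1) - 1                  # signed quantizer steps
    quantized = [round(s / scale * levels) for s in samples]
    # The decoder only needs `scale` and the small integers to rebuild the band:
    return [q / levels * scale for q in quantized]

band = [0.80, -0.31, 0.05, 0.42]
coarse = requantize(band, 4)   # 4 bits per sample instead of 16: far less data
```

Each reconstructed value lands close to the original, and the acoustic model's job is to pick, per band, how few bits it can get away with before the error becomes audible.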
So, to say that 3/4 of a CD's audio data is unnecessary is a bit of an inaccurate statement. You can't just take away 3/4 of a waveform and expect it to sound the same. But by representing the music in a different way, PASC can reduce the amount of data needed to represent it, by reducing the accuracy with which individual frequency bands are represented, without making the music sound distorted. The trick is to remove only the parts that your ears can't hear anyway.
Note: All lossy audio compression* basically works the same way. MPEG-1 Layer I is basically identical to PASC except for a few limitations (because the data rate is fixed at 384 kbps for PASC) and one small implementation detail at 44.1 kHz (where PASC inserts zeroes in "padding slots" and MP1 just generates extra data).
Other compression methods add further ways of reducing the amount of data, for example by analyzing the bit patterns and finding repeating patterns.
*In this post I’m talking purely about compression meaning “the use of fewer bits/bytes to represent the same data”, not about dynamic compression, meaning reduction of the difference between the loudest and quietest fragments in a piece of music. That’s a whole different, unrelated subject.
Thanks for a most professional answer.