
In the early nineties, as discussed in "What specific technical advance(s) allowed PCs to play full-screen full-motion video?", it became commonplace for PC games to use the storage capacity of CD-ROM to supply clips of full-motion video.

CD-ROM has error correction so that in the event of a slightly scratched CD, you won't lose any data.

But video uses lossy compression anyway. So it seems that when you are playing video, it would be more efficient to gain access to the raw bitstream, and have more bits per second to use for the error correction built into your video compression.

Is this something that was done, or optionally available, on early nineties PCs?

  • Erm. These are two questions: one asking whether a PC could access the raw data (yes), and a second asking whether such formats were actually used as well (also yes). It might be a good idea to separate this into two questions - or at least clearly divide the question text between both.
    – Raffzahn
    Commented Feb 7, 2020 at 17:47
  • I doubt there was ever a need for it, because pressed CDs are so cheap to make - about $1 each. So if you can't fit your video on one CD, put it on two. Also, lossy compression typically doesn't include any kind of error correction or redundancy; HD Radio is an exception.
    Commented Feb 7, 2020 at 18:49
  • @snips-n-snails CDs were anything but cheap. Up until the late 1990s, the non-trivial setup cost worked quite well as copy protection. And using two discs instead of one for a game likewise requires that additional investment, so it was a considerable hurdle for game companies. Then again, a game not fitting on a single CD usually needed much more additional space than just 10%.
    – Raffzahn
    Commented Feb 7, 2020 at 19:01
  • @Raffzahn It costs hundreds or thousands of dollars to make the glass master, but after that the incremental unit cost of pressing a CD is low, so the final cost of bundling a second CD depends on your expected sales volume.
    Commented Feb 7, 2020 at 19:34
  • @snips-n-snails That's the point. A glass master in 1990 still cost way above 5 grand, and a production run usually needed several of them. So it was still a major hurdle - one of the reasons why software was delivered on diskettes well into the 90s. Not every product is a GTA-style hit :)
    – Raffzahn
    Commented Feb 7, 2020 at 19:44

2 Answers


TL;DR: Yes and yes, but.

  • CD-ROM Mode 2 offers all 2336 bytes of a CD-ROM block for user data.
  • By default, all CD-ROM drives can read Mode 2 disks as well.
  • Mode 2 CD-ROMs have been produced, but the format never really took off.
  • Mode 2 is not raw; it still incorporates basic error correction.

Background:

CD-ROM has error correction so that in the event of a slightly scratched CD, you won't lose any data.

CD-ROMs use error correction on two levels. First, there is the basic CIRC, as defined for the CD-DA (Digital Audio) format in the Red Book. Second, there is the Mode 1 ECC, as defined in the Yellow Book for CD-ROM.

For the Audio CD, the Red Book defined a continuous bitstream separated into tracks. This format already includes error correction. It was the Yellow Book that defined individually addressable sectors. Two modes were defined:

  • CD-ROM Mode 1 - Each block of 2352 bytes is structured as:

    • Synchronisation (12 bytes)

    • Header (4 bytes)

    • Data (2048 bytes)

    • Error detection (4 bytes)

    • Spare (8 bytes)

    • Error correction (276 bytes)

      This ECC is in addition to the basic Red Book CIRC.

  • CD-ROM Mode 2 - leaves everything after the header to user data:

    • Synchronisation (12 bytes)
    • Header (4 bytes)
    • Data (2336 bytes)

While Mode 1 is the most common data format, Mode 2 was used for some video/audio data storage but never became very popular. Both modes can be mixed within the same data track.
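To make the layouts above concrete, here is a minimal sketch (Python; the field sizes come from the lists above, and 75 sectors per second is the standard 1x CD rate) comparing the user-data budgets of the two modes:

```python
# Yellow Book sector layouts, field sizes in bytes (from the lists above).
MODE1 = {"sync": 12, "header": 4, "data": 2048,
         "edc": 4, "spare": 8, "ecc": 276}
MODE2 = {"sync": 12, "header": 4, "data": 2336}

# Every physical sector is 2352 bytes, regardless of mode.
assert sum(MODE1.values()) == 2352
assert sum(MODE2.values()) == 2352

SECTORS_PER_SECOND = 75  # 1x CD speed

for name, layout in (("Mode 1", MODE1), ("Mode 2", MODE2)):
    rate = layout["data"] * SECTORS_PER_SECOND
    print(f"{name}: {layout['data']} user bytes/sector, {rate} bytes/s at 1x")

# The gain from dropping EDC/spare/ECC:
gain = (MODE2["data"] / MODE1["data"] - 1) * 100
print(f"Mode 2 gain: {gain:.1f}%")  # about 14%
```

At 1x this works out to 153,600 bytes/s for Mode 1 versus 175,200 bytes/s for Mode 2 - a difference of roughly 14%.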

Mode 2 left out not only the ECC but also the error detection. This changed with the 1991 CD-ROM/XA format, which essentially adopts a Philips CD-I format, now called Mode 2 Form 1/2:

  • CD-ROM/XA Mode 2 Form 1 - Each block of 2352 bytes is structured as:

    • Synchronisation (12 bytes)

    • Header (4 bytes)

    • Subheader (8 bytes)

    • Data (2048 bytes)

    • Error detection (4 bytes)

    • Error correction (276 bytes)

      This ECC is in addition to the basic Red Book CIRC.

  • CD-ROM/XA Mode 2 Form 2 - leaves everything after the subheader to user data:

    • Synchronisation (12 bytes)

    • Header (4 bytes)

    • Subheader (8 bytes)

    • Data (2324 bytes)

    • Error detection (4 bytes)

      (CD-I specified the last 4 bytes as reserved)
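The practical difference between the two forms is easiest to see as disc capacity. A small sketch (Python; 75 sectors per second at 1x and a 74-minute disc are standard figures, and the per-sector payloads come from the layouts above):

```python
# User capacity of a 74-minute disc under CD-ROM/XA Mode 2 Form 1 vs Form 2.
SECTORS = 74 * 60 * 75   # 75 sectors per second at 1x -> 333,000 sectors
FORM1_DATA = 2048        # user bytes per sector, with EDC/ECC
FORM2_DATA = 2324        # user bytes per sector, without ECC

for name, payload in (("Form 1", FORM1_DATA), ("Form 2", FORM2_DATA)):
    total = SECTORS * payload
    print(f"{name}: {total:,} bytes ({total / 2**20:.0f} MiB)")
```

Form 1 gives the familiar ~650 MiB; Form 2's ~738 MiB is the extra room that formats storing audio/video in Mode 2 sectors traded error correction for.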

But video uses lossy compression anyway. So it seems that when you are playing video, it would be more efficient to gain access to the raw bitstream, and have more bits per second to use for the error correction built into your video compression.

Lossy compression doesn't mean error correction; it means reducing the data bandwidth by dropping information on purpose. A lossy compression does not, per se, include any error correction: involuntarily lost information will destroy usability the same way as with lossless formats.

Also, switching from Mode 1 to Mode 2 only delivered about 14% more bandwidth (2336 instead of 2048 user bytes per sector). Not really a gain worth much, and especially not worth the risk of damaged data.

Is this something that was done, or optionally available, on early nineties PCs?

Mode 2 support was present on all Yellow Book compatible drives, so essentially all of them.

And besides some special formats, Philips CD-I did store video using (its version of) Mode 2 by default.

  • Stating that Mode 2 "never became much popular" ignores the most common usage for it: the Video CD standard, very popular in Asian markets for quite some time. Contrary to your assertion, involuntarily lost information does not destroy the usability; MPEG video is intentionally designed to degrade gracefully and still play back functionally with errors present in the data stream.
    – mnem
    Commented Feb 7, 2020 at 19:03
  • @mnem So was CD-I in some markets. Both were still marginal compared to CD-DA and CD-ROM. Equally important, Video-CD isn't Mode 2 but CD-ROM/XA Mode 2 Form 2. Further, MPEG and Video-CD weren't a thing until the late 1990s. Last but not least, graceful handling of errors is an additional layer added on top, not a feature based on the use of lossy or lossless formats - you may want to read the paragraph in full.
    – Raffzahn
    Commented Feb 7, 2020 at 19:14
  • I'm not sure I understand your comment. I considered the VCD standard relevant in the context of the original question being about video compression and specifically mentioning the "early 90's". Note that in the case of VCD, that extra "14% more bandwidth" you dismiss out of hand was a critical and intentional design decision, as it allowed VCD players to use a single-speed drive to keep costs down, by ensuring that the required bitrate stayed within the 1X limit (60 minutes of VCD video fits on a 60-minute audio CD, 74 minutes of VCD on a 74-minute CD, etc.).
    – mnem
    Commented Feb 7, 2020 at 19:19
  • @mnem Serious? I do not see the case you are trying to make by taking parts out of context. The usage was already defined and implemented with CD-I a good 10 years before MPEG or Video-CD. It is helpful to read in full and in context.
    – Raffzahn
    Commented Feb 7, 2020 at 19:27
  • This may surprise you, but I don't know everything, so yes, serious. I don't understand why you don't think it's relevant - probably because you know more about the subject than I do. Please illuminate why if you think I'm completely off base; I'm just trying to understand things better (and probably fundamentally misunderstanding something).
    – mnem
    Commented Feb 7, 2020 at 19:38

I think your premise is a bit off. Compression (lossy or otherwise) makes media much more susceptible to errors, not less. If you flip one bit in an uncompressed image, the probable result is that one pixel of the output is incorrect. If you flip one bit in a compressed image, the probable result is large-scale visible corruption. If you flip one bit in a video file with inter-frame compression, the probable result is several seconds of garbage persisting until the next keyframe/IDR.

In an information-theoretic way, compression and error-correcting codes are nearly opposites. Error-correcting codes add redundancy to data; compression removes redundancy.
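The asymmetry is easy to demonstrate with a lossless compressor (a sketch in Python using the standard zlib module; lossy codecs that lack deliberate resilience layers behave analogously):

```python
import zlib

raw = bytes(range(256)) * 64          # 16 KiB of sample "uncompressed" data
packed = zlib.compress(raw)

def flip_bit(data: bytes, bit: int) -> bytes:
    """Return a copy of data with one bit inverted."""
    out = bytearray(data)
    out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)

# One flipped bit in the raw data: exactly one byte differs.
raw_bad = flip_bit(raw, 40000)
assert sum(a != b for a, b in zip(raw, raw_bad)) == 1

# One flipped bit in the middle of the compressed stream: decompression
# either fails outright or no longer reproduces the original data.
packed_bad = flip_bit(packed, len(packed) * 4)
try:
    damaged = zlib.decompress(packed_bad) != raw
except zlib.error:
    damaged = True
print(damaged)  # True - the single-bit error propagated
```

Removing the redundancy makes every remaining bit carry more meaning, so each bit error does correspondingly more harm.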

  • On the other hand: some bits are more important than others, and most of the time you just get a small glitch or some macro-blocking that is annoying but not fatal. Contrast with flipping a single bit in your Zip archive, which destroys everything from that point forward. VideoCD and CD-i could use less error correction because the video compression formats are designed to be fault-tolerant.
    – fadden
    Commented Feb 9, 2020 at 15:44

