Gary Coffman
 

On Sun, 05 Dec 2004 10:25:31 -0800, Tim May wrote:
In article , Lawrence
Glickman wrote:

So to be fair, we do benchmarking using different algorithms, to give
us an idea of SYSTEM performance, not raw Mips. But again, I am not
going to buy an elephant rifle to kill a mosquito. Maybe scientific
laboratories can profit from these high-end machines, but unless I get
into analog to digital interpolation, it is highly doubtful I am going
to benefit from having a cannon on board where a pellet pistol would
do the job just as well.

And even then, the throughput of the A/D converter is going to
determine to a large extent what limitations my *system* would have.

I try to stay digital all the way. Keep analog out of it, and you've
solved a lot of problems before they even got off the launch pad.


You're missing the point. The ADC is _not_ what makes rendering take
many, many hours. In fact, the output of the camcorder, 13.7 GB of
digital video (DV), is fed directly into the computer via FireWire.
Digital at all times once it's in the camcorder, so digital at all
times in the computer.

What takes time is the resampling/resizing/reformatting/rewhatevering
to "fit" 13.7 GB of digital data, for example, into 4.7 GB of space on
the blank DVD. This is akin to when a printer driver says "The
image will not fit on this page...would you like to crop, scale, or
abort?" Except with 13.7 GB that's a lot of data, a lot of swapping to
disk, a lot of work.
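
To put rough numbers on that squeeze, here's a ballpark Python sketch.
The one-hour runtime is an assumption (the post doesn't say how long
the 13.7 GB capture runs); the 13.7 GB and 4.7 GB figures are from above.

# Ballpark numbers for fitting the capture onto a single-layer DVD.
DV_SIZE_GB = 13.7        # captured DV footage (from the post)
DVD_CAPACITY_GB = 4.7    # single-layer DVD (decimal gigabytes)
RUNTIME_S = 60 * 60      # assume roughly one hour of footage

target_bits = DVD_CAPACITY_GB * 1e9 * 8
print("target average bitrate: %.1f Mb/s" % (target_bits / RUNTIME_S / 1e6))
print("overall reduction: %.1f:1" % (DV_SIZE_GB / DVD_CAPACITY_GB))

That works out to roughly a 10 Mb/s average stream, about a 3:1
reduction from the captured DV before audio is even accounted for.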


What the computer is doing is MPEGII compression of the video. That
requires having at least 7 frames of video in memory at the same time.
The algorithms take the time domain data of the individual frames and
translate it into the frequency domain (strictly a discrete cosine
transform, a close relative of the fast Fourier transform), and then,
using the preceding and following frames, they find the most compact
way to represent the group of frames as an MPEGII compressed data
stream, which is what's laid down on the DVD.
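
If you're curious what that frequency-domain step looks like, here's a
minimal Python sketch of a 2-D DCT on a single 8x8 block. The block
contents and the quantizer step size are made up for illustration, and
this is only the transform stage, not an encoder.

import numpy as np

# Transform one 8x8 pixel block to the frequency domain and quantize it.
# For smooth picture content most of the energy lands in a handful of
# coefficients, which is what makes the later entropy coding pay off.
N = 8

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

D = dct_matrix(N)

# A smooth horizontal gradient, typical of real picture content
block = np.tile(np.linspace(16, 235, N), (N, 1))

coeffs = D @ block @ D.T            # forward 2-D DCT
quantized = np.round(coeffs / 16)   # crude uniform quantizer (the lossy step)

print("nonzero coefficients after quantization:",
      int(np.count_nonzero(quantized)), "of", N * N)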

MPEGII is lossy compression. Some of the data is discarded, and
repetitive data is compressed. The algorithms do this in such a way as
to avoid producing visible artifacts (most of the time, anyway). That's
what makes it difficult. The algorithms are actually making
*judgements* about picture content. Moving objects can be rendered with
less resolution than stationary ones, so data is discarded for parts of
the picture determined to be moving. Fixed areas in the picture don't
have to be repeated in each frame; they're flagged for repetition
instead. Etc. Very complicated.
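
Here's a toy Python sketch of the "flag the unchanged areas" idea. It's
simple conditional replenishment, a crude stand-in for real MPEGII
motion estimation; the block size, threshold, and frame contents are
arbitrary.

import numpy as np

# Decide, block by block, which 16x16 regions of the new frame changed
# enough to be re-coded and which can just be flagged "repeat previous".
# Real MPEG-2 also searches for blocks that *moved*; this sketch only
# handles the stationary-background case.
BLOCK = 16
THRESHOLD = 2.0   # arbitrary mean-absolute-difference cutoff

def classify_blocks(prev, curr):
    h, w = curr.shape
    coded, skipped = 0, 0
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            a = prev[y:y+BLOCK, x:x+BLOCK].astype(float)
            b = curr[y:y+BLOCK, x:x+BLOCK].astype(float)
            if np.abs(a - b).mean() > THRESHOLD:
                coded += 1      # changed: must be transformed and coded
            else:
                skipped += 1    # unchanged: just flag "repeat"
    return coded, skipped

# Two synthetic 480x720 frames: static background, one changed square
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 720), dtype=np.uint8)
curr = prev.copy()
curr[100:180, 200:280] = 255    # the only region that changed

print(classify_blocks(prev, curr))   # few coded blocks, many skipped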

Decompressing MPEGII is easy. The compressed data stream has
flags embedded in it telling the decompressor exactly what to do.
There are commodity codec chips to do it on the fly in realtime.
Compressing it is hard. Even the professional, purpose-built,
massively parallel processing boxes we use to do it for DTV
broadcast lag realtime by about 2 seconds.

Doing it in realtime with a single-processor PC and limited memory is
impractical. You'd just keep losing ground to the realtime signal.
You can do offline processing with the PC, but it takes a very long
time. 11 hours to do 1 hour of video sounds about right for a 2.5
GHz PC with a gigabyte of memory. And that's just for standard-
definition video.
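
Simple arithmetic on that 11:1 figure, assuming roughly 30 frames per
second (the frame rate isn't stated above):

# How far behind realtime an 11:1 encode is, per frame.
FPS = 30                      # assume an NTSC-ish frame rate
RUNTIME_S = 60 * 60           # 1 hour of source video
ENCODE_S = 11 * 60 * 60       # 11 hours of encoding (from the post)

frames = FPS * RUNTIME_S
print("frames to encode:", frames)                              # 108000
print("encoder throughput: %.1f frames/s" % (frames / ENCODE_S))  # ~2.7
print("time per frame: %.0f ms" % (ENCODE_S / frames * 1000))     # ~367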

Doing HDTV with a PC is out of the question. With only a gig of
memory, the PC would be thrashing the disk all the time. HDTV
MPEGII converters are very expensive. Only a few production
houses, and the major networks, have them. At the local station
level, we just pass through the already compressed streams.

Uncompressed HDTV is a data stream at 1.2 Gb/s. Compression
brings it down to 45 Mb/s. Then we do a complex modulation called
8VSB which allows us to stuff it into a 6 MHz TV channel bandwidth
for broadcast.
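
Quick arithmetic on those figures, just to make the ratio explicit:

# Compression ratio implied by the numbers above.
UNCOMPRESSED_GBPS = 1.2    # uncompressed HDTV stream
COMPRESSED_MBPS = 45       # after MPEGII compression

ratio = UNCOMPRESSED_GBPS * 1000 / COMPRESSED_MBPS
print("compression ratio: about %.0f:1" % ratio)   # ~27:1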

Gary