Posted to sci.electronics.repair
Arfa Daily
120Hz versus 240Hz


"Geoffrey S. Mendelson" wrote in message
...
Arfa Daily wrote:
Well, I dunno how else to put it. I'm just telling it as I was taught it many years ago at college. I was taught that short persistence phosphors were used on TV display CRTs to prevent motion blur, and that the individual images were integrated into a moving image by persistence of vision, not phosphor. I was also taught that the combination of POV and the short decay time of the phosphor led to a perceived flicker in the low frame-rate images, so the technique of splitting each frame into two interlaced fields, transmitted sequentially, was then born, which totally overcame this shortcoming of the system. Always made sense to me. Always made sense also that any replacement technology had to obey the same 'rule' of putting the still image up very quickly, and not leaving it there long, to achieve the same result.


It's more complicated than that. You only see one image, which has been
created in your brain from several sources. Most of the information comes
from the rods in your eyes; they are light-level (monochrome) sensors, as
it were, and they are the most prevalent. This means most of what you see
is the combination of two sets of monochrome images with slightly to
wildly different information.

Then there are the cones, or color sensors. There are far fewer of them
and they are less sensitive to light, which is why night vision is black
and white.

There are also blind spots where the optic nerves attach to the retina.

None of these show up on their own; they are all integrated into the one
image you see. You never notice that you have two blind spots, you don't
notice the lack of clarity in colors (due to the smaller number of color
receptors), and rarely, if ever, do you notice the difference between your eyes.

If, for example, you needed glasses in one eye and not the other, or had
not-quite-properly prescribed lenses, your image would still appear sharp
overall, not blurred on one side and sharp on the other.

Lots of tricks have been used over the years to take advantage of the
limitations of the "equipment" and the process. For example, anything faster
than about 24 frames a second is not perceived as discrete images, but as one
smooth moving image.



The difference in resolution between the brightness and colour receptors in
human eyes is well known and understood, but I don't think that this, or
any other physical aspect of the eye's construction, has any effect on the
way that motion is perceived from a series of still images.



The 50 and 60 fields per second (a field being half an interlaced frame)
were chosen not because they needed to be that fast (48 would have done),
but to eliminate interference effects from mains-powered electric lighting.
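
As a back-of-envelope illustration of that choice (assuming 50 Hz mains; the beat-frequency framing and the numbers here are just to show the arithmetic, not taken from any standard document):

# Rough beat-frequency illustration of why the field rate was locked to the
# mains: residual mains hum (or flicker from mains-powered lighting) beats
# with the field scan at the difference frequency, and a slow beat is what
# the eye notices. Assumes 50 Hz mains.
MAINS_HZ = 50.0

for field_rate in (48.0, 50.0):
    beat_hz = abs(MAINS_HZ - field_rate)
    print(f"{field_rate:.0f} fields/s -> interference drifts at {beat_hz:.0f} Hz")

# 48 fields/s -> interference drifts at 2 Hz  (slowly rolling, very visible)
# 50 fields/s -> interference drifts at 0 Hz  (stationary, effectively invisible)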

Color is another issue. The NTSC designers (in a choice later adopted by the
BBC for PAL) determined that a 4:1 color system was good enough, i.e. color
information only needed to change (and be recorded) at a quarter of the rate
of the light level.

In modern terms, it means that for every 4 pixels, you only have to carry
color information once. Your eye can resolve the difference in light levels,
but not in colors.

This persists to this day; MPEG-type encoding is based on that. It's not
red-green-blue, red-green-blue, red-green-blue, red-green-blue as in a still
picture or on a computer screen, it's the light-level, light-level,
light-level, light-level, color-for-all-four encoding that was used by NTSC
and PAL.
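
To put that "one color sample per four light-level samples" idea in rough code terms (a sketch only; the conversion coefficients are the standard BT.601 ones, but the block size, names and example data are just illustrative):

# Minimal sketch of 4:1 chroma subsampling: luma is kept per pixel, one
# averaged chroma pair is kept per horizontal run of four pixels.
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB image (0..1) to Y, Cb, Cr planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b   # light level (luma)
    cb = -0.169 * r - 0.331 * g + 0.500 * b   # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b   # red-difference chroma
    return y, cb, cr

def subsample_chroma_4to1(cb, cr):
    """Keep one chroma sample for every run of four pixels across a line."""
    h, w = cb.shape
    w4 = w - (w % 4)   # ignore a ragged edge for simplicity
    cb4 = cb[:, :w4].reshape(h, w4 // 4, 4).mean(axis=2)
    cr4 = cr[:, :w4].reshape(h, w4 // 4, 4).mean(axis=2)
    return cb4, cr4    # a quarter of the original chroma samples

# Usage: luma stays at full resolution, chroma at a quarter.
img = np.random.rand(8, 16, 3)        # stand-in for a frame
y, cb, cr = rgb_to_ycbcr(img)
cb4, cr4 = subsample_chroma_4to1(cb, cr)
print(y.shape, cb4.shape)             # (8, 16) (8, 4)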

In the end, IMHO, it's not frame rates or color encoding methods at all, as
they were fixed around 1960 and haven't changed since; it is the display
technology, as your brain perceives it.



Yes, I was not sure exactly why you were going into all of the colour
encoding issues in the context of LCD motion blur. This has nothing to do
with it. It is the display technology that is causing this. It is simply not
as good as other technologies in this respect, despite all of the efforts of
the manufacturers to make it otherwise ...



No matter what anyone says here, it's the combination of the exact
implementation of the display technology and your brain that matters. If the
combination looks good and you are comfortable watching it, a 25 fps CRT, a
100 fps LED screen, or even a 1000 fps display, if there were such a thing,
would look good, as long as everything combined produces good images in YOUR
brain, and bad if some combination produces something "wrong".



But this isn't so. A crap picture may, I agree, look 'ok' to someone who
knows no better, but that doesn't alter the fact that it is still a crap
picture that those who *do* know better will see for what it is. LCD panels
produce crap images in terms of motion blur when compared for this effect to
CRTs, plasma panels, and OLEDs.





If you think about it, the only 'real' difference between an LCD panel and a
plasma panel is the switching time of the individual elements. On the LCD
panel, this is relatively long, whereas on the plasma panel, it is short.
The LCD panel suffers from motion blur, but the plasma panel doesn't. Ergo,
it's the element switching time which causes this effect ... ??


There is more to it than that. An LCD cell is like a shutter: it pivots on
its axis and is either open or closed. Well, not quite: there is a finite
time to go from closed (black) to open (lit), and therefore a gradual
build-up of brightness.
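
Roughly, in toy-model terms (this assumes a simple first-order response; the time constants and frame rate are made-up numbers for illustration, not measurements of any panel):

# Toy sketch of why a slow element switching time smears motion: each pixel
# is assumed to move toward its target value with an exponential (first-order)
# response every frame, so a moving bright dot leaves a decaying trail.
import numpy as np

FRAME_MS = 1000 / 50      # 50 Hz frame period
TAU_LCD_MS = 16.0         # assumed slow LCD response time constant
TAU_FAST_MS = 0.001       # assumed near-instant cell (plasma-like)

def step_response(current, target, tau_ms, dt_ms=FRAME_MS):
    """Move pixel values toward their targets with time constant tau_ms."""
    alpha = 1.0 - np.exp(-dt_ms / tau_ms)
    return current + alpha * (target - current)

def simulate(tau_ms, frames=8, width=12):
    """One bright pixel moves one position per frame across a black row."""
    row = np.zeros(width)
    for f in range(frames):
        target = np.zeros(width)
        target[f % width] = 1.0        # where the object 'really' is this frame
        row = step_response(row, target, tau_ms)
    return np.round(row, 2)

print("slow LCD :", simulate(TAU_LCD_MS))    # trailing smear behind the moving pixel
print("fast cell:", simulate(TAU_FAST_MS))   # only the current position is lit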



I was talking in terms of the fundamental visual principle, in that they are
both matrixed, cell-based displays requiring similar frame buffering and
driving techniques in signal terms. I was not referring to the way that each
technology actually produces coloured light from the individual cells, which
is clearly entirely different in both cases, and different again from the
raster-based CRT principle, which, like plasma panels, doesn't suffer from
motion blur.



Plasma displays are gas discharge devices; they only glow when there is
enough voltage to "fire" them, and until it drops below the level needed to
sustain the glow. That depends more upon the speed of the control electronics
than on any (other) laws of physics (viscosity of the medium the crystals are
in, temperature, etc.).




It doesn't really rely on the speed of the drive electronics, since there are
techniques used to bring the plasma cells to a 'pre-fire' condition just
below the point at which the gas actually ionises. This allows the cells to
be fired with a small drive voltage, and without having to wait for the cell
to build up to the point where it actually fires. This is how they can get
the switching speed of the cells down to as little as 1 µs.



That's the aim of LED-backlit TV screens (besides less power consumption,
heat, etc.). They are only lit when the crystals are "open", so there is no
time when you see partially lit "pixels".

Geoff.


Hmmm. That's not the way I've seen it described. Most of the hype about this
development seems to concentrate on producing dynamic contrast enhancement
by modulating the LEDs' brightness in an area-specific way, depending on the
picture content in front of them.
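
As a rough sketch of what that area-specific modulation amounts to (the zone size, names and the simple compensation step here are purely illustrative, not any manufacturer's algorithm):

# Toy local-dimming sketch: split the frame into backlight zones, drive each
# zone's LEDs from the brightest pixel it has to show, then boost the LCD
# values to compensate. Real sets are far more involved than this.
import numpy as np

ZONE = 4   # assumed zone size in pixels (real zones are much larger)

def local_dimming(frame):
    """frame: (H, W) luminance in 0..1, with H and W divisible by ZONE."""
    h, w = frame.shape
    zones = frame.reshape(h // ZONE, ZONE, w // ZONE, ZONE)
    backlight = zones.max(axis=(1, 3))                   # one LED level per zone
    # Expand the per-zone backlight back to pixel resolution.
    backlight_full = np.kron(backlight, np.ones((ZONE, ZONE)))
    # LCD transmission compensates so (backlight * lcd) reproduces the frame.
    lcd = np.divide(frame, backlight_full,
                    out=np.zeros_like(frame), where=backlight_full > 0)
    return backlight, lcd

frame = np.zeros((8, 8))
frame[1, 1] = 1.0                  # one bright pixel in an otherwise dark scene
backlight, lcd = local_dimming(frame)
print(backlight)                   # only the zone containing the pixel is lit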

Arfa