Thread: How Much?
Posted to alt.home.repair
From: Don Y

On 12/11/2015 10:25 PM, Muggles wrote:
> On 12/11/2015 12:12 PM, Don Y wrote:
>> On 12/11/2015 10:58 AM, Muggles wrote:
>>> Maybe you could design better hearing aids?
>>
>> I'm primarily interested in addressing less well served populations:
>> the visually impaired and physically handicapped. Most "devices"
>> nowadays interact with people visually (so the hearing-impaired are
>> not at a loss to use them!) and with some degree of manual dexterity.
>>
>> Folks who can't see and/or are largely immobile are effectively
>> deprived of access to those devices and systems.
>>
>> To put it in perspective, imagine your smart phone had NO (visual)
>> display. What would it be like to use it -- sighted?
>
> Sometimes, I can't see things when I can't hear them, and things I can
> hear I've no idea where the sound is coming from. I imagine that feels
> similar.


Virtually all "technological interfaces" are sight-based. The naive ones
rely on perfect color vision as well -- and often use it only in
"cartoon-y" ways.

This implicit reliance has significant consequences for how a device is
actually *designed*. Concepts oriented around visual interfaces usually
have no direct counterparts in other output modalities; what's the
aural equivalent of a "window"? If your user interface relied on sound
instead of vision, how would you present multiple "sessions/applications"
("windows") to your user? How would you "flash" a window to draw attention
to a printer that is jammed, new email arriving, a thumb drive being recognized
on insertion, etc.? How do you let a user *re-view* something that he
just examined (i.e., *heard*)?
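
To make that concrete, here's a rough Python sketch of one possible
answer (all names invented -- a thought experiment, not anyone's actual
API): each session owns a speech channel, "flashing" becomes a
distinctive earcon, and a short replay buffer is the aural *re-view*:

# A minimal sketch (hypothetical names) of "windows" mapped onto an
# aural interface: each session owns an audio channel, a "flash"
# becomes an attention sound, and a replay buffer lets the user
# re-hear recent output.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class AuralSession:
    name: str                       # e.g. "email", "printer"
    history: deque = field(default_factory=lambda: deque(maxlen=20))

    def say(self, utterance: str) -> None:
        """Queue an utterance for speech synthesis and remember it."""
        self.history.append(utterance)
        print(f"[{self.name}] {utterance}")   # stand-in for a TTS call

    def flash(self, reason: str) -> None:
        """Aural analog of flashing a window: earcon, then speech."""
        print(f"[{self.name}] *chime*")       # stand-in for an earcon
        self.say(reason)

    def review(self, n: int = 1) -> None:
        """Replay the last n utterances -- the aural 're-view'."""
        for utterance in list(self.history)[-n:]:
            print(f"[{self.name}] (replay) {utterance}")

printer = AuralSession("printer")
printer.flash("Paper jam in tray 2")
printer.review()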

How does a (sighted) user "type" CTL-ALT-DEL with a mouth stick? Or, with
an eye-tracker? Or, position the text cursor between these two digits: 12?
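
One time-honored answer to the chord problem is latching modifiers
(a la StickyKeys): press the modifiers *serially* and latch them until
the next ordinary keystroke completes the chord. A toy sketch:

# A toy sketch of the latching-modifier ("sticky keys") answer to the
# chord problem: modifiers arrive one keypress at a time and latch
# until a non-modifier key completes the chord. (Names invented.)
MODIFIERS = {"CTL", "ALT", "SHIFT"}

def chord_stream(keys):
    """Turn a serial key stream into chords a one-switch user can type."""
    latched = set()
    for key in keys:
        if key in MODIFIERS:
            latched.add(key)          # latch; no need to hold it down
        else:
            yield tuple(sorted(latched)) + (key,)
            latched.clear()           # chord complete; release modifiers

# "CTL, ALT, DEL" entered one keypress at a time with a mouth stick:
print(list(chord_stream(["CTL", "ALT", "DEL", "x"])))
# [('ALT', 'CTL', 'DEL'), ('x',)]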

Do you eliminate these capabilities from your user interface? Or, come
up with alternatives for each different modality? Do you introduce some
sort of "translation layer" in an attempt to match other implementations
to your original set of capabilities? Which users do you effectively
*penalize* by a poor choice of underlying interface assumptions?
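
A translation layer might look roughly like this (again, hypothetical
names): the application raises an abstract event, and a per-modality
renderer does whatever that modality can manage:

# Sketch of the "translation layer" idea: abstract events are routed
# to whichever modality the user runs. Each renderer is a stand-in for
# a real GUI call, TTS engine, or braille display.
from typing import Callable

def render_visual(detail: str) -> None:
    print(f"FLASH WINDOW: {detail}")      # stand-in for a GUI call

def render_aural(detail: str) -> None:
    print(f"*chime* {detail}")            # stand-in for earcon + TTS

def render_tactile(detail: str) -> None:
    print(f"BRAILLE: {detail}")           # stand-in for a braille cell row

RENDERERS: dict[str, Callable[[str], None]] = {
    "visual": render_visual,
    "aural": render_aural,
    "tactile": render_tactile,
}

def notify(modality: str, detail: str) -> None:
    """Route one abstract event to the user's chosen modality."""
    RENDERERS[modality](detail)

notify("aural", "New mail has arrived")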

Or, do you design the set of capabilities for your interface with full
knowledge, from the very outset, that it must be "portable" to these
different modalities -- so users can relate to the device/program
consistently regardless of their I/O modality choices/requirements?
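
Designed for portability from the outset, the shape changes:
applications talk to an abstract capability set, and each modality
supplies a *complete* implementation instead of aping a visual
original. A sketch, same caveats as above:

# Sketch of a modality-agnostic capability set (hypothetical names):
# applications code against the abstraction; each modality implements
# every capability, so none is a bolted-on afterthought.
from abc import ABC, abstractmethod

class UserInterface(ABC):
    @abstractmethod
    def present(self, item: str) -> None: ...        # show/speak/emboss

    @abstractmethod
    def get_attention(self, reason: str) -> None: ...  # flash/chime/vibrate

    @abstractmethod
    def review(self) -> None: ...                    # re-view/re-hear

class VisualUI(UserInterface):
    def __init__(self) -> None:
        self.last = ""
    def present(self, item: str) -> None:
        self.last = item
        print(f"DISPLAY: {item}")
    def get_attention(self, reason: str) -> None:
        print(f"FLASH: {reason}")
    def review(self) -> None:
        print(f"DISPLAY (again): {self.last}")

class AuralUI(UserInterface):
    def __init__(self) -> None:
        self.last = ""
    def present(self, item: str) -> None:
        self.last = item
        print(f"SPEAK: {item}")
    def get_attention(self, reason: str) -> None:
        print(f"*chime* {reason}")
    def review(self) -> None:
        print(f"SPEAK (replay): {self.last}")

def check_mail(ui: UserInterface) -> None:
    ui.get_attention("New mail")
    ui.present("1 message from Muggles")

check_mail(AuralUI())   # same application, different modality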