We have passed through numerous stages of interaction between humans and computers, and we are getting closer to actually "being inside the machine" rather than being on the outside looking in. The ways in which individuals interact with computers have changed dramatically.
· Keyboard Interface – the initial means of interaction with machines was exclusively through keyboard commands (or through secondary keyboard-driven media such as punched tape, punched cards, and magnetic tape). Computers at this time were expensive and required extensive training and complex programming knowledge in order to interact with them.
· Mouse Interface – at the beginning of the “personal computer” era, keyboard input was augmented with the now-common “mouse”. This allowed individuals to interact with the machine in a more free-form manner and enabled the development of computer-based analogs to the real world, such as “folders” and “files”. This input interface made possible such disruptive technologies as Photoshop and 3D Computer Aided Design / Computer Aided Manufacturing, as well as the commonly known word processors, spreadsheets, and other digital replacements for traditional business, engineering, and creative systems.
· Touch Input Interface – now very familiar to most, touch screens allow direct manipulation of the information within the computer via “touch, click, squeeze and drag” maneuvering. Generally there is no mouse component to this interface; however, the ability to use a digital on-screen keyboard is common. Recent innovations in this input method include “raised” interfaces and “touch screen modification”, where the touch screen changes physical characteristics (such as becoming “rougher” or “dimpled”) based upon the application’s requirements.
· Post Touch Input Interface – this is the new horizon for machine interaction. Composed of both voice and gesture recognition, this emerging means of interaction allows the machine to recognize basic human communication efforts and respond to them. Voice recognition systems are part of this environment.
· Wearable computation devices – although technically not a human interface method, machines which monitor individual human performance telemetry (i.e., tools to measure a person’s pace, GPS coordinates, heart rate, blood pressure, etc.) as well as their immediate environmental conditions are already commonplace as add-on apps within many smartphone systems. Radio frequency identification (RFID) tags, face recognition systems, and crowd-shared data (best friends, common interests, favorite restaurant, etc.) all add bulk to this “cloud” of data accumulated per person. Think of the Peanuts character Pig-Pen: the cloud that follows a modern human around is not of dust and dirt, but of personal data, following the individual 24 hours a day and accessible instantaneously. (A minimal sketch of such a personal data cloud appears after this list.)
· The Internet of Everything – in 2008 the number of “things” connected to the Internet (devices, lights, heating systems, traffic systems; the list is practically endless) exceeded the number of humans connected to it. This vast and growing amount of data can be, and is, captured and included in the human–computer interface.
· Computer Augmented Human Systems – the sheer volume of human augmentation systems coming to market is incredible. Brain implants designed to assist Alzheimer’s patients with memory retention, and implants designed to reduce or eliminate epileptic seizures, are here and being integrated into the human cranium. Computer-assisted replacement limbs, optical sight enhancements, and computer-directed blood-borne nanotechnology are no longer in the realm of science fiction, but science fact. To quote Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
· Augmented Reality Interaction – combined with the Post Touch Input Interface, wearable computation, and the Internet of Everything (above), this mode of interaction places the individual within the computing environment itself. The Google Glass project and the Apple equivalent are charting this new territory.
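To make the “personal data cloud” described in the wearable computation item concrete, here is a minimal sketch in Python, assuming a hypothetical record for the telemetry named above (pace, GPS coordinates, heart rate, blood pressure, ambient conditions); the class and field names are illustrative only, not part of any real wearable or smartphone API.

# Illustrative sketch only: TelemetrySample and PersonalDataCloud are
# hypothetical structures, not any real wearable or smartphone API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TelemetrySample:
    """One snapshot of the personal telemetry described above."""
    timestamp: datetime
    latitude: float          # GPS coordinates
    longitude: float
    pace_min_per_km: float   # walking/running pace
    heart_rate_bpm: int
    systolic_mmhg: int       # blood pressure
    diastolic_mmhg: int
    ambient_temp_c: float    # immediate environmental conditions

@dataclass
class PersonalDataCloud:
    """The 'Pig-Pen cloud' of data that follows an individual around."""
    person_id: str
    samples: list[TelemetrySample] = field(default_factory=list)
    crowd_shared: dict[str, str] = field(default_factory=dict)  # e.g. favorite restaurant

    def add_sample(self, sample: TelemetrySample) -> None:
        self.samples.append(sample)

if __name__ == "__main__":
    cloud = PersonalDataCloud(person_id="example-user")
    cloud.add_sample(TelemetrySample(
        timestamp=datetime.now(timezone.utc),
        latitude=40.7128, longitude=-74.0060,
        pace_min_per_km=5.4, heart_rate_bpm=132,
        systolic_mmhg=118, diastolic_mmhg=76,
        ambient_temp_c=21.5,
    ))
    cloud.crowd_shared["favorite_restaurant"] = "example"
    print(len(cloud.samples), "sample(s) in the personal data cloud")

In practice each device vendor exposes its own sensor and health interfaces; the point of the sketch is simply how many distinct streams accumulate around a single person.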
As we dive deeper into the bowels of the machine, the concept of “reality” will begin to blur. We will be looking at the universe from within the interior of the computer, and we will begin to have difficulty determining where “reality” ends and “augmented reality” begins. At this point, we truly become the Ghosts in the Machine.
"For first we use machines, then we wear machines, then we become machines."
Kim William Gordon
www.bitnus.com
www.kimwilliamgordon.com
"For first we use machines, then we wear machines, then we become machines."
Kim William Gordon
www.bitnus.com
www.kimwilliamgordon.com