A Summary of Ubiquitous, Mobile, and Wearable Computing
(1/22/03)
Not for citation
Ubiquitous computing, wearable computing, mobile computing and augmented
reality are all up-and-coming fields, and have considerable overlap. This
report gives a brief summary of the fields, current big players, and
problems that are getting focus.
Please note: This is a high-level overview I wrote as a
tutorial for my coworkers, and it is by no means complete. If I left
out your favorite project, lab or research area I apologize. Please do
not cite, and if you see any glaring errors or omissions feel free to
send them to rhodes@bradleyrhodes.com.
Some definitions
- Ubiquitous Computing (Ubicomp), Pervasive Computing, Things That Think
Xerox PARC wireless Tab, circa 1993
- The term ubiquitous computing was coined by Mark Weiser of Xerox PARC
in 1988. His 1993 Communications of the ACM paper entitled "Some computer
science issues in ubiquitous computing" describes a world where
wirelessly networked computers are distributed throughout the environment,
largely invisible until needed. PARC's early experiments dealt primarily
with three different sizes of device: tabs (about the size of a post-it
note or small PDA), tablets and full boards hung on the wall. Pervasive
computing is a synonym for Ubicomp.
In the mid-1990s the MIT Media Lab started the Things That Think
consortium. Closely related to ubicomp, the Things That Think project is
based on the idea that everyday objects such as coffee cups, frying pans
and toys should use computers to enhance their normal usage. The main
distinction between TTT and Ubicomp is that TTT focuses specifically
on integrating computers into a particular physical object to help with
that object's function.
- Wearable Computing
Thad Starner wearing the MicroOptical display
- The fuzzy definition of a wearable computer is that it's a computer
that is always with you, is comfortable and easy to keep and use, and is as
unobtrusive as clothing. My personal definition is that wearable computers
have many of the following characteristics:
- Portable while operational: The most distinguishing feature of a
wearable is that it can be used while walking or otherwise moving
around. This distinguishes wearables from both desktop and laptop
computers.
- Hands-free use: Military and industrial applications for wearables
especially emphasize their hands-free aspect, and concentrate on speech
input and heads-up display or voice output. Other wearables might also
use chording keyboards, dials, and joysticks to keep the user's hands as
free as possible.
- Sensors: In addition to user inputs, a wearable should have sensors for
the physical environment. Such sensors might include wireless
communications, GPS, cameras, or microphones.
- "Proactive": A wearable should be able to convey information to its
user even when not actively being used. For example, if your computer
wants to let you know you have new email and who it's from, it should
be able to communicate this information to you immediately.
- Always on, always running: By default a wearable is always on and
working, sensing, and acting. This is opposed to the normal use of
pen-based PDAs, which normally sit in one's pocket and are only woken
up when a task needs to be done.
An overall design philosophy for wearables can be inferred from these
characteristics. Wearable computers are by their nature highly portable,
but their main distinguishing feature is they are designed to be usable at
any time with the minimum cost or distraction from the wearer's primary
task. A wearable computer user's primary task is not using the computer but
dealing with the physical environment; the computer plays a
secondary or support role. That's not to say you couldn't use a wearable to
edit spreadsheets, but such a focused task tends to be better accomplished
with laptop or desktop machines. Wearables make many sacrifices in the name
of conserving user attention. Those sacrifices are wasted when the wearable
is the user's primary focus, as is often the case in desktop
situations.
- Augmented Reality, Mixed Reality
A view from the AR Toolkit
- Augmented reality is a merging of virtual, graphical objects with the
real world. For example, you might look at a wall and "see" graphical
overlays on the wall indicating where electrical wiring is located. There
are also AR systems that fuse live medical imaging data (ultrasound, X-ray,
etc.) into a surgeon's view during an operation.
Augmented reality tends to use one of two methods. Overlay-based AR uses
a transparent head-up display, or a non-transparent display over one eye
while the other eye is unoccluded, to project graphics with the correct
position and size to fit naturally into the physical
environment. Video-based AR places cameras near the user's eyes and sends
the video from those cameras to VR goggles in real-time. The video can then
be augmented in software to add graphics effects.
Mixed reality also combines real and virtual images in a physical space,
but often goes beyond head-up displays to accomplish it. For example, many
mixed-reality systems use a "magic mirror" metaphor where you see yourself
as if in a large wall-sized mirror, but also in the mirror-image are
graphical elements not in the real world. There is much overlap between AR
and MR, and often the words are used synonymously.
Conferences and Journals
Because the work is inherently interdisciplinary, papers in ubiquitous,
mobile and wearable computing also appear in general conferences such as the
ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) and the
IEEE International Conference on Image Processing (ICIP), in addition to
dedicated venues such as the International Symposium on Wearable Computers
(ISWC).
Some Labs Working on Ubicomp, Wearables or Augmented Reality
Research areas within these domains
Infrastructure and hardware
Resource discovery.
You enter a room, building or city with your mobile computer. Various
resources have been made available, and more are being added all the
time. On a small scale, your laptop may want to find the local printers. On
a larger scale, your car computer may want to know about all
traffic cameras along I-280 between Palo Alto and San Francisco.
The main questions are how:
- your mobile computer discovers available resources
- your mobile computer determines the options and hardware that are required to use the service
- the resource provides user interfaces that work with many different
kinds of hardware
- the system scales when hundreds, thousands or millions of resources are made available
- the system gets extended to include resources that were not previously
anticipated
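To make these questions concrete, here is a minimal sketch of multicast-based
discovery written in Python. It is not any real system's protocol: the group
address, port, and message format are invented for illustration, and a real
system would also need authentication, caching, and scoping.

    # Minimal multicast service-discovery sketch (illustrative only; the group
    # address, port, and message format are arbitrary choices, not a real protocol).
    import json
    import socket
    import struct

    GROUP, PORT = "239.255.42.99", 5007   # hypothetical discovery group and port

    def announce_responder(service):
        """Run on the resource (e.g. a printer): answer 'DISCOVER' queries."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            if data == b"DISCOVER":
                sock.sendto(json.dumps(service).encode(), addr)

    def discover(timeout=2.0):
        """Run on the mobile computer: ask who is out there and collect replies."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.settimeout(timeout)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(b"DISCOVER", (GROUP, PORT))
        found = []
        try:
            while True:
                data, addr = sock.recvfrom(1024)
                found.append((addr[0], json.loads(data)))
        except socket.timeout:
            return found

    # Example: a printer announces itself, a laptop looks for nearby resources.
    # announce_responder({"type": "printer", "color": True, "name": "floor3-laser"})
    # print(discover())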
Some examples
- INS/Twine
- The INS/Twine system is a scalable peer-to-peer network of query
resolvers. It uses hashes of keywords, so an ontology can be created and
extended on-the-fly. Code is available (though it's not clear under what
license).
INS/Twine is designed to handle city-wide discovery of particular
resources out of a pool of potentially millions, so scalability is a large
focus of the project.
- Jini
- Jini is Sun's
Java-based distributed application infrastructure. The infrastructure uses
Java types for a syntactic ontology (e.g. "is this a color printer") and
includes a Lookup Attribute system for a more semantic ontology. Jini is
mainly meant for local resource negotiations, like automatically
downloading drivers for local printers to your laptop. It has been around
for several years, and while it hasn't taken off it is in commercial
products. Jini is primarily designed for discovering resources within a
floor or building.
- Rendezvous
- Rendezvous is Apple's open-architecture system for discovering other
resources on the local link. For example, OS X uses Rendezvous to find
printers, print servers, shared disks, iChat clients and Airport wireless
base stations. The system is based on Multicast DNS. See the
Rendezvous tech brief and Multicast
DNS Internet draft. Source code is available from Apple (open
source). Rendezvous is currently being integrated into printers from
Brother, Epson, HP, Xerox and Lexmark, as well as into products from other
third-party developers including TiVo.
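For experimenting with Multicast DNS discovery from a laptop, the third-party
python-zeroconf package implements mDNS/DNS-SD. A rough sketch of browsing for
IPP printers follows; the exact API (for example parsed_addresses) varies
between package versions, so treat this as illustrative rather than a
reference.

    # Rough sketch of browsing for IPP printers over Multicast DNS using the
    # third-party `zeroconf` package (pip install zeroconf). API details vary
    # between versions; treat this as illustrative, not a reference.
    import time
    from zeroconf import ServiceBrowser, Zeroconf

    class PrinterListener:
        def add_service(self, zc, service_type, name):
            info = zc.get_service_info(service_type, name)
            if info:
                print(f"found {name} -> {info.parsed_addresses()[0]}:{info.port}")

        def remove_service(self, zc, service_type, name):
            print(f"lost {name}")

        def update_service(self, zc, service_type, name):
            pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
    time.sleep(5)   # let responses arrive
    zc.close()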
Toolkits and architectures
Research groups with a more software engineering bent often spend a great
deal of time designing toolkits with just the right set of features to make
ubicomp, wearable and augmented reality applications easy to write.
A few examples of note
- Context
Toolkit (Anind Dey, now at Intel Berkeley)
- A toolkit that provides support for distributed sensors and processing
of those sensors. Completed as Anind's PhD thesis in the Future Computing Environment Group
at Georgia Tech (Gregory Abowd's group).
- Tiny OS (UC
Berkeley)
- TinyOS is a component-based runtime environment designed to support
concurrent programs running on minimal hardware. The OS is a part of the Wireless Embedded Systems project
at UC Berkeley, which includes the Smart Dust / Motes systems.
- iRoom
Event Heap (Stanford University)
- The iRoom Event Heap is an architecture for multi-device communication
being used in the Interactive
Workspaces Project at Stanford University. The architecture focuses on
real-time communication between devices in a smart room or office.
- AR
Toolkit (University of Washington HIT Lab)
- The AR toolkit is a library for producing augmented reality
applications using fiducials. It is free for non-commercial use.
Hardware Improvements
Many research papers are along the lines of "Here is a new design for an
even smaller, lighter, lower-power computer." While this is important
research, it is well outside of my field. Suffice it to say that Moore's Law
continues to function, and it is often useful to see what tools are
becoming available to the researcher who needs the next generation of
hardware. These papers are becoming less frequent now that handheld and
embedded computation is becoming increasingly commercial, and the latest
and greatest hardware is available from small start-ups or powerhouses like
IBM and Intel rather than from universities or industry labs.
Mobile Ad-hoc Networking (MANET)
A "mobile ad hoc network" (MANET) is an autonomous system of mobile routers
(and associated hosts) connected by wireless links, the union of which forms
an arbitrary graph. The routers are free to move randomly and organize
themselves arbitrarily; thus, the network's wireless topology may change
rapidly and unpredictably. Such a network may operate in a standalone
fashion, or may be connected to the larger Internet. Main issues include
routing algorithms, fault tolerance, scalability and minimizing power
consumption.
There are numerous industry, academic and government programs interested
in MANETs. Some good starting places for more information are the National Institute of Standards
and Technology and the MANET IETF
Working Group.
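To give a feel for the routing problem, the toy sketch below performs
on-demand route discovery by flooding, loosely in the spirit of protocols like
DSR but heavily simplified: each node rebroadcasts a route request it has not
seen before, and the request records the path it has travelled.

    # Toy flooded route discovery over a changing topology (simplified,
    # DSR-flavored sketch; no timers, retries, or link-quality handling).
    from collections import deque

    def discover_route(links, source, dest):
        """links: dict node -> set of current neighbors (the ad-hoc topology).
        Returns one source->dest path found by flooding a route request."""
        seen = {source}
        queue = deque([[source]])           # each entry is the path the request took
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dest:
                return path                 # a route reply would retrace this path
            for neighbor in links.get(node, ()):
                if neighbor not in seen:    # nodes rebroadcast each request once
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None                         # destination unreachable right now

    # Example topology; in a real MANET `links` changes as nodes move around.
    links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
             "D": {"B", "C", "E"}, "E": {"D"}}
    print(discover_route(links, "A", "E"))  # e.g. ['A', 'B', 'D', 'E']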
Sensor Networks & Distributed Computation
Berkeley Mote
Sensor nets are collections of small sensor devices that combine their
data to produce large-scale sensor readings. Usually sensor nets are based
on a mobile ad-hoc networking infrastructure, though it isn't strictly
necessary.
The prototypical sensor-net application is the ability to throw
thousands of dime-sized sensors onto a field, have them organize into an
ad-hoc network and broadcast soldier movements in the area. Other
applications include detecting vibrations in bridges and buildings, environmental monitoring and wearable
sensor networks for detecting gesture and context. As with MANET there
are many players in this field, with UC Berkeley and Intel Berkeley
as strong local examples.
Distributed computation is the study of algorithms that can be run
across multiple distributed CPUs. For example, the Pushpin
project at the MIT Media Lab experiments with using CPUs in each of the
small sensors to compute
the shape of light projected on a group of push pins. Theoretical
examinations of these algorithms include the Paintable
Computing project at the MIT Media Lab and the Amorphous
Computing project at the MIT AI Lab.
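A small sketch of the in-network processing idea: rather than every node
forwarding raw readings to the base station, each node combines its children's
partial results with its own reading, so only one small message travels up
each link of the routing tree. The tree and readings below are invented, and
real mote deployments must also cope with lossy radios, duplicates, and duty
cycling.

    # Sketch of in-network aggregation up a routing tree: each node forwards a
    # (sum, count) pair instead of raw readings, and the base station computes
    # the field-wide average. Illustrative only.
    def aggregate(node, children, readings):
        total, count = readings[node], 1
        for child in children.get(node, ()):
            child_total, child_count = aggregate(child, children, readings)
            total += child_total
            count += child_count
        return total, count                 # single small message sent to the parent

    children = {"base": ["n1", "n2"], "n1": ["n3", "n4"], "n2": [], "n3": [], "n4": []}
    readings = {"base": 21.0, "n1": 22.5, "n2": 19.8, "n3": 23.1, "n4": 22.0}
    total, count = aggregate("base", children, readings)
    print(f"average reading over {count} nodes: {total / count:.1f}")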
Wireless Personal Area Networks
A personal area network is designed for extremely short-range
communications, usually between different objects on the body or the
immediate surrounding area. Bluetooth is one PAN technology, designed
primarily for wireless cellphone headsets. The IEEE 802.15 Working Group for
WPAN is looking at several technologies that trade off data rate for
power consumption. Companies like Symbol, Federal Express, Motorola and BBN
have all been interested in these standards.
Research in the area seems to have moved on to other topics as the
standards committees grind away, although there is still some work being
done on using the electrical conductivity of the body to transmit data (and
possibly even power) through a person's skin to other devices on the body or
in contact with the body
(e.g. a doorknob or another person).
Power Harvesting
MIT Media Lab Power-harvesting shoe
Power harvesting is the capture of power from the environment to charge
devices or batteries. Solar power is still the main technology in this area
of course, but other technologies are also being developed. The energy
collected is usually quite small, but is still enough to run RF-ID tags and
small sensor-net nodes. Technologies include piezo-electric shoe inserts
and AM radio harvesting. Joe Paradiso at the
MIT Media Lab is one of the main researchers in this area. As might be
expected, DARPA also has an interest
in this area.
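To see why "quite small" can still be useful, here is a back-of-envelope
duty-cycle estimate. Every number in it is a made-up, order-of-magnitude
assumption for illustration, not a measurement from any of the projects above.

    # Back-of-envelope duty-cycle estimate. All numbers are hypothetical round
    # figures for illustration, not measurements from any real device.
    harvested_mw = 0.5    # assumed average harvested power (milliwatts)
    sleep_mw     = 0.01   # assumed node power while sleeping
    active_mw    = 20.0   # assumed power while sensing and transmitting

    # Fraction of time the node can afford to be active and still break even:
    duty_cycle = (harvested_mw - sleep_mw) / (active_mw - sleep_mw)
    print(f"sustainable duty cycle: {duty_cycle:.1%}")   # about 2.5%
    # i.e. roughly a 10 ms sense-and-send burst every 0.4 s on harvested power alone.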
Fabric Circuitry
The Burton Amp MP3 Jacket
One of the more interesting fusions in wearable computing is between
textiles and electrical engineering. For the past decade researchers have
been working on ways to create circuitry that feels like cloth, is washable,
and can be integrated into clothing using standard weaving and
automatic embroidery techniques. One of the more interesting projects is
the Sensate
Liner, a t-shirt that can detect injuries sustained by soldiers and
automatically report them to medical personnel. More commercially, this
research recently led to an MP3 jacket for snowboarders
marketed by Burton in collaboration with Apple.
Machine Perception
Thad Starner ASL translator wearable
Machine perception is the processing of data from physical sensors into
more abstract classifications. The area spans several fields, including
machine vision, speech and sound recognition and general pattern
recognition.
Research can generally be divided two different ways, by the kind of
sensors used and the kind of classifications desired. Sensors include
cameras, microphones, infra-red beacons, radio-frequency beacons, GPS,
biometric sensors (e.g. Galvanic Skin Response), accelerometers, and signals
from existing infrastructure such as cell-phone emissions. The current trend is
to combine several kinds of sensors in one system. The data people are
trying to get from these systems include location, face/speaker
recognition, gesture recognition (both unconscious and explicit), general
activity (walking, sitting, running), social situations (in conversation,
meeting someone, in a meeting), mood or cognitive load, and most recently
"activity that might be suspicious or a security risk."
The key thing to remember about machine perception is that these systems
are developed with a particular application in mind (stated or
unstated). Whether a machine truly understands what is happening
in a particular situation is a philosophical question that does not need to
be answered. All that needs to be answered is whether the machine can make
the classifications necessary for the application at hand, with the
required accuracy. Often, constraining the scope of an application or
careful user interface design can make up for otherwise unsolvable
perception problems.
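As a deliberately tiny example of "classification good enough for the
application at hand", the sketch below labels coarse activity from
accelerometer windows using a single feature and a nearest-centroid rule. The
feature choice, labels, and centroid values are illustrative assumptions, not
a description of any of the systems mentioned here.

    # Tiny nearest-centroid activity classifier over accelerometer windows.
    # The feature (standard deviation of acceleration magnitude) and the class
    # centroids are illustrative assumptions, not values from any real system.
    import math

    def window_feature(samples):
        """samples: list of (x, y, z) accelerations for one time window."""
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
        mean = sum(mags) / len(mags)
        return math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))

    # Hypothetical centroids "learned" from labeled training windows (in g's).
    CENTROIDS = {"sitting": 0.02, "walking": 0.25, "running": 0.8}

    def classify(samples):
        f = window_feature(samples)
        return min(CENTROIDS, key=lambda label: abs(CENTROIDS[label] - f))

    # Example: a fairly still window should come out as "sitting".
    still = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.0, 0.01, 0.99)] * 10
    print(classify(still))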
Machine perception is a large field well beyond wearable and ubiquitous
computing. A few players that are especially active within the
wearable/ubicomp field are:
- Alex (Sandy) Pentland, MIT Media Lab Vision and Modeling
Group. Sandy's group has been at the fore of machine perception, and
focuses on machine classification of video and audio.
- Thad Starner, Georgia Tech Contextual Computing Group. Thad
was one of the founders of the field of wearable computing. His work covers
a wide range of everyday applications with wearable computers, but often
focuses on machine vision techniques using wearable computers. One of his
ongoing projects is a wearable camera system that automatically translates
American Sign Language into English.
- Brian Clarkson, Sony CSL Interaction
Laboratory. Brian recently got his PhD from Sandy Pentland's group at
the MIT Media Lab, working on scene recognition using wearable
computers. His PhD included collecting 100 days of nearly continuous audio
and video data from a wearable computer and then automatically classifying
that data.
- Jun Rekimoto, Director of the Interaction Laboratory at the Sony Computer Science Lab. His work
spans across machine perception, augmented reality, mixed reality and
tangible interfaces.
Interface
There are three major issues in interface design for ubiquitous, mobile and
wearable computing:
- Messy environment. The environment for these applications is
less controllable than the desktop environment. This means the interfaces
need to accommodate more unexpected situations. People using wearable,
mobile and ubiquitous computing devices are also more distracted by the
environment, and so these interfaces need to require fewer perceptual and
cognitive resources to operate.
- Large amounts of captured data. These kinds of devices are very
good at automatically capturing large amounts of data. How can this data be
organized such that it is useful without being overwhelming?
- New interface paradigms. Human-computer interaction has been
mired in the WIMP (Windows, Icons, Menus and Pointers) paradigm for over
two decades. Ubicomp, mobicomp and wearables all bring out new interface
possibilities that need to be explored.
Augmented Reality
Augmented Reality systems have special machine-perception needs. In
particular, AR systems need to recognize particular locations, angles and
sometimes occlusion in a video stream so that graphics can be added, all in
real-time and with good frame-rate. Three methods are used for this:
- Head and body tracking. Some systems use GPS to track gross body
motion and accelerometers to do fine-grained tracking of the user's head
and body. Combining this information with a detailed map of the environment
allows the graphics system to add elements to the video stream with the
proper location and perspective.
- Fiducials. Some systems use infra-red beacons or printed 2D
barcodes that are of known size and shape. When the camera sees these
fiducials, their identity, position and angle can be recovered and used as
an anchor for graphical additions to the scene.
- Object recognition. More difficult than fiducials is to analyze
the video and recognize objects that should be annotated. For example, a
face-detection-and-recognition system may recognize a particular person in
a scene so a virtual name-badge can be added to the scene.
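To illustrate the fiducial approach, here is a rough sketch using OpenCV (and
not the AR Toolkit's actual pipeline) that recovers a marker's pose from its
four detected corner points and projects a virtual point back into the video
frame. The marker size, camera intrinsics, and corner pixel values are
placeholder assumptions; a real system would detect the corners in every
frame.

    # Rough fiducial-based AR sketch with OpenCV (pip install opencv-python).
    # Marker size, camera intrinsics, and corner coordinates are placeholder
    # assumptions; a real system detects the corners in each video frame.
    import numpy as np
    import cv2

    MARKER = 0.08  # assumed marker side length in meters
    # 3D corners of the square marker in its own coordinate frame.
    object_pts = np.array([[0, 0, 0], [MARKER, 0, 0],
                           [MARKER, MARKER, 0], [0, MARKER, 0]], dtype=np.float32)
    # Corner pixels as a detector might report them in one frame (placeholder).
    image_pts = np.array([[320, 240], [400, 245], [395, 320], [315, 315]],
                         dtype=np.float32)
    # Assumed pinhole camera intrinsics and no lens distortion.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    # Project a virtual point offset 5 cm off the marker plane into the frame,
    # so graphics can be drawn there with the right position and perspective.
    virtual_pt = np.array([[MARKER / 2, MARKER / 2, -0.05]], dtype=np.float32)
    pixel, _ = cv2.projectPoints(virtual_pt, rvec, tvec, K, dist)
    print("draw overlay at pixel:", pixel.ravel())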
Some major players in the space (not complete):
- Steve Feiner, Columbia University. Steve Feiner is probably the
foremost AR researcher in the field. His work spans all three methods of
AR, and focuses mainly on high-quality graphics with good frame-rate.
- Mark Billinghurst, director of the Human Interface Technology Laboratory
New Zealand (previously at the University of Washington HIT Lab). Mark's PhD work
was the AR toolkit, which is distributed free for non-commercial or
research applications.
- Alex (Sandy) Pentland, MIT Media Lab Vision and Modeling
Group. Sandy's group has been at the fore of machine perception, and
focuses on machine classification of video and audio. Sandy also advises
most of the wearable computing work currently underway at the MIT Media
Lab.
- Jun Rekimoto, Director of the Interaction Laboratory at the Sony Computer Science Lab. His AR
work is primarily with printed
2D fiducials.
- Thad Starner, Georgia Tech Contextual Computing Group. Thad
was one of the founders of the field of wearable computing. His work covers
a wide range of everyday applications with wearable computers, but often
focuses on machine vision techniques using wearable computers.
- Steve Mann, University of Toronto. Steve Mann is a combination AR
researcher and performance artist, and focuses on object-recognition
AR using wearable computers. His work is often thought-provoking, but
his evasive style makes it difficult to tell exactly what he has
successfully implemented.
Ubicomp generalizable interfaces
When a person enters a new office or building there may be many different
resources available, including communication, information and
object-control services. Each of these services has different interface
requirements. Some are absolute, e.g. a voice communicator requires a
microphone. Others are more fuzzy, such as a map interface that requires
some way to scroll but could use either buttons or sliders as
appropriate.
This research area looks at how to create interfaces on-the-fly for
different resources, based on whatever hardware is available. For example,
a directory service may provide an interface that uses audio commands for a
cellphone interface and touchscreen buttons for a PDA.
The Pebbles project at
CMU is especially working on this problem, as is Project Oxygen at the MIT
Laboratory for Computer Science.
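One common approach, sketched below, is for the service to publish an abstract
description of its controls and for each device to render that description
with whatever input and output hardware it has. The description format and
renderers are invented for illustration and do not reflect the actual Pebbles
or Project Oxygen architectures.

    # Sketch of rendering one abstract service description on different devices.
    # The description format and renderers are invented for illustration.
    directory_service = {
        "name": "Building directory",
        "controls": [
            {"kind": "choice", "label": "Floor", "options": ["1", "2", "3"]},
            {"kind": "scroll", "label": "Map"},
        ],
    }

    def render_for_phone(service):
        """Audio/keypad device: read options aloud, map scrolling to keys."""
        for c in service["controls"]:
            if c["kind"] == "choice":
                spoken = ", ".join(f"press {i + 1} for {o}"
                                   for i, o in enumerate(c["options"]))
                print(f'[voice] {c["label"]}: {spoken}')
            elif c["kind"] == "scroll":
                print(f'[voice] {c["label"]}: use the 2/4/6/8 keys to pan')

    def render_for_pda(service):
        """Touchscreen device: buttons for choices, a drag gesture for scrolling."""
        for c in service["controls"]:
            if c["kind"] == "choice":
                print(f'[screen] {c["label"]} buttons: {c["options"]}')
            elif c["kind"] == "scroll":
                print(f'[screen] {c["label"]}: drag to pan')

    render_for_phone(directory_service)
    render_for_pda(directory_service)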
Fusion of multiple capture devices
Several labs have smart meeting rooms and classrooms that capture
information in various forms (whiteboard, video, audio, notes, etc.)
and allow people to view the captured information later in a variety
of forms. Most research focuses on ways to visualize different
information streams, and especially visualizing different information
streams that are linked together because they occurred at the same
time. Some projects also focus on fusing notes taken by multiple
people, thus providing a group-wide view of an event.
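Most of these systems hinge on a simple mechanism: every captured item carries
a timestamp, so a note can serve as an index into the other streams. A minimal
sketch, with invented data structures rather than any particular project's:

    # Minimal sketch of timestamp linking: selecting a note jumps to the audio
    # and slides being captured at that moment. Data structures are invented.
    from bisect import bisect_right

    # Each "simple stream" is a sorted list of (start_time_seconds, item).
    slides = [(0, "slide-1"), (300, "slide-2"), (660, "slide-3")]
    notes  = [(45, "define ubicomp"), (310, "tabs/pads/boards"), (700, "questions")]

    def item_at(stream, t):
        """Return the stream item that was current at time t."""
        i = bisect_right([start for start, _ in stream], t) - 1
        return stream[i][1] if i >= 0 else None

    def follow_note(note_time):
        return {
            "audio_offset_s": note_time,        # seek the recording to this point
            "slide": item_at(slides, note_time),
        }

    # Tapping the second note jumps to ~310 s in the audio and shows slide-2.
    print(follow_note(notes[1][0]))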
Related references
- Automated
Capture, Integration, and Visualization of Multiple Media Streams,
Jason A. Brotherton, Janak R. Bhalodia, Gregory D. Abowd. In the
Proceedings of IEEE Multimedia '98, July, 1998.
Note especially the idea of simple streams (normal audio,
video, slides, URL, etc), control streams (editable lists of
notes or strokes that are mainly meant to index into simple streams), and
derivative streams (automatically produced indexes based on
post-processing of simple streams, e.g. visual data or gesture
recognition). (Georgia Tech eClass Project)
- Personalizing
the Capture of Public Experiences, Khai N. Truong, Gregory D. Abowd
& Jason A. Brotherton. In UIST '99. (Georgia Tech eClass
Project)
- Building
a Digital Library of Captured Educational Experiences, Gregory D.
Abowd, Lonnie D. Harvel and Jason A. Brotherton. Invited paper for the 2000 International
Conference on Digital Libraries, Kyoto, Japan, November 13-16,
2000. (Georgia Tech eClass Project)
- Linking by Interacting: A Paradigm for Authoring Hypertext, Maria G. C.
Pimentel, Gregory D. Abowd, and Yoshihide Ishiguro. In Proceedings of ACM
Hypertext 2000, May 2000.
This system automatically produces a hypertext document with
links between the timeline and co-occurring events in different media
channels. (Georgia Tech eClass Project)
- Making Sharing Pervasive: Ubiquitous Computing for Shared Note Taking,
James A. Landay and Richard C. Davis. IBM Systems Journal, 1999, 38(4):
pp. 531-550.
Notes taken either with PDA or CrossPad. Browsing is done on the Web.
Several people's notes are linked together, using a "unifying document"
like the typed meeting minutes, conference schedule or PowerPoint slides
as the main organization for individual notes. For example, each web page
will have a single PowerPoint slide, with notes from up to five people
shown on the other half of the screen. (UC Berkeley)
- The
audio notebook: paper and pen interaction with structured
speech, Lisa Stifelman, Barry Arons, Chris Schmandt. In SIGCHI 2001,
pp. 182-189.
Paper notes with linked audio content based on timestamp linking. Tap on
a page to hear the audio that was spoken at that time. Fully integrated
system (no smart room needed). (MIT Media Lab)
- NoteLook:
Taking Notes in Meetings with Digital Video and Ink, Patrick Chiu,
Ashutosh Kapuskar, Sarah Reitmeier, Lynn Wilcox. In ACM Multimedia '99.
Video linked with notes on a tablet computer. Based on the earlier
Dynomite work. Interactive (i.e. can see thumbnails & video on tablet
via wireless as you take notes). Client-server system for integrating all
the meeting capture stuff. Video camera-based presentation-recorder.
(FX PAL)
- Xlibris: Tablet
computer-based editing system with inking annotations that are
automatically interpreted to do search. Includes some
Just-in-time-information-retrieval-style interactions
(automatically puts related information in the margins of the
document). (FX PAL)
- Dynomite:
A Dynamically Organized Ink and Audio Notebook, Lynn D. Wilcox, Bill
N. Schilit, and Nitin "Nick" Sawhney. In SIGCHI '97.
Audio linked with notes taken on a tablet computer. (FX PAL)
- Filochat:
handwritten notes provide access to recorded conversations,
Whittaker, S., Hyland, P., and Wiley, M. In Proceedings of the CHI '94
Conference on Computer-Human Interaction, pp. 271-277, 1994.
Yet another PDA-based audio-linked-with-notes-based-on-timestamp system.
(HP Labs)
- Marquee: A Tool for Real-Time Video Logging, Weber, K., and Poon, A. In
Proceedings of CHI '94 (Boston, MA, USA, April 1994), ACM Press,
pp. 58-64.
- "Forget-me-not"
Intimate Computing in Support of Human Memory, M. Lamming and
M. Flynn, 1994. In Proceedings of FRIEND21, '94 International Symposium
on Next Generation Human Interface, Meguro Gajoen, Japan.
Tangible interfaces
Sony Augmented Surfaces System
Bill Buxton had an interesting talk at SIGCHI-98 where he had a picture of
a monitor, keyboard and mouse with the title "What decade was this picture
taken?" It was a picture of the Xerox Star taken 20 years ago. His point,
of course, was "what have we been doing the last 20 years?"
Tangible interfaces are one effort to move away from the WIMP
(Windows, Icons, Menus, Pointers) interface. Tangible interfaces are
physical, graspable objects that rely more on tactile feedback than
traditional interfaces. Physical icons known as phicons can
be used to give a physical handle to virtual data. For example, a
physical model of the Eiffel Tower might represent that geographic
location on a Paris map. Physically moving the tower on a graphics
table would scroll the map. Moving a physical model of the Louvre at
the same time would provide a two-handed interface for scaling and
rotating the map.
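The two-phicon interaction above reduces to a small similarity-transform
computation: the map's translation, rotation, and scale are chosen so that the
two geographic anchor points stay pinned under their physical models. A sketch
with invented coordinates, using complex numbers only as a compact way to
write 2D rotation plus scaling:

    # Sketch of the two-handed phicon interaction: a similarity transform that
    # keeps two geographic anchor points (e.g. the Eiffel Tower and the Louvre)
    # pinned under their physical models. All coordinates are invented.
    def two_phicon_transform(map_a, map_b, table_a, table_b):
        """map_*: anchor points in map coordinates; table_*: where the
        corresponding phicons sit on the table. Returns a map->table function."""
        ma, mb = complex(*map_a), complex(*map_b)
        ta, tb = complex(*table_a), complex(*table_b)
        scale_rot = (tb - ta) / (mb - ma)     # combined rotation and scaling
        offset = ta - scale_rot * ma          # translation

        def transform(p):
            z = scale_rot * complex(*p) + offset
            return (z.real, z.imag)
        return transform

    # Sliding the "Eiffel Tower" phicon while the "Louvre" phicon stays put
    # translates, rotates, and rescales the whole map underneath them.
    to_table = two_phicon_transform((2.29, 48.86), (2.34, 48.86),
                                    (0.30, 0.40), (0.70, 0.40))
    print(to_table((2.35, 48.85)))            # where to draw some other landmark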
Major players in the area:
Ambient Interfaces / Calm Technology
Hiroshi Ishii's Pinwheels showing wind speed in Tokyo
As was said earlier, interfaces in these areas need to minimize cognitive
and perceptual load. One attempt is to use ambient interfaces to quietly
deliver small amounts of information through the background environment,
such as light levels or ambient sounds. For example, one ambient display shows
the number of people communicating in a virtual office space by projecting
shadows on a translucent wall. Another represents network traffic as the
sound of raindrops falling. The idea is to make an interface that is not
distracting, but which:
- can be polled quickly should information be necessary (let me use my
peripheral vision to see if people are using the chat room now).
- can be remembered later, even when you didn't explicitly try to
remember it (now that I think of it, there were a lot of people in the chat
room around noon).
- will alert a person when interesting events occur (gee, it looks like a
lot of people are gathering in the kitchen; I wonder if there's food
there?)
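Mechanically, many of these displays are a deliberately low-bandwidth mapping
from one measured quantity to one ambient parameter, smoothed so heavily that
the display drifts rather than flickers. A toy sketch; the metric, scaling,
and update rate are invented:

    # Toy ambient-display mapping: number of people in a chat room -> brightness
    # of a projected "shadow wall", smoothed so the display drifts, not flickers.
    # The metric, scaling, and update loop are invented for illustration.
    import time

    def brightness_for(people, max_people=20):
        return min(people / max_people, 1.0)      # 0.0 (empty) .. 1.0 (crowded)

    def run_ambient_display(read_people_count, set_wall_brightness, alpha=0.05):
        level = 0.0
        while True:
            target = brightness_for(read_people_count())
            level += alpha * (target - level)     # slow exponential smoothing
            set_wall_brightness(level)
            time.sleep(5)                         # update a few times a minute

    # run_ambient_display(read_people_count=lambda: 7,
    #                     set_wall_brightness=lambda b: print(f"wall at {b:.2f}"))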
Major players in the area:
Communityware, Tracking Social Interactions, and
Smart Mobs
Many research groups are looking at how these technologies can improve
social interactions and our understanding of those interactions. Howard
Rheingold's book Smart
Mobs gives a nice overview of how mobile communication is changing
social structures by allowing real-time bottom-up organization of groups of
people. The Wearable
Communities work at the University of Oregon is looking at similar
effects from a wearable computing perspective.
Many others are using smart badges and small mobile devices to help our
understanding of community. Rick Borovoy's Folk
Computing
project helps teach schoolchildren about how information flows through a
society. Tanzeem
Choudhury, a PhD candidate at the MIT Media Lab, is completing her
thesis on "Sensing and Modeling Human Networks." Her methods use
ubiquitous and wearable computers to track interactions, and then analyze
those interactions using pattern-recognition techniques.
Application-Driven Research
Many researchers come to these fields because they have a specific problem
they want to solve. While the technologies and outcomes may be similar to
the more basic research projects, their methodology is quite different. To
quote one researcher working with the mentally disabled, "If I can help my
patients best by using Post-It Notes, I'll use Post-It Notes." People
working on real-world problems tend to take a very broad "whole system"
view of their solutions, and very quickly find that ubiquitous, mobile and
wearable solutions must integrate with the entire environment if they are
to succeed in the real world.
Currently most application-driven research falls into three areas: aids for
the disabled, military applications, and industrial jobs. All three involve
unpredictable environments where there is a serious need for information or
command-and-control in the physical world.
Helping the Disabled
Several groups are using wearables and ubiquitous computing to help the
disabled and elderly. These groups don't focus on the technology, but
rather on understanding the particular needs of their users.
Some players
- David Ross, Atlanta VA Hospital. Ross is mainly working with wearable
systems for the blind.
- Elliot Cole, Institute for
Cognitive Prosthetics. Cole is working on technological tools to help
people with brain damage. At the MIT Workshop on
Attention and Memory in Wearable Interfaces he showed a video of a
woman with memory-affecting brain damage who became almost normal when
using a Multimedia Album to help her tell stories about recent
events.
- Brad Myers of the Pebbles
project at CMU is working on a Universal Controller to help the
disabled. Such a device would be automatically configurable to work with
the individual's handicap and allow him to control lights and other
electronic devices in the environment.
Industry and Military Case Studies
Symbol wearable designed for UPS
It is often difficult to integrate these technologies into real-world
applications, and some of the most interesting research results are
case studies from these attempts. For example, Symbol Technologies has
published a detailed case study of the deployment of their wearable
barcode-scanning ring and bracelet for the UPS packing center. Many issues
arose, including the fact that wearables need to be sized to fit
individuals, that ruggedization was harder than expected, and that it was
easier to get buy-in from workers who knew how hard the job was without the
wearable than from fresh recruits. ("Development of a Commercially
Successful Wearable Data Collection System", R. Stein, S. Ferrero,
M. Hetfield, A. Quinn, M. Krichever, ISWC '98.)
Companies especially interested in wearable and ubiquitous computing
applications (as opposed to the technology itself) are Symbol Technologies,
Federal Express, and especially DARPA. Industries looking at or using the
technology include warehouses, medical (especially surgical), military,
environmental and educational fields.