Generation Smartphone
It’s the year 2020
and newlyweds Tom and Sara are
expecting their first child. Along with selecting the latest high-tech
stroller, picking out a crib, and decorating the nursery, they download the
“NewBorn” application suite to their universal communicator; they’re using what
we’ll call a SmartPhone 20.0. Before the due date, they take the phone on a
tour of the house, letting the phone’s sensors and machine-learning algorithms
create light and sound “fingerprints” for each room.
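
To make the scenario concrete: the room “fingerprinting” described above
could reduce to a nearest-neighbor lookup over sensor readings. The sketch
below is a minimal, hypothetical Python version; the light and sound values,
the room labels, and the choice of k-nearest-neighbor matching are all
illustrative assumptions, not how any shipping app works.

import math
from collections import Counter

# Hypothetical fingerprints captured on the house tour:
# (ambient light in lux, sound level in dB), labeled by room.
FINGERPRINTS = [
    ((320.0, 42.0), "nursery"),
    ((310.0, 45.0), "nursery"),
    ((540.0, 58.0), "kitchen"),
    ((560.0, 61.0), "kitchen"),
    ((150.0, 35.0), "bedroom"),
    ((140.0, 33.0), "bedroom"),
]

def classify_room(light_lux, sound_db, k=3):
    """Guess the current room by k-nearest-neighbor match against
    the stored light/sound fingerprints."""
    dists = sorted(
        (math.dist((light_lux, sound_db), feats), room)
        for feats, room in FINGERPRINTS
    )
    votes = Counter(room for _, room in dists[:k])
    return votes.most_common(1)[0][0]

print(classify_room(315.0, 44.0))  # -> "nursery"

A real system would add more sensor channels and normalize the features,
but the matching principle is the same.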
When they settle Tom
Jr. down for his first nap at home, they place the SmartPhone 20.0 in his crib.
Understanding that the crib is where the baby sleeps, the SmartPhone activates
its sudden infant death syndrome (SIDS) application and uses its built-in
microphone, accelerometers, and other sensors to monitor little Tommy’s
heartbeat and respiration. The “Baby Position” app analyzes the live video
stream to ensure that Tommy does not flip over onto his stomach—a position that
the medical journals still report contributes to SIDS. The NewBorn application suite updates itself
with the latest medical findings. To lull Tommy to sleep, the SmartPhone 20.0
plays music, testing out a variety of selections and learning by observation
which music is most soothing for this particular infant. As a toddler, Tommy is
very observant and has learned the combination on the gate to the swimming pool
area. One day, while his parents have their backs turned, he starts working the
lock. His SmartPhone “Guardian” app recognizes what he is doing, sounds an
alarm, disables the lock, and plays a video demonstrating what could happen if
Tommy fell into the pool with no one else around. Not happy at being thwarted,
Tommy throws a tantrum, and the Guardian app, noting his parents’ arrival,
briefs them on the situation and suggests a time-out.
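
The app’s trick of learning by observation which music is most soothing
reads like a classic multi-armed-bandit problem. Below is a minimal
epsilon-greedy sketch in Python; the LullabyPicker class, the song list,
and the soothing score (which a real app would have to infer from its
motion and sound sensors) are all invented for illustration.

import random

class LullabyPicker:
    """Epsilon-greedy bandit: mostly replay the song that has soothed
    the baby best so far, occasionally try another one."""
    def __init__(self, songs, epsilon=0.1):
        self.epsilon = epsilon
        self.plays = {s: 0 for s in songs}
        self.reward = {s: 0.0 for s in songs}  # running average score

    def choose(self):
        if random.random() < self.epsilon or not any(self.plays.values()):
            return random.choice(list(self.plays))
        return max(self.reward, key=self.reward.get)

    def record(self, song, soothing_score):
        # soothing_score: e.g., 1.0 if the baby fell asleep, 0.0 if not.
        self.plays[song] += 1
        n = self.plays[song]
        self.reward[song] += (soothing_score - self.reward[song]) / n

picker = LullabyPicker(["brahms", "twinkle", "white_noise"])
song = picker.choose()
picker.record(song, soothing_score=1.0)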
But the SmartPhone
20.0 won’t be just a high-tech baby monitor. Rather, the device or smart mobile
devices like it will serve as nanny, nurse, or golf caddy—the perfect assistant
for people of all ages. If you think that people can’t seem to make a move
without consulting their phones today, well, you ain’t seen nothing yet.
Let’s age Tommy to 3
years old. Tom and Sara take him skiing for the first time. Tommy’s SmartPhone,
now version 23.0, downloads the “Virtual Skiing Coach,” which uses
accelerometers sewn into Tommy’s clothing to sense his posture and then offer
suggestions for maintaining balance; when it foresees an impending collision,
it quickly blurts out instructions on how to stop. We already have basic sensor-based
virtual coaches. For example, some
devices use accelerometers and gyroscopes to track motion during rehabilitation
exercises and correct errors. Such coaches would enable therapists to remotely
monitor home-based exercise, making it easier for seniors to remain at home as
they age and reducing health-care costs.
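
A virtual coach of this kind could start from something as simple as
estimating lean angle from a single accelerometer. The Python sketch below
is only a guess at the approach: the 25-degree threshold and the sample
reading are invented numbers, and a real coach would fuse gyroscope data
and motion context rather than rely on one static reading.

import math

def tilt_from_accel(ax, ay, az):
    """Forward/backward lean angle in degrees, estimated from a 3-axis
    accelerometer under the assumption that the wearer is roughly
    static, so gravity dominates the reading."""
    return math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))

def coach(ax, ay, az, max_lean=25.0):
    lean = tilt_from_accel(ax, ay, az)
    if abs(lean) > max_lean:
        return f"Leaning {lean:.0f} degrees: bend your knees and center your weight."
    return "Posture OK."

# Sample reading in g-units: a pronounced forward lean.
print(coach(0.50, 0.05, 0.86))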
At age 5, with the
SmartPhone 25.0 education apps, Tommy has become a curious and eager learner.
He looks forward to his first day at kindergarten. He meets Alice, who can
neither hear nor speak, but because of her SmartPhone, she is able to easily
participate in class. Alice greets Tommy by signing, and her SmartPhone plays a
translation provided by the American Sign Language (ASL) app. Tommy responds,
and Alice’s speech-recognition app provides her with real-time captioning.
Tommy shares his favorite song with Alice, sending it from his SmartPhone to
hers, which translates the music into vibrations in a vest she wears. One day,
Tommy is walking home from school, and the SmartPhone 27.0 Guardian app notices
that a stranger has started a conversation with him and is coaxing Tommy to get
into a van. The Guardian app whispers in Tommy’s ear not to talk to the
stranger and tells him to run to a nearby house, one the app has already
verified as a local kid-safe house and confirmed that someone is home. The
Guardian app takes a picture of the stranger and the license plate of his van
and forwards the information to the police.
First Person Vision, introduced at
the 2011 CES, uses video taken by wearable cameras and smartphones to identify
gestures, actions, and faces in real time. It’s not much of a stretch to
envision it alerting users to threats.
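
The real-time face spotting that First Person Vision demonstrated can be
approximated today with off-the-shelf tools. The sketch below runs
OpenCV’s stock Haar-cascade face detector over a live camera feed; it is
a generic stand-in, not the First Person Vision pipeline itself.

import cv2

# Stock frontal-face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # wearable or phone camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()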
For Tommy’s 16th
birthday, his parents download the “Driving Instructor” app. Of course, by 2036
cars have many safety features but still require the driver to take over in
emergency situations, so a driver’s license is still required. Under the
tutelage of the app, Tommy becomes an excellent driver; his parents trust that
they’ll be alerted if he starts driving recklessly. These kinds of
driver-monitoring tools are now in the lab. For example, the DriveCap project
at the Quality of Life Technology Center, in Pittsburgh, run by Carnegie Mellon
University and the University of Pittsburgh, uses in-car sensors to track
driver behavior (accelerometers can detect erratic maneuvers and sudden
changes in braking and acceleration) and the driver’s cognitive load—that is,
how attentive, tired, or overwhelmed the driver is—by focusing a camera on the
eyes.
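
The accelerometer half of that monitoring is easy to prototype: erratic
maneuvers show up as spikes in jerk, the rate of change of acceleration.
The toy Python function below is a guess at the general approach, not
DriveCap’s actual algorithm, and the 8 m/s^3 threshold is purely
illustrative.

def erratic_events(accel_samples, dt=0.1, jerk_limit=8.0):
    """Flag sudden maneuvers by thresholding jerk (m/s^3), computed
    from longitudinal accelerometer samples taken every dt seconds."""
    events = []
    for i in range(1, len(accel_samples)):
        jerk = (accel_samples[i] - accel_samples[i - 1]) / dt
        if abs(jerk) > jerk_limit:
            events.append((i * dt, jerk))
    return events

# Smooth cruising, then a sudden hard-braking spike (m/s^2).
samples = [0.1, 0.2, 0.1, 0.0, -3.5, -4.0, -0.5]
print(erratic_events(samples))  # flags the abrupt transitions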
Years later, Tom
Jr.’s SmartPhone (upgraded, of course, many times over the years) continues to
be a trusted companion. On a business trip, the “Administrative Assistant” app
reminds Tom of people’s names and their connections to him; this is an
easy-to-imagine extension of First Person Vision. Tom has an appointment in a
large building complex, which has a confusing maze of corridors and bridges
between buildings. Tom’s SmartPhone snaps pictures for comparison to an archive
of pictures of different parts of a building; that’s something the First Person
Vision app already does. By locating his position on a floor plan and knowing
his destination, the “Building Navigation” app can efficiently guide him to his
meeting. Applications like this already exist; the simplest are based on indoor
maps developed by Google Places for Business.
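
Once the phone has fixed your position on the floor plan, the guidance
itself is a textbook shortest-path search. The sketch below models a
building as a hypothetical graph of corridors and runs a breadth-first
search; a production app would weight edges by distance and build the
graph from indoor map data like Google’s.

from collections import deque

# Hypothetical floor plan: nodes are rooms and corridor junctions,
# edges are walkable links.
FLOOR_PLAN = {
    "lobby": ["hall_a", "elevator"],
    "hall_a": ["lobby", "bridge", "room_210"],
    "bridge": ["hall_a", "hall_b"],
    "hall_b": ["bridge", "room_305"],
    "elevator": ["lobby", "hall_b"],
    "room_210": ["hall_a"],
    "room_305": ["hall_b"],
}

def route(start, goal):
    """Breadth-first search: fewest corridor segments from start to goal."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in FLOOR_PLAN[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(route("lobby", "room_305"))
# -> ['lobby', 'elevator', 'hall_b', 'room_305']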
On one trip, Tom
twists his ankle while jogging. His SmartPhone directs him to the nearest
emergency room; iPhone 4s users are already familiar with the Siri app’s
ability to do this kind of location finding. Later the SmartPhone recognizes
that Tom is using his crutches incorrectly and gives him some pointers. While a
“crutches coach” is not currently on the market, similar coaches have been
demonstrated in the field. People who use manual wheelchairs are susceptible to
repetitive-use injuries to their wrists and shoulder rotator cuffs. Researchers
at Carnegie Mellon and the University of Pittsburgh have tested accelerometers
in a wristwatch-like bracelet that classifies the arm movements and encourages
those patterns that generate the least stress on the wrist and shoulder.
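
In spirit, the bracelet’s feedback could reduce to a rule over simple
stroke statistics. The sketch below invents both the features (stroke
cadence and peak wrist acceleration) and the cutoffs; the actual research
classifies arm movements with trained models rather than hand-set
thresholds like these.

def stroke_feedback(peak_accels, window_s=30.0):
    """Toy feedback rule: frequent, high-impact push strokes load the
    wrist and shoulder more than fewer, smoother ones."""
    cadence = len(peak_accels) / window_s * 60.0    # strokes per minute
    avg_peak = sum(peak_accels) / len(peak_accels)  # g-units at the wrist
    if cadence > 60 or avg_peak > 1.8:
        return "Try longer, smoother pushes to reduce joint stress."
    return "Propulsion pattern looks low-stress."

# 35 strokes in 30 seconds -> cadence of 70 per minute -> advice.
print(stroke_feedback([1.2, 1.1, 1.3, 1.4, 1.0] * 7))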
Powered wheelchairs are being used to test more sophisticated built-in sensors
to help users with spinal cord injuries avoid developing pressure sores by
making sure they shift positions frequently; these devices have also been
tested at the two Pittsburgh universities. Tool kits for simplifying the
development of augmented-reality applications already exist; one, ARToolKit,
is an open-source project supported by the University of Washington; the
University of Canterbury, in New Zealand; and ARToolworks, in Seattle. To pass along his
father’s life lessons, Tom records video of his father answering a variety of
questions. In years to come, Tom’s son will ask questions, which the SmartPhone’s
speech recognizer will match with an automatically generated index of the video
clips, letting the grandson have simulated conversations with his grandfather.
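
Matching a spoken question to an index of video clips is, at its
simplest, a retrieval problem. Here is a toy keyword-overlap matcher in
Python; the clip names, transcripts, and stop-word list are all invented,
and a real index would use full speech recognition and ranked retrieval.

import re

# Hypothetical index from clip files to transcripts of the answers.
CLIPS = {
    "first_job.mp4": "my first job was delivering newspapers before school",
    "war_years.mp4": "during the war we rationed sugar and gasoline",
    "dance.mp4": "I met your grandmother at a church dance one spring",
}

STOP = {"the", "a", "was", "what", "when", "how", "your", "my",
        "did", "tell", "me", "about"}

def best_clip(question):
    """Return the clip whose transcript shares the most content words
    with the speech-recognized question."""
    words = set(re.findall(r"[a-z]+", question.lower())) - STOP
    return max(CLIPS, key=lambda c: len(words & set(CLIPS[c].split())))

print(best_clip("What was your first job?"))  # -> first_job.mp4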
Even later, Tom’s
declining health requires ever more monitoring by his doctor. Fortunately, Tom’s
SmartPhone Health app allows his doctor to request routine self-monitoring
tests using sensors built into the phone. The app administers these tests to
Tom according to a schedule set by the doctor, who can review test results and
order additional tests if necessary. The Health app also monitors Tom’s
activities and notifies the nurse in the independent living complex where he
now resides if there are any anomalies. Today, “health kiosks” that perform
these functions are already in use in workplaces and senior living centers.
Already, new applications like VitalClip, an iPhone accessory soon to go into
a private beta test, allow users to measure vital signs by touching a finger to
a sensor.
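
The anomaly monitoring in that scenario could be as simple as comparing
today’s activity against the resident’s own baseline. The sketch below
flags a step count that falls more than two standard deviations from a
week of history; the figures and the notification rule are illustrative
assumptions.

import statistics

def check_activity(history, today_steps, z_limit=2.0):
    """Flag today's step count as anomalous if it sits more than
    z_limit standard deviations from this person's baseline."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    z = (today_steps - mean) / sd
    if abs(z) > z_limit:
        return f"Notify nurse: step count {today_steps} is unusual (z = {z:.1f})."
    return "Activity within normal range."

week = [4200, 3900, 4500, 4100, 4300, 4000, 4400]
print(check_activity(week, today_steps=900))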
The apps that help
Tom throughout his imaginary life are all straightforward extrapolations from
what exists today. But technology isn’t always bound to a straight path. In the
future, the SmartPhone and smart communicators like it will decrease in size
until sensing and computing are simply part of everyday objects, integrated into
the outer “skins” of devices, woven into clothing, and embedded into
countertops. This integrated technology will be situationally aware,
understanding the user’s intent and jumping in to help without a touch or a
voice command. The Tommys of the future will be protected by helmets and
uniforms that anticipate potential concussion-causing collisions and quickly
react with counterforces that minimize bruising of the brain. Their footballs
will signal “first down” from the bottom of the pile of players—no human
judgment necessary. Their kitchens will figure out what meal is being made as
ingredients are pulled from the refrigerator and step-by-step preparation
instructions are displayed on the countertop. All this technology will have a
zero carbon footprint, as it scavenges energy from radio waves in the
environment and biodegrades when it is discarded. And we can see this future
reflected in today’s smartphones.
- Technical Team