U.S. patent application number 14/137263 was filed with the patent office on 2013-12-20 and published on 2014-09-11 as publication number 20140253702, for an apparatus and method for executing system commands based on captured image data.
This patent application is currently assigned to OrCam Technologies, Ltd. The applicants listed for this patent are Amnon Shashua and Yonatan Wexler. The invention is credited to Amnon Shashua and Yonatan Wexler.
United States Patent Application 20140253702
Kind Code: A1
Wexler; Yonatan; et al.
September 11, 2014

APPARATUS AND METHOD FOR EXECUTING SYSTEM COMMANDS BASED ON CAPTURED IMAGE DATA
Abstract
An apparatus and method are provided for identifying and
executing system commands based on captured image data. In one
implementation, a method is provided for executing at least one
command retrieved from a captured image. According to the method,
image data is received from an image sensor, and the image data may
include printed information associated with a specific system
command. The method further includes accessing a database
including a plurality of predefined system commands associated with
printed information, and identifying in the image data an existence
of the printed information associated with the specific system
command stored in the database. The specific system command is
executed after the printed information associated with the specific
system command is identified.
Inventors: Wexler; Yonatan (Jerusalem, IL); Shashua; Amnon (Mevaseret Zion, IL)

Applicant:
Name | City | State | Country | Type
Wexler; Yonatan | Jerusalem | | IL |
Shashua; Amnon | Mevaseret Zion | | IL |

Assignee: OrCam Technologies, Ltd., Jerusalem, IL

Family ID: 51487375

Appl. No.: 14/137263

Filed: December 20, 2013
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61775603 | Mar 10, 2013 |
61799649 | Mar 15, 2013 |
61830122 | Jun 2, 2013 |
Current U.S. Class: 348/62
Current CPC Class: G09B 21/008 20130101; G06F 3/011 20130101; G09B 21/006 20130101; H04M 1/72577 20130101; G06K 9/00442 20130101; G06F 1/163 20130101; H04M 1/72522 20130101; G06K 9/18 20130101; H04M 2250/52 20130101; A61F 9/08 20130101
Class at Publication: 348/62
International Class: A61F 9/08 20060101 A61F009/08; G06K 9/18 20060101 G06K009/18
Claims
1. An apparatus operated by at least one command retrieved from a
captured image, the apparatus comprising: an image sensor
configured to be worn by a user and to capture image data from an
environment of the user; a mobile power source for powering at
least the image sensor; and at least one portable processor device
configured for tethering to the image sensor and configured to:
identify human-readable text in the image data, the human-readable
text representing a predefined system command; and execute the
predefined system command represented by the human-readable text
after the human-readable text is identified.
2. The apparatus of claim 1, wherein the image sensor is configured
to be movable with a head of the user.
3. The apparatus of claim 1, wherein the tethering between the at
least one processor device and the image sensor is based on a wired
connection.
4. The apparatus of claim 1, wherein the tethering between the at
least one processor device and the image sensor is based on a
wireless connection.
5. The apparatus of claim 1, wherein the at least one portable
processor device is further configured to: access a database of a
plurality of system commands; identify a portion of printed
information in the image data that corresponds to the human-readable
text, the printed information portion representing a corresponding
one of the system commands; and establish the corresponding one of
the system commands as the predefined system command.
6. The apparatus of claim 5, wherein the database is associated
with a server having an Internet connection and remotely located
with respect to the apparatus.
7. The apparatus of claim 1, wherein the at least one processor device
is further configured to perform an optical character recognition
on the image data to identify the human-readable text.
8. The apparatus of claim 5, wherein the at least one processor device
is further configured to perform image processing on the image data
to identify the human-readable text.
9. The apparatus of claim 1, wherein the at least one portable
processor device is further configured to identify non-textual
information representing the predefined system command within the
image data.
10. The apparatus of claim 1, wherein the human-readable text
includes hand-written information.
11. (canceled)
12. The apparatus of claim 5, wherein the at least one processor
device is further configured to execute the predefined system
command without tactile input from the user.
13. The apparatus of claim 1, wherein the at least one processor
device is further configured to execute the predefined system
command without audio input from the user.
14. The apparatus of claim 1, wherein the predefined system command
includes a plurality of steps.
15. The apparatus of claim 1, wherein the predefined system command
includes at least one of the following: enter training mode, enter
sleep mode, enter airplane mode, start recording, end recording,
download stored photo, backup content, update operating system,
restart system, change device configuration, and erase
customization.
16. The apparatus of claim 1, wherein the predefined system command
includes performing an action on a particular file.
17. An apparatus operated by at least one command retrieved from a
captured image, the apparatus comprising: an image sensor
configured to be worn by a user and to capture image data from an
environment of the user; and at least one portable processor device
configured for tethering to the image sensor and configured to:
receive the image data from the image sensor; identify
human-readable text in the image data, the human-readable text
representing a predefined system command; and execute the
predefined system command represented by the human-readable text
after the human-readable text is identified.
18. The apparatus of claim 17, wherein identifying the predefined
system command includes performing optical character recognition on
the image data, the optical character recognition being executed
automatically upon receipt of the image data.
19. The apparatus of claim 17, wherein identifying the predefined
system command includes performing image processing on the image
data, the image processing being executed automatically upon
receipt of the image data.
20. The apparatus of claim 17, wherein the at least one processor
device is further configured to execute the predefined system
command automatically after the predefined system command is
identified.
21. The apparatus of claim 17, wherein the at least one processor
device is further configured to execute the predefined system
command after receiving an audible confirmation from the user.
22. A method for executing at least one command retrieved from a
captured image, the method comprising: receiving image data from an
image sensor; identifying human-readable text in the image data,
the human-readable text representing a predefined system command;
and executing the predefined system command after the
human-readable text representing the predefined system command is
identified.
23. A software product stored on a tangible non-transitory computer
readable medium and comprising data and computer implementable
instructions that, when executed by at least one processor, cause
the at least one processor to perform a method, comprising:
receiving image data from an image sensor; identifying
human-readable text in the image data, the human-readable text
representing a predefined system command; and executing the
predefined system command represented by the human-readable text
after the human-readable text is identified.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority of U.S.
Provisional Patent Application No. 61/775,603, filed Mar. 10, 2013,
U.S. Provisional Patent Application No. 61/799,649, filed on Mar.
15, 2013, and U.S. Provisional Patent Application No. 61/830,122,
filed on Jun. 2, 2013, the disclosures of which are incorporated
herein by reference in their entirety.
BACKGROUND
[0002] I. Technical Field
[0003] This disclosure generally relates to devices and methods for
providing information to a user. More particularly, this disclosure
relates to devices and methods for providing information to a user
by processing images captured from the environment of the user.
[0004] II. Background Information
[0005] Visual acuity is an indication of the clarity or clearness
of a person's vision that is commonly measured at a distance of twenty feet from an
object. When measuring visual acuity, the ability of a person to
identify black symbols on a white background at twenty feet is
compared to the ability of a person with normal eyesight. This
comparison can be symbolized by a ratio. For example, a ratio of
20/70 vision means a person located at a distance of twenty feet
can see what a person with normal vision can see at seventy feet. A
person has low vision if he or she has a visual acuity between
20/70 and 20/200 in the better-seeing eye that cannot be corrected
or improved with regular eyeglasses. The prevalence of low vision
is about one in a hundred for people in their sixties and rapidly
increases to one in five for people in their nineties. Low vision
may also depend on the environment. For example, some individuals
may be able to see only when there is ample light.
[0006] A person may have low vision (also known as visual
impairment) for several reasons. Other than eye damage and failure
of the brain to receive visual cues sent by the eyes, different
medical conditions may cause visual impairment. Medical conditions
that may cause visual impairment include Age-related Macular
Degeneration (AMD), retinitis pigmentosa, cataract, and diabetic
retinopathy.
[0007] AMD, which usually affects adults, is caused by damage to
the retina that diminishes vision in the center of a person's
visual field. The lifetime risk for developing AMD is strongly
associated with certain genes. For example, the lifetime risk of
developing AMD is 50% for people that have a relative with AMD,
versus 12% for people that do not have relatives with AMD.
[0008] Retinitis pigmentosa is an inherited, degenerative eye
disease that causes severe vision impairment and often blindness.
The disease process begins with changes in pigment and damage to
the small arteries and blood vessels that supply blood to the
retina. There is no cure for retinitis pigmentosa and no known
treatment can stop the progressive vision loss caused by the
disease.
[0009] A cataract is a clouding of the lens inside the eye which
leads to a decrease in vision. Over time, a yellow-brown pigment is
deposited within the lens and obstructs light from passing and
being focused onto the retina at the back of the eye. Biological
aging is the most common cause of a cataract, but a wide variety of
other risk factors (e.g., excessive tanning, diabetes, prolonged
steroid use) can cause a cataract.
[0010] Diabetic retinopathy is a systemic disease that affects up
to 80% of all patients who have had diabetes for ten years or more.
Diabetic retinopathy causes microvascular damage to a blood-retinal
barrier in the eye and makes the retinal blood vessels more
permeable to fluids.
[0011] People with low vision experience difficulties due to lack
of visual acuity, field-of-view, color perception, and other visual
impairments. These difficulties affect many aspects of everyday
life. Persons with low vision may use magnifying glasses to
compensate for some aspects of low vision. For example, if the
smallest letter a person with 20/100 vision can read is five times
larger than the smallest letter that a person with 20/20 vision can
read, then 5× magnification should make everything that is
resolvable to the person with 20/20 vision resolvable to the person
with low vision. However, magnifying glasses are expensive and
cannot remedy all aspects of low vision. For example, a person with
low vision who wears magnifying glasses may still have a difficult
time recognizing details from a distance (e.g., people, signboards,
traffic lights, etc.). Accordingly, there is a need for other
technologies that can assist people who have low vision accomplish
everyday activities.
SUMMARY
[0012] Embodiments consistent with the present disclosure provide
devices and methods for providing information to a user by
processing images captured from the environment of the user. The
disclosed embodiments may assist persons who have low vision.
[0013] Consistent with disclosed embodiments, an apparatus may be
operated by at least one command retrieved from a captured image.
In one aspect, the apparatus includes an image sensor configured to
be worn by a user and to capture image data from an environment of
the user, a mobile power source for powering at least the image
sensor, and at least one portable processor device configured for
tethering to the image sensor. The at least one portable processor
device may be configured to access a database of a plurality of
predefined system commands associated with printed information in
the image data, and identify in the image data an existence of
printed information associated with a specific system command
stored in the database. The at least one portable processor device
may be further configured to execute the specific system command
after the printed information associated with the specific system
command is identified.
[0014] Consistent with additional disclosed embodiments, an
apparatus may be operated by at least one command retrieved from a
captured image. In one aspect, the apparatus may include an image
sensor configured to be worn by a user and to capture image data
from an environment of the user, and at least one portable
processor device configured for tethering to the image sensor. The
at least one portable processor device may be configured to receive
the image data from the image sensor. The image data may include
printed information associated with a specific system command. The
at least one portable processor device may be further configured to
identify in the image data an existence of the printed information,
identify the specific system command associated with the printed
information, and execute the specific system command after the
specific system command is identified.
[0015] Consistent with further disclosed embodiments, a method for
executing at least one command retrieved from a captured image
includes receiving image data from an image sensor. In one aspect,
the image data includes printed information associated with a
specific system command. The method further includes accessing a
database including a plurality of predefined system commands
associated with printed information, identifying in the image data
an existence of the printed information associated with the
specific system command stored in the database, and executing the
specific system command after the printed information associated
with the specific system command is identified.
[0016] Consistent with other disclosed embodiments, non-transitory
computer-readable storage media may store program instructions,
which are executed by at least one processor device and perform any
of the methods described herein.
[0017] The foregoing general description and the following detailed
description are exemplary and explanatory only and are not
restrictive of the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated in and
constitute a part of this disclosure, illustrate various disclosed
embodiments. In the drawings:
[0019] FIG. 1 is a schematic illustration of a user wearing an
apparatus for aiding persons who have low vision;
[0020] FIG. 2A is a schematic illustration of an example of a
support from a first viewpoint;
[0021] FIG. 2B is a schematic illustration of the support shown in
FIG. 2A from a second viewpoint;
[0022] FIG. 2C is a schematic illustration of the support shown in
FIG. 2A mounted on a pair of glasses;
[0023] FIG. 2D is a schematic illustration of a sensory unit
attached to the support that is mounted on the pair of glasses
shown in FIG. 2C;
[0024] FIG. 2E is an exploded view of FIG. 2D;
[0025] FIG. 3A is a schematic illustration of an example of a
sensory unit from a first viewpoint;
[0026] FIG. 3B is a schematic illustration of the sensory unit
shown in FIG. 3A from a second viewpoint;
[0027] FIG. 3C is a schematic illustration of the sensory unit
shown in FIG. 3A from a third viewpoint;
[0028] FIG. 3D is a schematic illustration of the sensory unit
shown in FIG. 3A from a fourth viewpoint;
[0029] FIG. 3E is a schematic illustration of the sensory unit
shown in FIG. 3A in an extended position;
[0030] FIG. 4A is a schematic illustration of an example of a
processing unit from a first viewpoint;
[0031] FIG. 4B is a schematic illustration of the processing unit
shown in FIG. 4A from a second viewpoint;
[0032] FIG. 5A is a block diagram illustrating an example of the
components of an apparatus for aiding persons who have low vision
according to a first embodiment;
[0033] FIG. 5B is a block diagram illustrating an example of the
components of an apparatus for aiding persons who have low vision
according to a second embodiment;
[0034] FIG. 5C is a block diagram illustrating an example of the
components of an apparatus for aiding persons who have low vision
according to a third embodiment;
[0035] FIG. 5D is a block diagram illustrating an example of the
components of an apparatus for aiding persons who have low vision
according to a fourth embodiment;
[0036] FIG. 6 illustrates an exemplary set of application modules
and databases, according to disclosed embodiments;
[0037] FIG. 7 is a flow diagram of an exemplary process for
identifying and executing system commands based on captured image
data, according to disclosed embodiments;
[0038] FIG. 8 is a flow diagram of an exemplary process for
identifying and executing system commands based on textual
information within captured image data, according to disclosed
embodiments;
[0039] FIG. 9 is a flow diagram of an exemplary process for
executing an identified system command, according to disclosed
embodiments; and
[0040] FIGS. 10-15 illustrate exemplary image data captured by an
apparatus for aiding persons who have low vision, according to
disclosed embodiments.
DETAILED DESCRIPTION
[0041] The following detailed description refers to the
accompanying drawings. Wherever possible, the same reference
numbers are used in the drawings and the following description to
refer to the same or similar parts. While several illustrative
embodiments are described herein, modifications, adaptations and
other implementations are possible. For example, substitutions,
additions or modifications may be made to the components
illustrated in the drawings, and the illustrative methods described
herein may be modified by substituting, reordering, removing, or
adding steps to the disclosed methods. Accordingly, the following
detailed description is not limited to the disclosed embodiments
and examples. Instead, the proper scope is defined by the appended
claims.
[0042] Disclosed embodiments provide devices and methods for
assisting people who have low vision. One example of the disclosed
embodiments is a device that includes a camera configured to
capture real-time image data from the environment of the user. The
device also includes a processing unit configured to process the
real-time image data and provide real-time feedback to the user.
The real-time feedback may include, for example, an output that
audibly identifies individuals from a distance, reads signboards,
and/or identifies the state of a traffic light.
[0043] FIG. 1 illustrates a user 100 wearing an apparatus 110
connected to glasses 105, consistent with a disclosed embodiment.
Apparatus 110 may provide functionality for aiding user 100 with
various daily activities that are otherwise difficult for user 100
to accomplish due to low vision. Glasses 105 may be prescription
glasses, magnifying glasses, nonprescription glasses, safety
glasses, sunglasses, etc.
[0044] As shown in FIG. 1, apparatus 110 includes a sensory unit
120 and a processing unit 140. Sensory unit 120 may be connected to
a support (not shown in FIG. 1) that is mounted on glasses 105. In
addition, sensory unit 120 may include an image sensor (not shown
in FIG. 1) for capturing real-time image data of the field-of-view
of user 100. The term "image data" includes any form of data
retrieved from optical signals in the near-infrared, infrared,
visible, and ultraviolet spectrums. The image data may be used to
form video clips and/or photographs.
[0045] Processing unit 140 may communicate wirelessly or via a wire
130 connected to sensory unit 120. In some embodiments, processing
unit 140 may produce an output of audible feedback to user 100
(e.g., using a speaker or a bone conduction headphone).
[0046] Apparatus 110 is one example of a device capable of
implementing the functionality of the disclosed embodiments. Other
devices capable of implementing the disclosed embodiments include,
for example, a mobile computer with a camera (e.g., a smartphone, a
smartwatch, a tablet, etc.) or a clip-on-camera configured to
communicate with a processing unit (e.g., a smartphone or a
dedicated processing unit, which can be carried in a pocket). A
person skilled in the art will appreciate that different types of
devices and arrangements of devices may implement the functionality
of the disclosed embodiments.
[0047] FIG. 2A is a schematic illustration of an example of a
support 210. As discussed in connection with FIG. 1, support 210
may be mounted on glasses 105 and connect to sensory unit 120. The
term "support" includes any device or structure that enables
detaching and reattaching of a device including a camera to a pair
of glasses or to another object (e.g., a helmet). Support 210 may
be made from plastic (e.g., polycarbonate), metal (e.g., aluminum),
or a combination of plastic and metal (e.g., carbon fiber
graphite). Support 210 may be mounted on glasses 105 using screws,
bolts, snaps, or any fastening means used in the art.
[0048] As shown in FIG. 2A, support 210 includes a base 230
connected to a clamp 240. A bridge 220 connects base 230 with clamp
240. Base 230 and clamp 240 enable sensory unit 120 to easily
attach to and detach from support 210. In one embodiment, base 230
may include an internally threaded member 250 for cooperating with
a screw (not shown in FIG. 2A) to mount support 210 on glasses
105.
[0049] FIG. 2B illustrates support 210 from a second viewpoint. The
viewpoint shown in FIG. 2B is from a side orientation of support
210.
[0050] FIG. 2C illustrates support 210 mounted on glasses 105.
Support 210 may be configured for mounting on any kind of glasses
(e.g., eyeglasses, sunglasses, 3D glasses, safety glasses, etc.).
As shown in FIG. 2C, sensory unit 120 is not attached to support
210 and, accordingly, support 210 may be sold separately from
apparatus 110. This arrangement makes apparatus 110 compatible with
a variety of glasses. For example, some users may have several
pairs of glasses and may wish to mount a support on each pair of
glasses.
[0051] In other embodiments, support 210 may be an integral part of
a pair of glasses, or sold and installed by an optometrist. For
example, support 210 may be configured for mounting on the arms of
glasses 105 near the frame front, but before the hinge.
Alternatively, support 210 may be configured for mounting on the
bridge of glasses 105.
[0052] FIG. 2D illustrates sensory unit 120 attached to support 210
(not visible in FIG. 2D), and support 210 mounted on glasses 105.
In some embodiments, support 210 may include a quick release
mechanism for disengaging and reengaging sensory unit 120. For
example, support 210 and sensory unit 120 may include magnetic
elements. As an alternative example, support 210 may include a male
latch member and sensory unit 120 may include a female
receptacle.
[0053] When sensory unit 120 is attached (or reattached) to support
210, the field-of-view of a camera associated with sensory unit 120
may be substantially identical to the field-of-view of user 100.
Accordingly, in some embodiments, after support 210 is attached to
sensory unit 120, directional calibration of sensory unit 120 may
not be required because sensory unit 120 aligns with the
field-of-view of user 100.
[0054] In other embodiments, support 210 may include an adjustment
component (not shown in FIG. 2D) to enable calibration of the
aiming direction of sensory unit 120 in a substantially set
position that is customized to user 100 wearing glasses 105. For
example, the adjustment component may include an adjustable hinge
to enable vertical and horizontal alignment of the aiming direction
of sensory unit 120. Adjusting the alignment of sensory unit 120
may assist users who have a unique and individual visual
impairment. The adjustment may be internal or external to sensory
unit 120.
[0055] FIG. 2E is an exploded view of the components shown in FIG.
2D. Sensory unit 120 may be attached to glasses 105 in the
following way. Initially, support 210 may be mounted on glasses 105
using screw 260. Next, screw 260 may be inserted into internally
threaded member 250 (not shown in FIG. 2E) in the side of support
210. Sensory unit 120 may then be clipped on support 210 such that
it is aligned with the field-of-view of user 100.
[0056] FIG. 3A is a schematic illustration of sensory unit 120 from
a first viewpoint. As shown in FIG. 3A, sensory unit 120 includes a
feedback-outputting unit 340 and an image sensor 350.
[0057] Sensory unit 120 is configured to cooperate with support 210
using clip 330 and groove 320, which fits the dimensions of support
210. The term "sensory unit" refers to any electronic device
configured to capture real-time images and provide a non-visual
output. Furthermore, as discussed above, sensory unit 120 includes
feedback-outputting unit 340. The term "feedback-outputting unit"
includes any device configured to provide information to a
user.
[0058] In some embodiments, feedback-outputting unit 340 may be
configured to be used by blind persons and persons with low vision.
Accordingly, feedback-outputting unit 340 may be configured to
output nonvisual feedback. The term "feedback" refers to any output
or information provided in response to processing at least one
image in an environment. For example, feedback may include a
descriptor of a branded product, an audible tone, a tactile
response, and/or information previously recorded by user 100.
Furthermore, feedback-outputting unit 340 may comprise appropriate
components for outputting acoustical and tactile feedback that
people with low vision can interpret. For example,
feedback-outputting unit 340 may comprise audio headphones, a
speaker, a bone conduction headphone, interfaces that provide
tactile cues, vibrotactile stimulators, etc.
[0059] As discussed above, sensory unit 120 includes image sensor
350. The term "image sensor" refers to a device capable of
detecting and converting optical signals in the near-infrared,
infrared, visible, and ultraviolet spectrums into electrical
signals. The electric signals may be used to form an image based on
the detected signal. For example, image sensor 350 may be part of a
camera. In some embodiments, when sensory unit 120 is attached to
support 210, image sensor 350 may acquire a set aiming direction
without the need for directional calibration. The set aiming
direction of image sensor 350 may substantially coincide with the
field-of-view of user 100 wearing glasses 105. For example, a
camera associated with image sensor 350 may be installed within
sensory unit 120 in a predetermined angle in a position facing
slightly downwards (e.g., 5-15 degrees from the horizon).
Accordingly, the set aiming direction of image sensor 350 may match
the field-of-view of user 100.
[0060] As shown in FIG. 3A, feedback-outputting unit 340 and image
sensor 350 are included in a housing 310. The term "housing" refers
to any structure that at least partially covers, protects, or
encloses a sensory unit. The housing may be made from one or more
different materials (e.g., plastic or aluminum). In one embodiment,
housing 310 may be designed to engage with a specific pair of
glasses having a specific support (e.g., support 210). In an
alternative embodiment, housing 310 may be designed to engage more
than one pair of glasses, each having a support (e.g., support 210)
mounted thereon. Housing 310 may include a connector for receiving
power from an external mobile-power-source or an internal
mobile-power-source, and for providing an electrical connection to
image sensor 350.
[0061] FIG. 3B is a schematic illustration of sensory unit 120 from
a second viewpoint. As shown in FIG. 3B, housing 310 includes a
U-shaped element. An inner distance "d" between each side of the
U-shaped element is larger than the width of the arm of glasses
105. Additionally, the inner distance "d" between each side of the
U-shaped element is substantially equal to a width of support 210.
The inner distance "d" between each side of the U-shaped element
may allow user 100 to easily attach housing 310 to support 210,
which may be mounted on glasses 105. As illustrated in FIG. 3B,
image sensor 350 is located on one side of the U-shaped element and
feedback-outputting unit 340 is located on another side of the
U-shaped element.
[0062] FIG. 3C is a schematic illustration of sensory unit 120 from
a third viewpoint. The viewpoint shown in FIG. 3C is from a side
orientation of sensory unit 120 and shows the side of the U-shaped
element that includes image sensor 350.
[0063] FIG. 3D is a schematic illustration of sensory unit 120 from
a fourth viewpoint. The viewpoint shown in FIG. 3D is from an
opposite side of the orientation shown in FIG. 3C. FIG. 3D shows
the side of the U-shaped element that includes feedback-outputting
unit 340.
[0064] FIG. 3E is a schematic illustration of the sensory unit
shown in FIG. 3A in an extended position. As shown in FIG. 3E, a
portion of sensory unit 120 is extendable and wire 130 may pass
through a channel of sensory unit 120. This arrangement may allow a
user to adjust the length and the angle of sensory unit 120 without
interfering with the operation of apparatus 110.
[0065] User 100 may adjust the U-shaped element of sensory unit 120
so that feedback-outputting unit 340 is positioned adjacent to the
user's ear or the user's temple. Accordingly, sensory unit 120 may
be adjusted for use with different users who may have different
head sizes. Alternatively, a portion of sensory unit 120 may be
flexible such that the angle of feedback-outputting unit 340 can be
adjusted relative to the user's ear or the user's temple.
[0066] FIG. 4A is a schematic illustration of processing unit 140.
As shown in FIG. 4A, processing unit 140 has a rectangular shape,
which easily fits in a pocket of user 100. Processing unit 140
includes a connector 400 for connecting wire 130 to processing unit
140. Wire 130 may be used to transmit power from processing unit
140 to sensory unit 120, and to transmit data between processing
unit 140 and sensory unit 120. Alternatively, wire 130 may comprise multiple
wires (e.g., a wire dedicated to power transmission and a wire
dedicated to data transmission).
[0067] Processing unit 140 includes a function button 410 for
enabling user 100 to provide input to apparatus 110. Function
button 410 may accept different types of tactile input (e.g., a
tap, a click, a double-click, a long press, a right-to-left slide,
a left-to-right slide). In some embodiments, each type of input may
be associated with a different action. For example, a tap may be
associated with the function of confirming an action, while a
right-to-left slide may be associated with the function of
repeating the last output.
[0068] FIG. 4B is a schematic illustration of processing unit 140
from a second viewpoint. As shown in FIG. 4B, processing unit 140
includes a volume switch 420, a battery pack compartment 430, and a
power port 440. In one embodiment, user 100 may charge apparatus
110 using a charger connectable to power port 440. Alternatively,
user 100 may replace a battery pack (not shown) stored in battery
pack compartment 430.
[0069] FIG. 5A is a block diagram illustrating the components of
apparatus 110 according to a first embodiment. Specifically, FIG.
5A depicts an embodiment in which apparatus 110 comprises sensory
unit 120 and processing unit 140, as discussed in connection with,
for example, FIG. 1. Furthermore, sensory unit 120 may be
physically coupled to support 210.
[0070] As shown in FIG. 5A, sensory unit 120 includes
feedback-outputting unit 340 and image sensor 350. Although one
image sensor is depicted in FIG. 5A, sensory unit 120 may include a
plurality of image sensors (e.g., two image sensors). For example,
in an arrangement with more than one image sensor, each of the
image sensors may face a different direction or be associated
with a different camera (e.g., a wide angle camera, a narrow angle
camera, an IR camera, etc.). In other embodiments (not shown in the
figure) sensory unit 120 may also include buttons and other sensors
such as a microphone and inertial measurement devices.
[0071] As further shown in FIG. 5A, sensory unit 120 is connected
to processing unit 140 via wire 130. Processing unit 140 includes a
mobile power source 510, a memory 520, a wireless transceiver 530,
and a processor 540.
[0072] Processor 540 may constitute any physical device having an
electric circuit that performs a logic operation on input or
inputs. For example, processor 540 may include one or more
integrated circuits, microchips, microcontrollers, microprocessors,
all or part of a central processing unit (CPU), graphics processing
unit (GPU), digital signal processor (DSP), field-programmable gate
array (FPGA), or other circuits suitable for executing instructions
or performing logic operations. The instructions executed by
processor 540 may, for example, be pre-loaded into a memory
integrated with or embedded into processor 540 or may be stored in
a separate memory (e.g., memory 520). Memory 520 may comprise a
Random Access Memory (RAM), a Read-Only Memory (ROM), a hard disk,
an optical disk, a magnetic medium, a flash memory, other
permanent, fixed, or volatile memory, or any other mechanism
capable of storing instructions.
[0073] Although one processor is shown in FIG. 5A, processing unit
140 may include more than one processor. Each processor may have a
similar construction or the processors may be of differing
constructions that are electrically connected or disconnected from
each other. For example, the processors may be separate circuits or
integrated in a single circuit. When more than one processor is
used, the processors may be configured to operate independently or
collaboratively. The processors may be coupled electrically,
magnetically, optically, acoustically, mechanically or by other
means that permit them to interact.
[0074] In some embodiments, processor 540 may change the aiming
direction of image sensor 350 using image data provided from image
sensor 350. For example, processor 540 may recognize that a user is
reading a book and determine that the aiming direction of image
sensor 350 is offset from the text. That is, because the words in
the beginning of each line of text are not fully in view, processor
540 may determine that image sensor 350 is tilted down and to the
right. Responsive thereto, processor 540 may adjust the aiming
direction of image sensor 350.
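As a concrete illustration of this kind of adjustment logic, the following Python sketch estimates, from the bounding boxes of detected text lines, which way the aiming direction might be corrected. The bounding-box format, margins, and return values are illustrative assumptions and are not taken from the disclosure.

def suggest_aiming_correction(text_boxes, image_width, image_height, margin=0.05):
    """Heuristically suggest how to re-aim the sensor so detected text is centered.

    text_boxes: list of (left, top, width, height) tuples for detected text lines,
    assumed to come from an earlier detection step (hypothetical interface).
    Returns a (horizontal, vertical) suggestion such as ("left", "down").
    """
    if not text_boxes:
        return (None, None)

    # Bounding box enclosing all detected text.
    left = min(b[0] for b in text_boxes)
    top = min(b[1] for b in text_boxes)
    right = max(b[0] + b[2] for b in text_boxes)
    bottom = max(b[1] + b[3] for b in text_boxes)

    horizontal = vertical = None
    # Text flush against the left edge suggests the beginnings of lines are cut off.
    if left <= margin * image_width:
        horizontal = "left"
    elif right >= (1 - margin) * image_width:
        horizontal = "right"
    if top <= margin * image_height:
        vertical = "up"
    elif bottom >= (1 - margin) * image_height:
        vertical = "down"
    return (horizontal, vertical)


# Example: text lines hugging the left and bottom edges of a 640x480 frame.
print(suggest_aiming_correction([(2, 400, 300, 20), (5, 440, 280, 35)], 640, 480))
# -> ('left', 'down')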
[0075] Processor 540 may access memory 520. Memory 520 may be
configured to store information specific to user 100. For example,
data for image representations of known individuals, favorite
products, personal items, etc., may be stored in memory 520. In one
embodiment, user 100 may have more than one pair of glasses, with
each pair of glasses having support 210 mounted thereon.
Accordingly, memory 520 may store information (e.g., personal
settings) associated with each pair of glasses. For example, a user
may have different preferences when wearing sunglasses than when
wearing reading glasses.
[0076] As shown in FIG. 5A, processing unit 140 includes mobile
power source 510. Mobile power source 510 may be configured to
power processing unit 140 and/or sensory unit 120. The term "mobile
power source" includes any device capable of providing electrical
power, which can be easily carried by a hand (e.g., the total
weight of mobile power source 510 may be less than a pound). Thus,
the mobility of the power source enables user 100 to use apparatus
110 in a variety of situations. For example, mobile power source
510 may include one or more batteries (e.g., nickel-cadmium
batteries, nickel-metal hydride batteries, and lithium-ion
batteries) or any other type of electrical power supply. In some
embodiments, mobile power source 510 may be rechargeable and
contained within a casing that holds processing unit 140. In other
embodiments, mobile power source 510 may include one or more energy
harvesting devices for converting ambient energy into electrical
energy (e.g., portable solar power units, human vibration units,
etc.).
[0077] Apparatus 110 may operate in a low-power-consumption mode
and in a processing-power-consumption mode. For example, mobile
power source 510 can produce five hours of
processing-power-consumption mode and fifteen hours of
low-power-consumption mode. Accordingly, different power
consumption modes may allow mobile power source 510 to produce
sufficient power for powering processing unit 140 for various time
periods (e.g., more than two hours, more than four hours, more than
ten hours, etc.).
[0078] Mobile power source 510 may power one or more wireless
transceivers (e.g., wireless transceiver 530 in FIG. 5A). The term
"wireless transceiver" refers to any device configured to exchange
transmissions over an air interface by use of radio frequency,
infrared frequency, magnetic field, or electric field. Wireless
transceiver 530 may use any known standard to transmit and/or
receive data (e.g., Wi-Fi, Bluetooth®, Bluetooth Smart,
802.15.4, or ZigBee). In some embodiments, wireless transceiver 530
may transmit data (e.g., raw image data or audio data) from image
sensor 350 to processing unit 140, or wireless transceiver 530 may
transmit data from processing unit 140 to feedback-outputting unit
340.
[0079] In another embodiment, wireless transceiver 530 may
communicate with a different device (e.g., a hearing aid, the
user's smartphone, or any wirelessly controlled device) in the
environment of user 100. For example, wireless transceiver 530 may
communicate with an elevator using a Bluetooth® controller. In
such an arrangement, apparatus 110 may recognize that user 100 is
approaching an elevator and call the elevator, thereby minimizing
wait time. In another example, wireless transceiver 530 may
communicate with a smart TV. In such an arrangement, apparatus 110
may recognize that user 100 is watching television and identify
specific hand movements as commands for the smart TV (e.g.,
switching channels). In yet another example, wireless transceiver
530 may communicate with a virtual cane. A virtual cane is any
device that uses a laser beam or ultrasound waves to determine the
distance from user 100 to an object.
[0080] FIG. 5B is a block diagram illustrating the components of
apparatus 110 according to a second embodiment. In FIG. 5B, similar
to the arrangement shown in FIG. 5A, support 210 is used to couple
sensory unit 120 to a pair of glasses. However, in the embodiment
shown in FIG. 5B, sensory unit 120 and processing unit 140
communicate wirelessly. For example, wireless transceiver 530A can
transmit image data to processing unit 140 and receive information
to be outputted via feedback-outputting unit 340.
[0081] In this embodiment, sensory unit 120 includes
feedback-outputting unit 340, mobile power source 510A, wireless
transceiver 530A, and image sensor 350. Mobile power source 510A is
contained within sensory unit 120. As further shown in FIG. 5B,
processing unit 140 includes wireless transceiver 530B, processor
540, mobile power source 510B, and memory 520.
[0082] FIG. 5C is a block diagram illustrating the components of
apparatus 110 according to a third embodiment. In particular, FIG.
5C depicts an embodiment in which support 210 includes image sensor
350 and connector 550B. In this embodiment, sensory unit 120
provides functionality for processing data and, therefore, a
separate processing unit is not needed in such a configuration.
[0083] As shown in FIG. 5C, sensory unit 120 includes processor
540, connector 550A, mobile power source 510, memory 520, and
wireless transceiver 530. In this embodiment, apparatus 110 does
not include a feedback-outputting unit. Accordingly, wireless
transceiver 530 may communicate directly with a hearing aid (e.g.,
a Bluetooth® hearing aid). In addition, in this embodiment,
image sensor 350 is included in support 210. Accordingly, when
support 210 is initially mounted on glasses 105, image sensor 350
may acquire a set aiming direction. For example, a camera
associated with image sensor 350 may be installed within support
210 in a predetermined angle in a position facing slightly
downwards (e.g., 7-12 degrees from the horizon). Furthermore,
connector 550A and connector 550B may allow data and power to be
transmitted between support 210 and sensory unit 120.
[0084] FIG. 5D is a block diagram illustrating the components of
apparatus 110 according to a fourth embodiment. In FIG. 5D, sensory
unit 120 couples directly to a pair of glasses without the need of
a support. In this embodiment, sensory unit 120 includes image
sensor 350, feedback-outputting unit 340, processor 540, and memory
520. As shown in FIG. 5D, sensory unit 120 is connected via a wire
130 to processing unit 140. Additionally, in this embodiment,
processing unit 140 includes mobile power source 510 and wireless
transceiver 530.
[0085] As will be appreciated by a person skilled in the art having
the benefit of this disclosure, numerous variations and/or
modifications may be made to the disclosed embodiments. Not all
components are essential for the operation of apparatus 110. Any
component may be located in any appropriate part of apparatus 110
and the components may be rearranged into a variety of
configurations while providing the functionality of the disclosed
embodiments. Therefore, the foregoing configurations are examples
and, regardless of the configurations discussed above, apparatus
110 can assist persons who have low vision with their everyday
activities in numerous ways.
[0086] One way apparatus 110 can assist persons who have low vision
is by identifying relevant objects in an environment. For example,
in some embodiments, processor 540 may execute one or more computer
algorithms and/or signal-processing techniques to find objects
relevant to user 100 in image data captured by sensory unit 120.
The term "object" refers to any physical object, person, text, or
surroundings in an environment.
[0087] In one embodiment, apparatus 110 can perform a hierarchical
object identification process. In a hierarchical object
identification process, apparatus 110 can identify objects from
different categories (e.g., spatial guidance, warning of risks,
objects to be identified, text to be read, scene identification,
and text in the wild) of image data. For example, apparatus 110 can
perform a first search in the image data to identify objects from a
first category, and after initiating the first search, execute a
second search in the image data to identify objects from a second
category.
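A minimal Python sketch of such a multi-stage search is shown below; the category names follow the examples above, while the detector interface and the rule for short-circuiting are illustrative assumptions.

def hierarchical_identify(image, detectors):
    """Run category detectors in priority order and collect results per category.

    detectors: ordered list of (category_name, detector) pairs, where each
    detector maps an image to a list of identified objects.
    """
    results = {}
    for category, detect in detectors:
        found = detect(image)
        results[category] = found
        # One possible policy: stop early when a hazard is found so it can be
        # announced before lower-priority searches continue.
        if category == "warning of risks" and found:
            break
    return results


# Dummy detectors standing in for real image-processing routines.
detectors = [
    ("warning of risks", lambda img: []),
    ("text to be read", lambda img: ["paragraph"]),
    ("objects to be identified", lambda img: ["cup"]),
]
print(hierarchical_identify(object(), detectors))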
[0088] In another embodiment, apparatus 110 can provide information
associated with one or more of the objects identified in image
data. For example, apparatus 110 can provide information such as
the name of an individual standing in front of user 100. The
information may be retrieved from a dynamic database stored in
memory 520. If the database does not contain specific information
associated with the object, apparatus 110 may provide user 100 with
nonvisual feedback indicating that a search was made, but the
requested information was not found in the database. Alternatively,
apparatus 110 may use wireless transceiver 530 to search for and
retrieve information associated with the object from a remote
database (e.g., over a cellular network or Wi-Fi connection to the
Internet).
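One way such a lookup with a remote fallback might look in Python is sketched below; the key format, the database contents, and the remote-lookup interface are hypothetical.

def describe_object(object_key, local_db, remote_lookup=None):
    """Return a descriptor for an identified object, preferring the local database.

    local_db: dict mapping object keys to descriptors (e.g., a person's name).
    remote_lookup: optional callable standing in for a query made over a
    cellular or Wi-Fi connection when the local database has no entry.
    """
    if object_key in local_db:
        return local_db[object_key]
    if remote_lookup is not None:
        found = remote_lookup(object_key)
        if found:
            return found
    # Mirrors the nonvisual "searched but not found" feedback described above.
    return "no information found for this object"


local_db = {"face:0042": "Dana, your neighbor"}
print(describe_object("face:0042", local_db))
print(describe_object("face:0099", local_db, remote_lookup=lambda key: None))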
[0089] Another way apparatus 110 can assist persons who have low
vision is by performing a continuous action that relates to an
object in an environment. A continuous action may involve providing
continuous feedback regarding the object. For example, apparatus
110 can provide continuous feedback associated with an object
identified within a field-of-view of image sensor 350, and suspend
the continuous feedback when the object moves outside the
field-of-view of image sensor 350. Examples of continuous feedback
may include audibly reading text, playing a media file, etc. In
addition, in some embodiments, apparatus 110 may provide continuous
feedback to user 100 based on information derived from a discrete
image or based on information derived from one or more images
captured by sensory unit 120 from the environment of user 100.
[0090] Another type of continuous action includes monitoring the
state of an object in an environment. For example, in one
embodiment, apparatus 110 can track an object as long as the object
remains substantially within the field-of-view of image sensor 350.
Furthermore, before providing user 100 with feedback, apparatus 110
may determine whether the object is likely to change its state. If
apparatus 110 determines that the object is unlikely to change its
state, apparatus 110 may provide a first feedback to user 100. For
example, if user 100 points to a road sign, apparatus 110 may
provide a first feedback that comprises a descriptor of the road
sign. However, if apparatus 110 determines that the object is
likely to change its state, apparatus 110 may provide a second
feedback to user 100 after the object has changed its state. For
example, if user 100 points at a traffic light, the first feedback
may comprise a descriptor of the current state of the traffic light
(e.g., the traffic light is red) and the second feedback may
comprise a descriptor indicating that the state of traffic light
has changed (i.e., the traffic light is now green).
[0091] Apparatus 110 may also determine that an object that is
expected to change its state is not functioning and provide
appropriate feedback. For example, apparatus 110 may provide a
descriptor indicating that a traffic light is broken.
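The traffic-light example can be expressed as a small monitoring loop. The sketch below assumes a state-reading callable and an audible-feedback callable; the polling interval, the timeout, and the wording of the feedback are illustrative.

import time

def monitor_and_report(read_state, likely_to_change, speak, poll_seconds=0.5, timeout=30.0):
    """Report on an object, waiting for a state change when one is expected.

    read_state: callable returning the object's current state (e.g., "red").
    likely_to_change: whether the object is expected to change state.
    speak: callable used for audible feedback.
    """
    first = read_state()
    speak(f"The object is {first}.")                      # first feedback
    if not likely_to_change:
        return
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        current = read_state()
        if current != first:
            speak(f"The object is now {current}.")        # second feedback
            return
        time.sleep(poll_seconds)
    speak("The object does not appear to be changing.")   # e.g., a broken light


# Simulated traffic light that turns green on the third reading.
states = iter(["red", "red", "green"])
monitor_and_report(lambda: next(states, "green"), True, print,
                   poll_seconds=0.0, timeout=1.0)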
[0092] Apparatus 110 can also assist persons who have low vision by
making intelligent decisions regarding a person's intentions.
Apparatus 110 can make these decisions by understanding the context
of a situation. Accordingly, disclosed embodiments may retrieve
contextual information from captured image data and adjust the
operation of apparatus 110 based on at least the contextual
information. The term "contextual information" (or "context")
refers to any information having a direct or indirect relationship
with an object in an environment. In some embodiments, apparatus
110 may retrieve different types of contextual information from
captured image data. One type of contextual information is the time
and/or the place that an image of the object was captured. Another
example of a type of contextual information is the meaning of text
written on the object. Other examples of types of contextual
information include the identity of an object, the type of the
object, the background of the object, the location of the object in
the frame, the physical location of the user relative to the
object, etc.
[0093] In an embodiment, the type of contextual information that is
used to adjust the operation of apparatus 110 may vary based on
objects identified in the image data and/or the particular user who
wears apparatus 110. For example, when apparatus 110 identifies a
package of cookies as an object, apparatus 110 may use the location
of the package (i.e., at home or at the grocery store) to determine
whether or not to read the list of ingredients aloud.
Alternatively, when apparatus 110 identifies a signboard
identifying arrival times for trains as an object, the location of
the sign may not be relevant, but the time that the image was
captured may affect the output. For example, if a train is arriving
soon, apparatus 110 may read aloud the information regarding the
coming train. Accordingly, apparatus 110 may provide different
responses depending on contextual information.
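The cookie-package and train-signboard examples can be captured with a few context-dependent rules, as in the Python sketch below; the object and context dictionaries, field names, and thresholds are illustrative assumptions.

def choose_response(obj, context):
    """Pick what to announce for an object, using contextual information.

    obj: dict with at least a "type" key; context: dict with keys such as
    "location" and "time" (minutes). The rules below are illustrative only.
    """
    if obj["type"] == "food package":
        # Location matters: ingredients are useful at the store, less so at home.
        if context.get("location") == "grocery store":
            return f"Ingredients: {obj.get('ingredients', 'unknown')}"
        return f"This is {obj.get('name', 'a food package')}."
    if obj["type"] == "train signboard":
        # Time matters: only announce trains that are arriving soon.
        soon = [t for t in obj.get("arrivals", []) if t - context["time"] <= 10]
        if soon:
            return f"A train arrives in {soon[0] - context['time']} minutes."
        return "No train is arriving soon."
    return f"This is {obj.get('name', 'an object')}."


cookies = {"type": "food package", "name": "cookies", "ingredients": "wheat, sugar"}
print(choose_response(cookies, {"location": "grocery store"}))
print(choose_response({"type": "train signboard", "arrivals": [8, 25]}, {"time": 0}))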
[0094] Apparatus 110 may use contextual information to determine a
processing action to execute or an image resolution of image sensor
350. For example, after identifying the existence of an object,
contextual information may be used to determine if the identity of
the object should be announced, if text written on the object
should be audibly read, if the state of the object should be
monitored, or if an image representation of the object should be
saved. In some embodiments, apparatus 110 may monitor a plurality
of images and obtain contextual information from specific portions
of an environment. For example, motionless portions of an
environment may provide background information that can be used to
identify moving objects in the foreground.
[0095] Yet another way apparatus 110 can assist persons who have
low vision is by automatically carrying out processing actions
after identifying specific objects and/or hand gestures in the
field-of-view of image sensor 350. For example, processor 540 may
execute several actions after identifying one or more triggers in
image data captured by apparatus 110. The term "trigger" includes
any information in the image data that may cause apparatus 110 to
execute an action. For example, apparatus 110 may detect as a
trigger a finger of user 100 pointing to one or more coins. The
detection of this gesture may cause apparatus 110 to calculate a
sum of the value of the one or more coins. As another example of a
trigger, an appearance of an individual wearing a specific uniform
(e.g., a policeman, a fireman, a nurse) in the field-of-view of
image sensor 350 may cause apparatus 110 to make an audible
indication that this particular individual is nearby.
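For the coin-counting example, once a pointing-finger trigger and coin detections are available from earlier processing steps, the summation itself is simple, as sketched below; the coordinate format, the fixed radius, and coin values expressed in cents are illustrative assumptions.

def sum_pointed_coins(finger_tip, coins, radius=80):
    """Sum the value (in cents) of detected coins near a pointing-finger trigger.

    finger_tip: (x, y) image coordinates of the fingertip.
    coins: list of dicts with "center" (x, y) and "value" keys, assumed to come
    from an earlier detection step.
    """
    fx, fy = finger_tip
    total = 0
    for coin in coins:
        cx, cy = coin["center"]
        if (cx - fx) ** 2 + (cy - fy) ** 2 <= radius ** 2:
            total += coin["value"]
    return total


coins = [
    {"center": (100, 120), "value": 25},
    {"center": (130, 140), "value": 10},
    {"center": (400, 300), "value": 100},  # outside the pointed-to region
]
print(sum_pointed_coins((110, 125), coins))  # 35 cents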
[0096] In some embodiments, the trigger identified in the image
data may constitute a hand-related trigger. The term "hand-related
trigger" refers to a gesture made by, for example, the user's hand,
the user's finger, or any pointed object that user 100 can hold
(e.g., a cane, a wand, a stick, a rod, etc.).
[0097] In other embodiments, the trigger identified in the image
data may include an erratic movement of an object caused by user
100. For example, unusual movement of an object can trigger
apparatus 110 to take a picture of the object. In addition, each
type of trigger may be associated with a different action. For
example, when user 100 points to text, apparatus 110 may audibly
read the text. As another example, when user 100 erratically moves
an object, apparatus 110 may audibly identify the object or store
the representation of that object for later identification.
[0098] Apparatus 110 may use the same trigger to execute several
actions. For example, when user 100 points to text, apparatus 110
may audibly read the text. As another example, when user 100 points
to a traffic light, apparatus 110 may monitor the state of the
traffic light. As yet another example, when user 100 points to a
branded product, apparatus 110 may audibly identify the branded
product. Furthermore, in embodiments in which the same trigger is
used for executing several actions, apparatus 110 may determine
which action to execute based on contextual information retrieved
from the image data. In the examples above, wherein the same
trigger (pointing to an object) is used, apparatus 110 may use the
type of the object (text, a traffic light, a branded product) to
determine which action to execute.
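The dispatch on object type described in this paragraph might look like the following Python sketch; the object fields and the phrasing of the feedback are illustrative.

def act_on_pointing_trigger(obj, speak):
    """Choose an action for a pointing trigger based on the pointed-to object's type."""
    actions = {
        "text": lambda o: speak(f"Reading: {o['content']}"),
        "traffic light": lambda o: speak(f"Monitoring the traffic light; it is {o['state']}."),
        "branded product": lambda o: speak(f"This is {o['brand']}."),
    }
    handler = actions.get(obj["type"], lambda o: speak("Object not recognized."))
    handler(obj)


act_on_pointing_trigger({"type": "text", "content": "Exit on the left"}, print)
act_on_pointing_trigger({"type": "traffic light", "state": "red"}, print)
act_on_pointing_trigger({"type": "branded product", "brand": "Acme Cereal"}, print)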
[0099] To assist user 100 throughout his or her daily activities,
apparatus 110 may follow several procedures for saving processing
resources and prolonging battery life. For example, apparatus 110
can use several image resolutions to form images. Higher image
resolution provides more detailed images, but requires more
processing resources. Lower image resolution provides less detailed
images, but saves processing resources. Therefore, to prolong
battery life, apparatus 110 may have rules for capturing and
processing high resolution images under certain circumstances, and
rules for capturing and processing low resolution images when
possible. For example, apparatus 110 may capture higher resolution
images when performing Optical Character Recognition (OCR), and
capture low resolution images when searching for a trigger.
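A rule of the kind described here can be as simple as the following sketch, which selects a capture resolution per task; the task names, megapixel values, and battery threshold are illustrative placeholders, not values from the disclosure.

def choose_capture_settings(task, battery_fraction):
    """Pick an image resolution (in megapixels) for the next capture.

    The idea is only that OCR favors detail while trigger scanning favors
    battery life; all numbers here are placeholders.
    """
    if task == "ocr":
        return 5.0 if battery_fraction > 0.2 else 2.0
    if task == "trigger_scan":
        return 0.5
    return 1.0  # default, e.g., layout analysis


for task in ("ocr", "trigger_scan", "layout_analysis"):
    print(task, choose_capture_settings(task, battery_fraction=0.8))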
[0100] One of the common challenges persons with low vision face on
a daily basis is reading. Apparatus 110 can assist persons who have
low vision by audibly reading text that is present in the
environment of user 100. Apparatus 110 may capture an image that includes text
using sensory unit 120. After capturing the image, to save
resources and to process portions of the text that are relevant to
user 100, apparatus 110 may initially perform a layout analysis on
the text. The term "layout analysis" refers to any process of
identifying regions in an image that include text. For example,
layout analysis may detect paragraphs, blocks, zones, logos,
titles, captions, footnotes, etc.
[0101] In one embodiment, apparatus 110 can select which parts of
the image to process, thereby saving processing resources and
battery life. For example, apparatus 110 can perform a layout
analysis on image data taken at a resolution of one megapixel to
identify specific areas of interest within the text. Subsequently,
apparatus 110 can instruct image sensor 350 to capture image data
at a resolution of five megapixels to recognize the text in the
identified areas. In other embodiments, the layout analysis may
include initiating at least a partial OCR process on the text.
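The two-pass flow (layout analysis at one megapixel, OCR at five megapixels) can be sketched as follows; capture, analyze_layout, and recognize_text are assumed interfaces standing in for the real camera and text-recognition routines.

def read_areas_of_interest(capture, analyze_layout, recognize_text):
    """Two-pass reading: cheap layout analysis first, detailed OCR second.

    capture(megapixels) returns an image; analyze_layout(image) returns regions
    of interest (paragraphs, titles, captions, ...); recognize_text(image, region)
    returns the recognized string for one region.
    """
    low_res = capture(1.0)                      # one-megapixel pass for layout
    regions = analyze_layout(low_res)
    if not regions:
        return []
    high_res = capture(5.0)                     # five-megapixel pass for OCR
    return [recognize_text(high_res, region) for region in regions]


# Dummy stand-ins so the sketch runs end to end.
texts = read_areas_of_interest(
    capture=lambda mp: {"megapixels": mp},
    analyze_layout=lambda image: ["title", "first paragraph"],
    recognize_text=lambda image, region: f"<text of {region} at {image['megapixels']} MP>",
)
print(texts)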
[0102] In another embodiment, apparatus 110 may detect a trigger
that identifies a portion of text that is located a distance from a
level break in the text. A level break in the text represents any
discontinuity of the text (e.g., a beginning of a sentence, a
beginning of a paragraph, a beginning of a page, etc.). Detecting
this trigger may cause apparatus 110 to read the text aloud from
the level break associated with the trigger. For example, user 100
can point to a specific paragraph in a newspaper and apparatus 110
may audibly read the text from the beginning of the paragraph
instead of from the beginning of the page.
[0103] In addition, apparatus 110 may identify contextual
information associated with text and cause the audible presentation
of one portion of the text and exclude other portions of the text.
For example, when pointing to a food product, apparatus 110 may
audibly identify the calorie value of the food product. In other
embodiments, contextual information may enable apparatus 110 to
construct a specific feedback based on at least data stored in
memory 520. For example, the specific feedback may assist user 100
to fill out a form (e.g., by providing user 100 audible
instructions and details relevant to a form in the user's
field-of-view).
[0104] To improve the audible reading capabilities of apparatus
110, processor 540 may use OCR techniques. The term "optical
character recognition" includes any method executable by a
processor to retrieve machine-editable text from images of text,
pictures, graphics, etc. OCR techniques and other document
recognition technology typically use a pattern matching process to
compare the parts of an image to sample characters on a
pixel-by-pixel basis. This process, however, does not work well
when encountering new fonts, and when the image is not sharp.
Accordingly, apparatus 110 may use an OCR technique that compares a
plurality of sets of image regions that are proximate to each
other. Apparatus 110 may recognize characters in the image based on
statistics related to the plurality of sets of image regions. By
using the statistics of the plurality of sets of image regions,
apparatus 110 can recognize small font characters defined by more
than four pixels e.g., six or more pixels. In addition, apparatus
110 may use several images from different perspectives to recognize
text on a curved surface. In another embodiment, apparatus 110 can
identify in image data an existence of printed information
associated with a system command stored in a database and execute
the system command thereafter. Examples of a system command
include: "enter training mode," "enter airplane mode," "backup
content," "update operating system," etc.
[0105] The disclosed OCR techniques may be implemented on various
devices and systems and are not limited to use with apparatus 110.
For example, the disclosed OCR techniques provide accelerated
machine reading of text. In one embodiment, a system is provided
for audibly presenting a first part of a text from an image, while
recognizing a subsequent part of the text. Accordingly, the
subsequent part may be presented immediately upon completion of the
presentation of the first part, resulting in a continuous audible
presentation of standard text in less than two seconds after
initiating OCR.
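By way of illustration only, the following Python sketch shows one way such overlapping of recognition and presentation might be arranged: one worker recognizes text blocks while another audibly presents the blocks already recognized, so playback of the first part can begin before OCR of the subsequent parts has finished. The functions ocr_block and speak are hypothetical placeholders for the OCR engine and the audible output of the device.

    # Minimal sketch of pipelined OCR and audible presentation.
    import queue
    import threading

    def ocr_block(block):
        return f"recognized text of {block}"       # placeholder OCR result

    def speak(text):
        print("SPEAKING:", text)                   # placeholder audible output

    def recognize_all(blocks, out_queue):
        for block in blocks:
            out_queue.put(ocr_block(block))        # hand over each part as soon as it is ready
        out_queue.put(None)                        # sentinel: no more parts

    def present_all(in_queue):
        while (text := in_queue.get()) is not None:
            speak(text)                            # present part i while part i+1 is recognized

    parts = queue.Queue()
    worker = threading.Thread(target=recognize_all, args=(["block 1", "block 2"], parts))
    worker.start()
    present_all(parts)
    worker.join()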
[0106] As is evident from the foregoing, apparatus 110 may provide
a wide range of functionality. More specifically, in embodiments
consistent with the present disclosure, apparatus 110 may capture
image data that includes textual and non-textual information
disposed within a field-of-view of sensory unit 120, identify one
or more system commands associated with the textual information and
non-textual information, and subsequently execute the one or more
system commands automatically or in response to an input received
from a user of apparatus 110.
[0107] In certain aspects, "textual information" consistent with
the disclosed embodiments may include, but is not limited to,
printed text, handwritten text, coded text, text projected onto a
corresponding surface, text displayed to the user through a
corresponding display screen or touchscreen, and any additional or
alternate textual information appropriate to the user and to
apparatus 110. Further, the "non-textual information" may include,
but is not limited to, images of various triggers (e.g., a human
appendage, a cane, or a pointer), images of physical objects,
images of persons, images of surroundings, and images of other
non-textual objects disposed within the field-of-view of sensory
unit 120.
[0108] In certain aspects, apparatus 110 may perform an OCR process
on the textual information within the captured image data, and may
subsequently identify the one or more system commands based on
portions of the recognized text. In other aspects, apparatus 110
may detect elements of non-textual information within the captured
image data, and may initiate the identification of the one or more
system commands in response to the detected non-textual
information.
[0109] In an embodiment, apparatus 110 may include a memory (e.g.,
memory 520) configured to store one or more applications and
application modules that, when executed by a processor (e.g.,
processor 540), enable apparatus 110 to identify and execute system
commands based on textual and non-textual information within
captured image data. In certain aspects, memory 520 may also be
configured to store information that identifies the system commands
and associates the system commands with elements of textual
information (e.g., characters, words, and phrases),
elements of non-textual information (e.g., images of triggers,
physical objects, and persons), other system commands, and other
events. FIG. 6 illustrates an exemplary structure of memory 520, in
accordance with disclosed embodiments.
[0110] In FIG. 6, memory 520 may be configured to store an image
data storage module 602, an image processing module 604, and an
image database 612. In one embodiment, image data storage module
602, upon execution by processor 540, may enable processor 540 to
receive data corresponding to one or more images captured by
sensory unit 120, and to store the captured image data within image
database 612. In some aspects, the captured image data may include
textual information (e.g., printed, handwritten, coded, projected,
and/or displayed text) and non-textual information (e.g., images of
physical objects, persons, and/or triggers), and processor 540 may
store the image data in image database 612 with additional data
specifying a time and/or date at which sensory unit 120 captured
the image data. In additional embodiments, image data storage
module 602 may further enable processor 540 to configure wireless
transceiver 530 to transmit the captured image data to one or more
devices (e.g., an external data repository or a user's mobile
device) in communication with apparatus 110 across a corresponding
wired or wireless network.
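By way of illustration only, the following Python sketch shows the kind of record image data storage module 602 might produce: each captured frame stored together with the time and date of capture. The in-memory list standing in for image database 612 is a hypothetical simplification.

    # Minimal sketch of storing captured image data with a capture timestamp.
    from datetime import datetime

    image_database_612 = []

    def store_captured_image(image_bytes, contains_text=None):
        record = {
            "image": image_bytes,
            "captured_at": datetime.now().isoformat(),   # time/date of capture
            "contains_text": contains_text,              # optional annotation
        }
        image_database_612.append(record)
        return record

    store_captured_image(b"\x89PNG...", contains_text=True)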
[0111] In an embodiment, image processing module 604, upon
execution by processor 540, may enable processor 540 to process the
captured image data and identify elements of textual information
within the captured image data. In certain aspects, textual
information consistent with the disclosed embodiments may include,
but is not limited to, printed text (e.g., text disposed on a page
of a newspaper, magazine, book), handwritten text, coded text, text
displayed to a user through a display unit of a corresponding
device (e.g., an electronic book, a television, a web page, or a
screen of a mobile application), text disposed on a flat or curved
surface of an object within a field-of-view of apparatus 110 (e.g.,
a billboard sign, a street sign, text displayed on product
packaging), text projected onto a corresponding screen (e.g.,
during presentation of a movie at a theater), and any additional or
alternate text disposed within images captured by sensory unit
120.
[0112] In certain aspects, processor 540 may perform a layout
analysis of the image data to identify textual information within
the captured image data. By way of example, processor 540 may
perform a layout analysis to detect paragraphs of text, blocks of
text, zones and/or regions that include text, logos, titles,
captions, footnotes, and any additional or alternate portions of
the image data that includes printed, handwritten, displayed,
coded, and/or projected text.
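By way of illustration only, the following Python sketch shows one simple layout-analysis technique among many: binarizing a grayscale image and using a horizontal projection profile to find bands of rows that contain "ink", each band approximating a block or paragraph of text. This is not the specific analysis performed by apparatus 110; it merely illustrates region detection under stated assumptions.

    # Minimal sketch of layout analysis via a horizontal projection profile.
    import numpy as np

    def find_text_bands(gray_image, ink_threshold=128, min_ink_pixels=5):
        ink = (np.asarray(gray_image) < ink_threshold)     # dark pixels count as ink
        row_profile = ink.sum(axis=1)                       # ink pixels per image row
        bands, start = [], None
        for y, count in enumerate(row_profile):
            if count >= min_ink_pixels and start is None:
                start = y                                    # a text band begins
            elif count < min_ink_pixels and start is not None:
                bands.append((start, y))                     # the band ends
                start = None
        if start is not None:
            bands.append((start, len(row_profile)))
        return bands                                         # list of (top_row, bottom_row)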
[0113] Referring back to FIG. 6, memory 520 may also include an
optical character recognition (OCR) module 606 that, upon execution
by processor 540, enables processor 540 to perform one or more OCR
processes on elements of textual information disposed within the
image data. In one embodiment, processor 540 may execute image
processing module 604 to identify portions of the captured image
data that include textual information, and further, may execute OCR
module 606 to retrieve machine-readable text from the textual
information.
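By way of illustration only, the following Python sketch shows an OCR step applied only to regions flagged as textual by the layout analysis. It assumes the Pillow and pytesseract packages (and a Tesseract installation) are available; the disclosure does not prescribe any particular OCR engine, so this is merely one plausible choice.

    # Minimal sketch: retrieve machine-readable text from textual regions only.
    from PIL import Image
    import pytesseract

    def retrieve_machine_readable_text(image_path, text_regions):
        image = Image.open(image_path)
        texts = []
        for left, top, right, bottom in text_regions:
            crop = image.crop((left, top, right, bottom))   # process only textual regions
            texts.append(pytesseract.image_to_string(crop))
        return texts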
[0114] Memory 520 may also be configured to store a system command
identification module 608, a system command execution module 610,
and a system command database 614. In one embodiment, system
command database 614 may store linking information that associates
one or more system commands with corresponding portions of captured
image data. In some aspects, a system command may include one or
more instructions that, when executed by processor 540, cause
processor 540 to perform one or more actions or processes
consistent with an operating system of apparatus 110. Further, in
one aspect, linking information may associate a particular system
command with an element of recognized text (e.g., a word, a phrase,
or a paragraph), an element of non-textual information (e.g., an
image of a physical object, a person, or a trigger), combinations
thereof, and any additional or alternate indicia of linkages
between captured image data and system commands.
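By way of illustration only, the following Python sketch shows the kind of linking information system command database 614 might hold: each entry associates a system command with elements of recognized text and/or non-textual cues. The concrete schema and cue strings (drawn from the examples elsewhere in this disclosure) are hypothetical.

    # Minimal sketch of linking information and a lookup over it.
    SYSTEM_COMMAND_DATABASE_614 = [
        {"command": "enter_airplane_mode",
         "text_cues": ["welcome aboard", "flights"],
         "image_cues": ["seat_belt_demonstration"]},
        {"command": "enter_sleep_mode",
         "text_cues": ["turn off your phones"],
         "image_cues": []},
        {"command": "update_operating_system",
         "text_cues": ["upgrade to the new os"],
         "image_cues": []},
    ]

    def commands_linked_to(recognized_text, detected_images):
        text = recognized_text.lower()
        matches = []
        for entry in SYSTEM_COMMAND_DATABASE_614:
            if any(cue in text for cue in entry["text_cues"]) or \
               any(cue in detected_images for cue in entry["image_cues"]):
                matches.append(entry["command"])
        return matches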
[0115] In an embodiment, system command identification module 608
may, upon execution by processor 540, enable processor 540 to
access linking information stored within system command database
614, and to identify one or more system commands associated with
portions of the captured image data based on the linking
information. For example, processor 540 may leverage the accessed
linking information to determine that a portion of machine-readable
text corresponds to a system command executable by processor 540.
Additionally or alternatively, system command identification module
608 may enable processor 540 to identify a system command
associated with a particular image within the captured image data,
and further, a system command associated with a particular trigger
in the captured image data, taken alone or in conjunction with
machine-readable text.
[0116] System command execution module 610 may, upon execution by
processor 540, enable processor 540 to execute the identified
system command and perform one or more actions and processes
consistent with the operating system of apparatus 110. In one
instance, the identified system command may enable processor 540 to
modify an operational state of apparatus 110. For example, upon
execution of the identified system command by processor 540,
apparatus 110 may function in accordance with a "training" mode, a
"sleep" mode, or an "airplane" mode, or other mode of operation
consistent with the captured image data and the operating system of
apparatus 110.
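By way of illustration only, the following Python sketch shows a command-execution step that modifies the operational state of the device, in the spirit of system command execution module 610. The Apparatus class and its mode names are hypothetical; the operating-system interface is left open by the disclosure.

    # Minimal sketch of executing a system command that changes the device mode.
    class Apparatus:
        def __init__(self):
            self.mode = "normal"

        def execute_system_command(self, command):
            mode_commands = {
                "enter_airplane_mode": "airplane",
                "enter_sleep_mode": "sleep",
                "enter_training_mode": "training",
            }
            if command in mode_commands:
                self.mode = mode_commands[command]       # modify the operational state
            else:
                raise ValueError(f"unsupported system command: {command}")

    apparatus_110 = Apparatus()
    apparatus_110.execute_system_command("enter_airplane_mode")
    print(apparatus_110.mode)                             # -> airplane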
[0117] In other aspects, the identified system command may enable
processor 540 to modify a configuration of apparatus 110. By way of
example, upon execution of the identified system command, processor
540 may modify a configuration of one or more of sensory unit 120
and processing unit 140. Further, in additional embodiments,
processor 540 may execute the identified system command to modify a
configuration of an external device in communication with apparatus
110 across a corresponding wired or wireless communications network
(e.g., a mobile telephone, a smart phone, or a tablet
computer).
[0118] Additionally or alternatively, the identified system command
may enable processor 540 to execute one or more applications and/or
perform one or more actions supported by an operating system of
apparatus 110. By way of example, processor 540 may initiate or
terminate a recording of audio or video content, download a stored
digital image (e.g., to image database 612), transmit a stored
digital image to an external device in communication with apparatus
110, update or restart the operating system of apparatus 110, and
establish, modify, or erase a user customization of apparatus
110.
[0119] Further, in an embodiment, the identified system command may
include a plurality of steps associated with corresponding system
sub-commands, and processor 540 may be configured to execute
sequentially the corresponding system sub-commands upon execution
of system command execution module 610. For example, the identified
system command may update the operating system of apparatus 110,
and the corresponding sub-commands may enable processor 540 to
obtain an updated version of the operating system, replace the
existing version of the operating system with the updated version,
and restart apparatus 110 upon completion of the replacement.
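By way of illustration only, the following Python sketch shows a system command composed of sequentially executed sub-commands, mirroring the operating-system update example above. Each helper is a hypothetical placeholder for a step the device would actually perform.

    # Minimal sketch of a composite command executed as sequential sub-commands.
    def obtain_updated_os():
        print("downloading updated operating system")

    def replace_existing_os():
        print("replacing existing operating system")

    def restart_apparatus():
        print("restarting apparatus")

    UPDATE_OS_SUBCOMMANDS = [obtain_updated_os, replace_existing_os, restart_apparatus]

    def execute_composite_command(sub_commands):
        for sub_command in sub_commands:
            sub_command()                                 # executed strictly in sequence

    execute_composite_command(UPDATE_OS_SUBCOMMANDS)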
[0120] In further embodiments, the identified system command may
cause processor 540 to perform an action on one or more files stored
locally by apparatus 110, and additionally or alternatively, on one
or more files stored within a data repository or external device
accessible to apparatus 110 over a corresponding communications
network. By way of example, upon execution of the identified system
command, processor 540 may store image, video, and/or audio files
in memory 520, overwrite one or more files stored within memory
520, and additionally or alternatively, transmit one or more files
stored within memory 520 to the external device.
[0121] In other embodiments, image database 612 and/or system command
database 614 may be located remotely from memory 520, and be
accessible to other components of apparatus 110 (e.g., processing
unit 140) via one or more wireless connections (e.g., a wireless
network). While two databases are shown, it should be understood
that image database 612 and system command database 614 may be
combined and/or may comprise interconnected databases.
Image database 612 and/or system command database 614 may further
include computing components (e.g., database management system,
database server, etc.) configured to receive and process requests
for data stored in associated memory devices.
[0122] Image data storage module 602, image processing module 604,
OCR module 606, system command identification module 608, and
system command execution module 610 may be implemented in software,
hardware, firmware, a mix of any of those, or the like. For
example, if the modules are implemented in software, they may be
stored in memory 520, as shown in FIG. 6. Other components of
processing unit 140 and/or sensory unit 120 may be configured to
perform processes to implement and facilitate operations of image
data storage module 602, image processing module 604, OCR module 606,
system command identification module 608, and system command
execution module 610. Thus, image data storage module 602, image
processing module 604, OCR module 606, system command
identification module 608, and system command execution module 610
may include software, hardware, or firmware instructions (or a
combination thereof) executable by one or more processors (e.g.,
processor 540), alone or in various combinations with each other.
For example, image data storage module 602, image processing module
604, OCR module 606, system command identification module 608, and
system command execution module 610 may be configured to interact
with each other and/or other modules of apparatus 110 to perform
functions consistent with disclosed embodiments. In some
embodiments, any of the disclosed modules (e.g., image data storage
module 602, image processing module 604, OCR module 606, system
command identification module 608, and system command execution
module 610) may each include dedicated sensors (e.g., IR, image
sensors, etc.) and/or dedicated application processing devices to
perform the functionality associated with each module.
[0123] FIG. 7 is a flow diagram of an exemplary process 700 for
identifying and executing system commands based on captured image
data, according to disclosed embodiments. As described above,
sensory unit 120 may capture image data that includes textual
information and non-textual information disposed within a
corresponding field-of-view. Processing unit 140 may receive the
captured image data, and processor 540 may execute one or more
application modules to identify the textual and non-textual
information, and to execute one or more system commands that
correspond to the identified textual and non-textual information.
Process 700 provides further details on how processor 540
identifies and executes one or more system commands based on
captured image data.
[0124] In step 702, processor 540 may obtain captured image data.
In some aspects, sensory unit 120 may capture one or more images,
and the captured image data may be transmitted to processing unit
140 across wired or wireless communications link 130. Processor 540
may, in step 702, obtain the captured image data directly from
sensory module 120 across communications link 130, or
alternatively, processor 540 may retrieve the captured image data
from a corresponding data repository (e.g., image database 612 of
memory 520). By way of example, the captured image data may include
one or more regions of printed, displayed, or projected
information.
[0125] In step 704, processor 540 may analyze the captured image
data to identify portions of the captured image data that include
textual information. As described above, the textual information
may include, but is not limited to, printed, handwritten,
projected, coded, or displayed text, and processor 540 may perform
a layout analysis to detect the textual information within the
captured image data. By way of example, the detected portions may
include, but are not limited to, paragraphs of text, blocks of
text, zones and/or regions that include text, logos, titles,
captions, footnotes, and any additional or alternate portions of
the captured image data that includes printed, handwritten,
displayed, coded, and/or projected text.
[0126] Additionally or alternatively, processor 540 may analyze the
captured image data using image processing techniques in step 704
to identify non-textual information within the captured image data.
In certain aspects, the non-textual information may include, but is
not limited to, an image of a trigger (e.g., a human appendage or a
cane), an image of a person (e.g., a police officer, a firefighter,
or an airline employee), an image of a physical object (e.g., a
streetlight and/or a pedestrian crossing signal, a particular
vehicle, a map), and any additional or alternate image relevant
to the user of apparatus 110.
[0127] In step 706, processor 540 may identify one or more system
commands associated with the textual and non-textual information.
In one embodiment, to identify the one or more system commands,
processor 540 may obtain linking information in step 706 that
associates the system commands with corresponding portions of
textual information, non-textual information, or combinations of
textual and non-textual information. In certain aspects, processor
540 may access system command database 614 to obtain the linking
information. Alternatively, processor 540 may obtain the linking
information from a data repository in communication with apparatus
110 across a corresponding communications network using appropriate
communications protocols.
[0128] For example, processor 540 may determine in step 706 that a
system command is associated with textual information when the
linking information for that system command includes at least a
portion of the textual information. Additionally or alternatively,
processor 540 may also determine in step 706 that the system
command is associated with non-textual information when the linking
information for that system command includes information
identifying the non-textual information, either alone or in
combination with textual information.
[0129] Processor 540 may execute the one or more identified system
commands in step 708. Upon execution of the one or more identified
system commands by processor 540, exemplary process 700 ends.
[0130] As described above, one or more of the executed system
commands may correspond to an operation that modifies a functional
state of apparatus 110 or an external device in communications with
apparatus 110 (e.g., that causes apparatus 110 to enter a sleep
mode, a training mode, or an airplane mode). The executed system
commands may also correspond to processes performed by and
supported by an operating system of apparatus 110, which include,
but are not limited to, processes that initiate or terminate a
recording of audio or video content, download a stored digital
image (e.g., to image database 612), perform an action on one or
more files associated with a user of apparatus 110, transmit a
stored digital image to an external device in communication with
apparatus 110, update or restart the operating system of apparatus
110, backup stored content, obtain information indicative of a
status of a battery of apparatus 110, obtain audible instructions
regarding one or more functions of apparatus 110, and establish,
modify, or erase a user customization of apparatus 110 (e.g., a
volume associated with apparatus 110 or a gender of an audible
narration provided by apparatus 110). Further, and consistent with
the disclosed embodiments, at least one of the system commands may
be associated with a plurality of steps, which correspond to system
sub-commands executed sequentially by processor 540. The disclosed
embodiments are, however, not limited to such exemplary system
commands, and in additional embodiments, processor 540 may identify
(e.g., step 706) and execute (e.g., step 708) any additional or
alternate system command appropriate to processor 540, apparatus
110, and the captured image data.
[0131] FIG. 8 is a flow diagram of an exemplary process 800 for
identifying and executing system commands based on text within
captured image data, according to disclosed embodiments. As
described above, sensory unit 120 of apparatus 110 may capture
image data that includes textual information. In some embodiments,
processor 540 of apparatus 110 may execute one or more application
modules to identify the textual information within the captured
image data, retrieve machine-readable text from the identified
textual information, and execute one or more system commands
associated with the machine-readable text. Process 800 provides
further details on how processor 540 identifies and executes one or
more system commands based on text disposed within portions of
captured image data.
[0132] In step 802, processor 540 may obtain captured image data.
In some aspects, sensory unit 120 may capture one or more images,
and the captured image data may be transmitted to processing unit
140 across wired or wireless communications link 130. As described
above, processor 540 may obtain the captured image data in step 802
from sensory module 120 across communications link 130, or
alternatively, processor 540 may retrieve the captured image data
from a corresponding data repository (e.g., image database 612 of
memory 520). The captured image data may, in certain aspects,
include at least one of textual information (e.g., printed text,
handwritten text, displayed text, projected text, and coded text)
and non-textual information (e.g., images of physical objects,
persons, and triggers).
[0133] Processor 540 may analyze the captured image data in step
804 to identify portions of the captured image data that include
the textual information. In one embodiment, as described herein,
processor 540 may perform a layout analysis to detect the textual
information within the captured image data. By way of example, the
detected textual information may include, but is not limited to,
paragraphs of text, blocks of text, zones and/or regions that
include text, logos, titles, captions, footnotes, and any
additional or alternate portions of the image data that includes
printed, handwritten, displayed, coded, and/or projected text.
[0134] In step 806, processor 540 may perform an OCR process on the
detected textual information to identify and retrieve
machine-readable text. Further, in step 808, processor 540 may
identify one or more system commands associated with the recognized
text based on linking information that associates the system
commands with corresponding machine-readable text. In certain
aspects, system command database 614 may store the linking
information, and in step 808, processor 540 may access system
command database 614 to obtain the linking information, as
described above.
[0135] Processor 540 may execute the one or more identified system
commands in step 810. Upon execution of the one or more identified
system commands by processor 540, exemplary process 800 ends.
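By way of illustration only, the following self-contained Python sketch traces the overall flow of process 800: obtain image data, detect textual regions, retrieve machine-readable text, look up a linked system command, and execute it. Every helper is a hypothetical placeholder standing in for the modules of FIG. 6 and returns canned values for demonstration.

    # Minimal sketch of process 800 (steps 802-810).
    def obtain_captured_image():        # step 802
        return "captured image data"

    def detect_text_regions(image):     # step 804 (layout analysis)
        return ["region with boarding text"]

    def perform_ocr(image, regions):    # step 806
        return "Welcome Aboard! United Economy"

    def identify_command(text):         # step 808 (linking information lookup)
        return "enter_airplane_mode" if "welcome aboard" in text.lower() else None

    def execute_command(command):       # step 810
        print("executing:", command)

    image = obtain_captured_image()
    regions = detect_text_regions(image)
    text = perform_ocr(image, regions)
    command = identify_command(text)
    if command is not None:
        execute_command(command)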
[0136] As described above, one or more of the system commands may
correspond to an operation that modifies a functional state of
apparatus 110 or an external device in communications with
apparatus 110 (e.g., that causes apparatus 110 to enter a sleep
mode, a training mode, or an airplane mode). One or more of the
system commands may also correspond to processes performed by and
supported by an operating system of apparatus 110, which include,
but are not limited to, processes that initiate or terminate a
recording of audio or video content, download a stored digital
image (e.g., to image database 612), perform actions on one or more
files associated with a user of apparatus 110, transmit a stored
digital image to an external device in communication with apparatus
110, update or restart the operating system of apparatus 110,
backup stored content, obtain information indicative of a status of
a battery of apparatus 110, obtain audible instructions regarding
one or more functions of apparatus 110, and establish, modify, or
erase a user customization of apparatus 110 (e.g., a volume
associated with apparatus 110 or a gender of an audible narration
provided by apparatus 110). Further, and consistent with the
disclosed embodiments, at least one of the system commands may be
associated with a plurality of sequential steps, which correspond
to system sub-commands executed sequentially by processor 540.
[0137] In the embodiments described above, processor 540 may
identify system commands associated with one or more of textual
information and non-textual information disposed within captured
image data (e.g., step 706 of FIG. 7 and step 808 of FIG. 8), and
may execute the identified system commands (e.g., in step 708 of
FIG. 7 and step 810 of FIG. 8) without tactile or audible
confirmation from a user. In some instances, however, the executed
system commands may be associated with significant and often
irreversible impacts on an operation of apparatus 110. For example,
the executed system commands may erase a user-established
customization of apparatus 110, or alternatively, delete or modify
one or more image files stored by apparatus 110. In some
embodiments, described below in reference to FIG. 9, processor 540
may execute the identified system commands in response to user
input that confirms the user's intentions to execute the identified
system commands.
[0138] FIG. 9 is a flow diagram of an exemplary process 900 for
executing system commands based on received user confirmation,
according to disclosed embodiments. As described above, processor
540 may identify one or more system commands associated with at
least one of textual or non-textual information disposed within
captured image data. In some embodiments, processor 540 may execute
the identified system commands in response to confirmation of the
user's intention to execute the identified system commands. Process
900 provides further details on how processor 540 requests a
confirmation of the user's intention to execute a system command,
receives and processes input from the user, and executes the system
command based on the received input.
[0139] In step 902, processor 540 may identify a system command
associated with textual information, non-textual information, or
combinations of textual and non-textual information within captured
image data. For example, as described above in reference to FIGS. 7
and 8, the identified system command may be associated with one or
more portions of machine-readable text retrieved from the textual
information using a corresponding OCR process, and additionally or
alternatively, may be associated with elements of non-textual
information disposed within the captured image data.
[0140] Further, as described above, the identified system command
may correspond to an operation that modifies a functional state of
apparatus 110 or an external device in communications with
apparatus 110 (e.g., that causes apparatus 110 to enter a sleep
mode, a training mode, or an airplane mode). In other aspects, the
identified system command may cause processor 540 to initiate or
terminate a recording of audio or video content, download a stored
digital image (e.g., to image database 612), modify one or more
files associated with the user, transmit a stored digital image to
an external device in communication with apparatus 110, perform an
operation on a stored file, update or restart the operating system
of apparatus 110, backup stored content, obtain information
indicative of a status of a battery of apparatus 110, obtain audible
instructions regarding one or more functions of apparatus 110, and
establish, modify, or erase a user customization of apparatus 110
(e.g., a volume associated with apparatus 110 or a gender of an
audible narration provided by apparatus 110). Further, and
consistent with the disclosed embodiments, at least one of the
system commands may be associated with a plurality of steps, which
correspond to system sub-commands executed sequentially by
processor 540.
[0141] Referring back to FIG. 9, processor 540 may request that the
user confirm an intention to execute the identified system command in
step 904. In one embodiment, processor 540 may generate an audible
request, which may be presented to the user through a speaker or a
bone conduction headphone associated with processing unit 140. The
disclosed embodiments are, however, not limited to such audible
requests, and in further embodiments, processor 540 may generate
and provide a textual request to the user (e.g., by transmitting
the textual request as a message to a mobile communications device
of the user in communication with apparatus 110), a tactile request
to the user (e.g., a vibration of apparatus 110 of a predetermined
intensity and duration), or through any additional or alternate
mechanism appropriate to the user and to apparatus 110.
[0142] In step 906, processor 540 may detect user input indicative
of a response to the request for confirmation. In one embodiment,
the detected user input may include an audible response to the
request, spoken by the user into a microphone associated with
apparatus 110. Additionally or alternatively, the user input may
include a tactile response to the request for confirmation (e.g.,
the user may tap a sensor or other input device disposed on a
surface of apparatus 110). The disclosed embodiments are, however,
not limited to such exemplary user input, and in other embodiments,
the user input may include any additional or alternate form or
combination of inputs appropriate to the user and to apparatus
110.
[0143] Based on the detected user input, processor 540 may
determine in step 908 whether the user confirmed the intention to
execute the identified system command. If processor 540
determines that the user confirms the execution (e.g., step 908;
YES), processor 540 may execute the identified system command in
step 910, as described above. Exemplary process 900 is then
complete. Alternatively, if processor 540 determines that the user
elects not to confirm the execution (e.g., step 908; NO), exemplary
process 900 passes back to step 902, and processor 540 identifies
an additional system command for execution, as described above.
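By way of illustration only, the following Python sketch traces the confirmation flow of process 900: request a confirmation, interpret the user's response, and execute only on a positive answer. The audible and tactile channels described above are reduced to simple console input and output for demonstration purposes only.

    # Minimal sketch of process 900 (steps 904-910).
    def request_confirmation(command):                    # step 904
        return input(f"Execute '{command}'? (yes/no): ")

    def user_confirmed(response):                         # step 908
        return response.strip().lower() in ("yes", "y")

    def execute_command(command):                         # step 910
        print("executing:", command)

    def process_900(command):
        response = request_confirmation(command)          # step 906: detect user input
        if user_confirmed(response):
            execute_command(command)
        else:
            print("command not confirmed; awaiting the next command")

    process_900("erase_user_customization")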
[0144] Using the embodiments described above, apparatus 110 may
capture image data that includes one or more of textual and
non-textual information, identify a system command that corresponds
to the textual and/or visual information, and subsequently execute
the identified system command to modify an operational state of
apparatus 110. By way of example, a user of apparatus 110 may board
an airplane and, upon locating a corresponding seat, browse through
materials placed within a pocket or storage accessible to the user
(e.g., a seat-back pocket). As illustrated in FIG. 10, the user may
access an in-flight menu 1000 for the trip, and apparatus 110 may
capture an image that includes a portion 1020 of menu 1000
corresponding to a field-of-view of sensory unit 120.
[0145] As described above, processor 540 may identify textual
information within the captured image, may perform an OCR process
that retrieves machine-readable text from the textual information,
and may access system command database 614 to obtain linking
information associating one or more system commands with
corresponding portions of the recognized text. For example, as
illustrated in FIG. 10, processor 540 may leverage the linking
information to determine that text portion 1032 (e.g., "United.TM.
Economy"), text portion 1034 (e.g., "Welcome Aboard!"), and text
portion 1036 (e.g., "flights") each correspond to a system command
that causes apparatus 110 to enter an airplane mode. In certain
embodiments, processor 540 may execute the corresponding system
command, which disposes apparatus 110 into an airplane mode for the
duration of the flight.
[0146] The disclosed embodiments are, however, not limited to
processes that identify system commands associated with
machine-readable text, and in additional embodiments processor 540
may identify one or more system commands associated with
non-textual information disposed within the captured image data.
For example, as illustrated in FIG. 11, a user of apparatus 110 may
view an in-flight safety video 1100 after boarding an airplane, and
apparatus 110 may capture an image that includes a portion 1120 of
in-flight safety video 1100. Processor 540 may analyze portion 1120
to identify an image 1140 of a flight attendant demonstrating a
proper technique for securing the user's seat belt, and may
leverage linking information to determine that image 1140
corresponds to a system command that causes apparatus 110 to enter
the airplane mode. As described above, processor 540 may execute
the corresponding system command, which places apparatus 110 into
an airplane mode during the flight.
[0147] In additional embodiments, described above, apparatus 110
may capture image data that includes textual and non-textual
information projected onto a corresponding surface visible to a
user of apparatus 110, and may identify and execute one or more
system commands associated with the projected textual and/or
non-textual information. By way of example, a user of apparatus 110
may visit a movie theater to view a recently released feature film.
As illustrated in FIG. 12, the theater may project a reminder onto
a screen 1200 asking viewers to turn off or silence their mobile
communications devices, and apparatus 110 may capture an image that
includes a portion 1220 of the reminder corresponding to a
field-of-view of sensory unit 120.
[0148] Processor 540 may identify textual information within the
captured image, may perform an OCR process that retrieves
machine-readable text from the textual information, and may obtain
linking information associating one or more system commands with
corresponding portions of the recognized text. For example, as
illustrated in FIG. 12, processor 540 may leverage the linking
information to determine that text portion 1232 (e.g., "Turn Off
Your Phones") corresponds to a system command that causes apparatus
110 to enter a "sleep" or "silent" mode. In certain embodiments,
processor 540 may execute the corresponding system command, which
places apparatus 110 into the corresponding sleep or silent mode
for the duration of the feature.
[0149] Apparatus 110 may also capture image data including textual
and non-textual information displayed to a user of apparatus 110
through a display unit or touchscreen of a user device (e.g., a
television, a smart phone, tablet computer, laptop, or desktop
computer). For example, as illustrated in FIG. 13, the user may
view a web page 1300 (or other electronic document, such as an
email message or text message) that prompts the user to visit a
corresponding "app store" and upgrade an operating system without
cost, and apparatus 110 may capture an image that includes a
portion 1320 of displayed web page 1300 corresponding to a
field-of-view of sensory unit 120.
[0150] In certain aspects, processor 540 may identify textual
information within the captured image, may perform an OCR process
that retrieves machine-readable text from the textual information,
and may obtain linking information associating one or more system
commands with corresponding portions of the recognized text. For
example, as shown in FIG. 13, processor 540 may leverage the
linking information to determine that text portion 1332 (e.g.,
"Upgrade to the New OS") corresponds to a system command that
causes apparatus 110 to retrieve and install an update to an
operating system of apparatus 110. Processor 540 may then execute
the corresponding system command, which causes apparatus 110 to
obtain and install the corresponding update, and further, to
restart apparatus 110 to complete an installation process.
[0151] Further, in additional embodiments, apparatus 110 may
capture image data that includes handwritten textual information,
and may identify and execute one or more system commands associated
with the handwritten textual information. For example, as
illustrated in FIG. 14, a user of apparatus 110 may receive a
letter 1400 from his or her mother that asks the user to provide
copies of digital images by email. In such an instance, apparatus
110 may capture an image that includes a portion 1420 of letter
1400 corresponding to a field-of-view of sensory unit 120.
[0152] Processor 540 may analyze the captured image data to
identify the handwritten textual information, perform an OCR
process that retrieves machine-readable text from the handwritten
textual information, and obtain linking information associating one
or more system commands with corresponding portions of the
handwritten text. Using the linking information, processor 540 may
determine that text portion 1432 (e.g., "send copies of your new
pictures to me") corresponds to a system command that causes
apparatus 110 to identify one or more stored digital images (e.g.,
within image database 612), and download the identified digital
images to a user device in communications with apparatus 110 over a
corresponding wired or wireless communications network.
[0153] Processor 540 may execute the corresponding system command,
which causes apparatus 110 to download the identified images to the
user's communications device. Further, in additional embodiments,
the executed system command may also provide instructions to the
user's communications device to automatically transmit the
downloaded photos to the user's mother at email address 1434.
[0154] In additional embodiments, apparatus 110 may identify and
execute system commands based on combinations of textual and
non-textual information disposed within captured image data. For
example, a user of apparatus 110 may approach an exterior exit of a
building, but may be unaware of a street onto which the exit leads.
In such an instance, the user may point to an exit sign disposed
above the exit door, and apparatus 110 may capture an image that
includes both the exit sign, with its corresponding textual
information, and also an existence of the user's finger within the
field-of-view of apparatus 110.
[0155] By way of example, as illustrated in FIG. 15, a captured
image 1500 may include an image of an exit door 1510, textual
information corresponding to an exit sign 1520 (e.g., "EXIT TO
STREET"), and an image of a trigger 1530 (which corresponds to the
user's finger). In certain aspects, processor 540 may identify the
existence of the textual information and non-textual information
associated with trigger 1530, may identify a system command that
corresponds to the presence of both the textual information (e.g.,
"EXIT TO STREET") and trigger 1530 within the captured image data,
and execute the system command to provide positional information to
the user of apparatus 110.
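By way of illustration only, the following Python sketch shows identification of a command that requires both textual information and a non-textual trigger in the same captured image, as in the "EXIT TO STREET" example above. The detectors supplying the inputs, and the command name, are hypothetical.

    # Minimal sketch: a command linked to text AND a trigger in combination.
    def command_for_combination(recognized_text, detected_triggers):
        if "exit to street" in recognized_text.lower() and "finger" in detected_triggers:
            return "announce_current_street"              # requires text AND the trigger
        return None

    print(command_for_combination("EXIT TO STREET", {"finger"}))   # -> announce_current_street
    print(command_for_combination("EXIT TO STREET", set()))        # -> None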
[0156] In an embodiment, upon execution of the identified system
command, apparatus 110 may access a positioning system (e.g., a GPS
unit) to obtain a current position of apparatus 110, and access a
mapping system to identify a street onto which exit door 1510
leads. In certain aspects, the positioning and mapping systems may
be executed by apparatus 110, or alternatively, may be executed by
an external device in communication with apparatus 110 over a
corresponding wired or wireless communications network. Processor
540 may then provide an audible indication of the determined street
to the user of apparatus 110 (e.g., through a speaker or a bone
conduction headphone associated with processing unit 140).
[0157] The foregoing description has been presented for purposes of
illustration. It is not exhaustive and is not limited to the
precise forms or embodiments disclosed. Modifications and
adaptations will be apparent to those skilled in the art from
consideration of the specification and practice of the disclosed
embodiments. Additionally, although aspects of the disclosed
embodiments are described as being stored in memory, one skilled in
the art will appreciate that these aspects can also be stored on
other types of computer readable media, such as secondary storage
devices, for example, hard disks, floppy disks, or CD ROM, or other
forms of RAM or ROM, USB media, DVD, or other optical drive
media.
[0158] Computer programs based on the written description and
disclosed methods are within the skill of an experienced developer.
The various programs or program modules can be created using any of
the techniques known to one skilled in the art or can be designed
in connection with existing software. For example, program sections
or program modules can be designed in or by means of .Net
Framework, .Net Compact Framework (and related languages, such as
Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX
combinations, XML, or HTML with included Java applets. One or more
of such software sections or modules can be integrated into a
computer system or existing e-mail or browser software.
[0159] Moreover, while illustrative embodiments have been described
herein, the scope of the disclosure includes any and all embodiments
having equivalent elements, modifications, omissions, combinations
(e.g., of aspects across various embodiments), adaptations and/or
alterations as would be appreciated by those skilled in the art
based on the present disclosure. The limitations in the claims are to be
interpreted broadly based on the language employed in the claims
and not limited to examples described in the present specification
or during the prosecution of the application. The examples are to
be construed as non-exclusive. Furthermore, the steps of the
disclosed routines may be modified in any manner, including by
reordering steps and/or inserting or deleting steps. It is
intended, therefore, that the specification and examples be
considered as illustrative only, with a true scope and spirit being
indicated by the following claims and their full scope of
equivalents.
* * * * *