U.S. patent application number 13/049,870 was filed with the patent office on March 16, 2011 for a method and apparatus for visual biometric data capture, and was published on September 15, 2011 as publication number 2011/0221670.
This patent application is currently assigned to Osterhout Group, Inc. Invention is credited to John D. Haddick, Robert W. King, III, Robert Michael Lohse, and Ralph F. Osterhout.
Application Number | 13/049,870
Publication Number | 2011/0221670
Family ID | 44505790
Filed Date | 2011-03-16
Publication Date | 2011-09-15
United States Patent Application | 20110221670
Kind Code | A1
Inventors | King, III; Robert W.; et al.
Publication Date | September 15, 2011
METHOD AND APPARATUS FOR VISUAL BIOMETRIC DATA CAPTURE
Abstract
A method and apparatus for visual biometric data capture are
provided. The apparatus includes an interactive head-mounted
eyepiece worn by a user that includes an optical assembly through
which the user views a surrounding environment and displayed content.
The optical assembly comprises a corrective element that corrects
the user's view of the surrounding environment and an integrated
processor for handling content for display to the user. An integrated
optical sensor captures visual biometric data when the eyepiece is
positioned so that an individual is proximate to the eyepiece.
Visual biometric data is captured using the eyepiece and
is transmitted to a remote processing facility for interpretation.
The remote processing facility interprets the captured visual
biometric data and generates display content based on the
interpretation. This display content is delivered to the eyepiece
and displayed to the user.
Inventors: King, III; Robert W. (San Francisco, CA); Haddick, John D. (Berkeley, CA); Osterhout, Ralph F. (San Francisco, CA); Lohse, Robert Michael (Palo Alto, CA)
Assignee: Osterhout Group, Inc. (San Francisco, CA)
Family ID: 44505790
Appl. No.: 13/049,870
Filed: March 16, 2011
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
13/037,324 | Feb 28, 2011 |
13/049,870 | |
13/037,335 | Feb 28, 2011 |
13/037,324 | |
61/308,973 | Feb 28, 2010 |
61/373,791 | Aug 13, 2010 |
61/382,578 | Sep 14, 2010 |
61/410,983 | Nov 8, 2010 |
61/429,445 | Jan 3, 2011 |
61/429,447 | Jan 3, 2011 |
Current U.S. Class: 345/156; 345/8
Current CPC Class: G02B 2027/0138 20130101; G06Q 30/0261 20130101; G02B 2027/0178 20130101; G02B 27/0172 20130101; G06F 3/013 20130101; G06F 3/014 20130101; G06F 3/017 20130101; G06F 3/011 20130101; H04N 5/23293 20130101; G02B 27/017 20130101; G06Q 30/02 20130101; H04N 5/23206 20130101; H04N 5/44 20130101; G06F 3/012 20130101; H04N 5/2254 20130101; G06K 9/00617 20130101; G02C 11/10 20130101; H04N 5/232945 20180801; G06F 1/1673 20130101; G06K 9/00604 20130101
Class at Publication: 345/156; 345/8
International Class: G06F 3/01 20060101 G06F003/01; G09G 5/00 20060101 G09G005/00; G06F 17/30 20060101 G06F017/30
Claims
1. An apparatus for visual biometric data capture, comprising: an
interactive head-mounted eyepiece worn by a user, wherein the
eyepiece includes an optical assembly through which the user views
a surrounding environment and displayed content, wherein the
optical assembly comprises a corrective element that corrects the
user's view of the surrounding environment, an integrated processor
for handling content for display to the user, and an integrated
image source for introducing the content to the optical assembly;
an integrated optical sensor assembly for capturing visual
biometric data; and an integrated communications facility that
transmits the captured visual biometric data to a facility that
stores the visual biometric data in a biometric data database,
interprets the visual biometric data, generates display content
based on the captured visual biometric data and the interpretation,
and delivers the display content to the eyepiece.
2. The apparatus of claim 1, wherein the integrated optical sensor
assembly includes a camera mounted on the eyepiece for obtaining
visual biometric images of an individual proximate to the eyepiece, and
wherein the integrated communications facility includes a
transmission facility for transmitting obtained visual biometric
images to a remote computing facility, wherein the transmitted
visual biometric data is compared to previously stored visual
biometric data for personal identification purposes, wherein a
match indicative of a positive identification will result in data
being communicated to the eyepiece such that the displayed content
includes an indication of the identification.
3. A method for visual biometric data capture, comprising:
positioning an individual proximate to an eyepiece; capturing
visual biometric data from the individual positioned proximate to
the eyepiece, wherein the visual biometric data is a facial image
or an iris image; transmitting the captured visual biometric data
to a facility that stores the captured visual biometric data in a
biometric data database; interpreting the captured visual biometric
data; generating display content based on the interpretation of the
captured visual biometric data; and transmitting the display
content to the eyepiece for display to a user.
4. The method of claim 3, wherein the display content comprises a
report of results of the interpretation of the captured visual
biometric data.
5. The method of claim 4, wherein the report indicates that the
individual proximate to the eyepiece has previously captured visual
biometric data stored in the biometric data database.
6. The method of claim 4, wherein the report indicates that the
individual proximate to the eyepiece may or may not have previously
captured visual biometric data stored in the biometric data
database.
7. The method of claim 4, wherein the report indicates that the
individual proximate to the eyepiece does not have previously
captured visual biometric data stored in the biometric data
database.
8. The method of claim 3, wherein the report requests the user to
capture additional biometric data, wherein the biometric data
requested is of a different type.
9. The method of claim 8, wherein the additional biometric data
requested is an iris image and an audio sample.
10. The method of claim 8, wherein the additional biometric data
requested is a facial image or an iris image or an audio
sample.
11. The method of claim 8, wherein the additional biometric data
requested is a facial image and an audio sample.
12. The method of claim 8, wherein the additional biometric data
requested is an iris image and a facial image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of the following United
States nonprovisional patent applications, each of which is
incorporated herein by reference in its entirety:
[0002] U.S. patent application Ser. No. 13/037,324, filed Feb. 28,
2011, and U.S. patent application Ser. No. 13/037,335, filed Feb.
28, 2011, each of which claims the benefit of the following
provisional applications, each of which is hereby incorporated
herein by reference in its entirety: U.S. Provisional Patent
Application 61/308,973, filed Feb. 28, 2010; U.S. Provisional
Patent Application 61/373,791, filed Aug. 13, 2010; U.S.
Provisional Patent Application 61/382,578, filed Sep. 14, 2010;
U.S. Provisional Patent Application 61/410,983, filed Nov. 8, 2010;
U.S. Provisional Patent Application 61/429,445, filed Jan. 3, 2011;
and U.S. Provisional Patent Application 61/429,447, filed Jan. 3,
2011.
[0003] This application claims the benefit of the following United
States provisional applications, each of which is hereby
incorporated by reference in its entirety:
[0004] U.S. Provisional Patent Application 61/373,791, filed Aug.
13, 2010; U.S. Provisional Patent Application 61/382,578, filed
Sep. 14, 2010; U.S. Provisional Patent Application 61/410,983,
filed Nov. 8, 2010; U.S. Provisional Patent Application 61/429,445,
filed Jan. 3, 2011; and U.S. Provisional Patent Application
61/429,447, filed Jan. 3, 2011.
BACKGROUND
Field
[0005] The present disclosure relates to an augmented reality
eyepiece, associated control technologies, and applications for
use.
SUMMARY
[0006] In one embodiment, an eyepiece may include a nano-projector
(or micro-projector) comprising a light source and an LCoS display,
a (two-surface) freeform waveguide lens enabling TIR bounces, a
coupling lens disposed between the LCoS display and the freeform
waveguide, and a wedge-shaped optic (translucent correction lens)
adhered to the waveguide lens that enables proper viewing through
the lens whether the projector is on or off. The projector may
include an RGB LED module. The RGB LED module may emit field
sequential color, wherein the different colored LEDs are turned on
in rapid succession to form a color image that is reflected off the
LCoS display. The projector may have a polarizing beam splitter or
a projection collimator.
[0007] In one embodiment, an eyepiece may include a freeform
waveguide lens, a freeform translucent correction lens, a display
coupling lens and a micro-projector.
[0008] In another embodiment, an eyepiece may include a freeform
waveguide lens, a freeform correction lens, a display coupling
lens, and a micro-projector, providing a FOV of at least 80 degrees
and a virtual display FOV (diagonal) of approximately 25-30°.
[0009] In an embodiment, an eyepiece may include an optical wedge
waveguide optimized to match with the ergonomic factors of the
human head, allowing it to wrap around a human face.
[0010] In another embodiment, an eyepiece may include two freeform
optical surfaces and a waveguide to enable folding of the complex
optical paths within a very thin prism form factor.
[0011] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly, wherein the
displayed content comprises an interactive control element; and an
integrated camera facility that images the surrounding environment,
and identifies a user hand gesture as an interactive control
element location command, wherein the location of the interactive
control element remains fixed with respect to an object in the
surrounding environment, in response to the interactive control
element location command, regardless of a change in the viewing
direction of the user.
[0012] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; wherein the
displayed content comprises an interactive control element; and an
integrated camera facility that images a user's body part as it
interacts with the interactive control element, wherein the
processor removes a portion of the interactive control element by
subtracting the portion of the interactive control element that is
determined to be co-located with the imaged user body part based on
the user's view.
[0013] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly. The displayed
content may comprise an interactive keyboard control element, and
where the keyboard control element is associated with an input path
analyzer, a word matching search facility, and a keyboard input
interface. The user may input text by sliding a pointing device
(e.g. a finger, a stylus, and the like) across character keys of
the keyboard input interface in a sliding motion through an
approximate sequence of a word the user would like to input as
text, wherein the input path analyzer determines the characters
contacted in the input path, and the word matching facility finds a
best word match to the sequence of characters contacted and inputs
the best word match as input text.
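To make the keyboard input path concrete, the following Python sketch shows one way an input path analyzer and word matching facility could resolve a traced key sequence to a word. The dictionary, the squeeze/subsequence heuristic, and the example trace are illustrative assumptions, not the implementation described in this disclosure.

```python
# Illustrative input path analyzer and word matcher for a swipe-style keyboard.
# The dictionary, matching heuristic, and example trace are assumptions.

def squeeze(s):
    """Collapse consecutive duplicate letters ('hello' -> 'helo')."""
    out = []
    for ch in s:
        if not out or out[-1] != ch:
            out.append(ch)
    return "".join(out)

def is_subsequence(word, trace):
    """True if the word's letters occur, in order, within the traced keys."""
    it = iter(trace)
    return all(ch in it for ch in word)

def best_word_match(trace, dictionary):
    """Pick the dictionary word best explained by the traced key sequence."""
    trace = squeeze(trace)
    candidates = [w for w in dictionary
                  if w[0] == trace[0] and w[-1] == trace[-1]
                  and is_subsequence(squeeze(w), trace)]
    return max(candidates, key=len) if candidates else None

dictionary = ["hello", "help", "hull", "hero"]
trace = "hgfdertyuiklo"   # keys contacted while sliding from 'h' to 'o'
print(best_word_match(trace, dictionary))  # -> hello
```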
[0014] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and an integrated
camera facility that images an external visual cue, wherein the
integrated processor identifies and interprets the external visual
cue as a command to display content associated with the visual cue.
The visual cue may be a sign in the surrounding environment, and
where the projected content is associated with an advertisement.
The sign may be a billboard, and the advertisement a personalized
advertisement based on a preferences profile of the user. The
visual cue may be a hand gesture, and the projected content a
projected virtual keyboard. The hand gesture may be a thumb and
index finger gesture from a first user hand, and the virtual
keyboard projected on the palm of the first user hand, and where
the user is able to type on the virtual keyboard with a second user
hand. The hand gesture may be a thumb and index finger gesture
combination of both user hands, and the virtual keyboard projected
between the user hands as configured in the hand gesture, where the
user is able to type on the virtual keyboard using the thumbs of
the user's hands.
[0015] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and an integrated
camera facility that images a gesture, wherein the integrated
processor identifies and interprets the gesture as a command
instruction. The command instruction may provide manipulation of
the content for display, a command communicated to an external
device, and the like.
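A minimal sketch of how an identified gesture might be dispatched as a command instruction follows; the gesture names, command targets, and stub device classes are assumptions for illustration only.

```python
# Hypothetical dispatch of a recognized hand gesture to a command instruction.
# Gesture names, command targets, and the stub classes are illustrative assumptions.

GESTURE_COMMANDS = {
    "swipe_left":  ("display", "next_page"),
    "swipe_right": ("display", "previous_page"),
    "pinch":       ("display", "zoom_out"),
    "spread":      ("display", "zoom_in"),
    "thumbs_up":   ("external_device", "confirm"),
}

class Display:
    def apply(self, command):
        print(f"display content command: {command}")

class ExternalDevice:
    def send(self, command):
        print(f"external device command: {command}")

def dispatch_gesture(gesture, display, external_device):
    """Translate a recognized gesture into a command for the display or an external device."""
    target, command = GESTURE_COMMANDS.get(gesture, (None, None))
    if target == "display":
        display.apply(command)
    elif target == "external_device":
        external_device.send(command)
    # unrecognized gestures are ignored

dispatch_gesture("pinch", Display(), ExternalDevice())  # -> display content command: zoom_out
```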
[0016] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and a tactile
control interface mounted on the eyepiece that accepts control
inputs from the user through at least one of a user touching the
interface and the user being proximate to the interface.
[0017] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and at least one
of a plurality of head motion sensing control devices integrated
with the eyepiece that provide control commands to the processor as
command instructions based upon sensing a predefined head motion
characteristic.
[0018] The head motion characteristic may be a nod of the user's
head such that the nod is an overt motion dissimilar from ordinary
head motions. The overt motion may be a jerking motion of the head.
The control instructions may provide manipulation of the content
for display, be communicated to control an external device, and the
like.
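As one hypothetical illustration of distinguishing an overt nod from ordinary head motion, the sketch below thresholds gyroscope pitch-rate samples; the sampling rate, thresholds, and window length are assumed values, not parameters taken from this disclosure.

```python
# Illustrative detector for an overt "nod" command from gyroscope pitch-rate samples.
# The sampling rate, thresholds, and window length are assumptions.

NOD_RATE_THRESHOLD = 2.5   # rad/s; ordinary head motion is assumed to stay well below this
WINDOW = 10                # samples (~0.1 s at an assumed 100 Hz sampling rate)

def detect_nod(pitch_rates):
    """Return True if a fast down-then-up pitch motion occurs within a short window."""
    for i in range(len(pitch_rates) - WINDOW):
        window = pitch_rates[i:i + WINDOW]
        if min(window) < -NOD_RATE_THRESHOLD and max(window) > NOD_RATE_THRESHOLD:
            return True
    return False

# Slow, ordinary motion (no command) followed by a sharp, deliberate nod (command).
samples = [0.1] * 20 + [-3.0, -2.8, -1.0, 0.5, 2.9, 3.1] + [0.0] * 10
print(detect_nod(samples))  # -> True
```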
[0019] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly, wherein the
optical assembly includes an electrochromic layer that provides a
display characteristic adjustment that is dependent on displayed
content requirements and surrounding environmental conditions. In
embodiments, the display characteristic may be brightness,
contrast, and the like. The surrounding environmental condition may
be a level of brightness that without the display characteristic
adjustment would make the displayed content difficult to visualize
by the wearer of the eyepiece, where the display characteristic
adjustment may be applied to an area of the optical assembly where
content is being projected.
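A simple sketch of the kind of display characteristic adjustment described above is given below, mapping an ambient light reading to a display brightness and an electrochromic dimming level over the projected area; the lux breakpoints and output values are illustrative assumptions.

```python
# Illustrative adjustment of display brightness and electrochromic dimming from ambient light.
# The lux breakpoints and output ranges are assumptions for demonstration only.

def display_adjustment(ambient_lux):
    """Return (display_brightness, electrochromic_dimming), each in the range 0.0-1.0."""
    if ambient_lux < 100:          # dim indoor lighting
        return 0.4, 0.0
    if ambient_lux < 10_000:       # bright indoor / overcast outdoor
        return 0.7, 0.3
    return 1.0, 0.8                # direct sunlight: full brightness, heavy dimming

print(display_adjustment(50_000))  # -> (1.0, 0.8)
```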
[0020] In embodiments, the eyepiece may be an interactive
head-mounted eyepiece worn by a user wherein the eyepiece includes
an optical assembly through which the user may view a surrounding
environment and displayed content. The optical assembly may
comprise a corrective element that corrects the user's view of the
surrounding environment, and an integrated image source for
introducing the content to the optical assembly. Further, the
eyepiece may include an adjustable wrap around extendable arm
comprising a shape memory material for securing the position of
the eyepiece on the user's head. The extendable arm may extend from
an end of an eyepiece arm. The end of a wrap around extendable arm
may be covered with silicone. Further, the extendable arms may meet
and secure to each other or they may independently grasp a portion
of the head. In other embodiments, the extendable arm may attach to
a portion of the head mounted eyepiece to secure the eyepiece to
the user's head. In embodiments, the extendable arm may extend
telescopically from the end of the eyepiece arm. In other
embodiments, at least one of the wrap around extendable arms may be
detachable from the head mounted eyepiece. Also, the extendable arm
may be an add-on feature of the head mounted eyepiece.
[0021] In embodiments, the eyepiece may be an interactive
head-mounted eyepiece worn by a user wherein the eyepiece includes
an optical assembly through which the user may view a surrounding
environment and displayed content. The optical assembly may
comprise a corrective element that corrects the user's view of the
surrounding environment, and an integrated image source for
introducing the content to the optical assembly. Further, the
displayed content may comprise a local advertisement wherein the
location of the eyepiece is determined by an integrated location
sensor. Also, the local advertisement may have relevance to the
location of the eyepiece. In other embodiments, the eyepiece may
contain a capacitive sensor capable of sensing whether the eyepiece
is in contact with human skin. The local advertisement may be sent
to the user based on whether the capacitive sensor senses that the
eyepiece is in contact with human skin. The local advertisements
may also be sent in response to the eyepiece being powered on.
[0022] In other embodiments, the local advertisement may be
displayed to the user as a banner advertisement, two dimensional
graphic, or text. Further, the advertisement may be associated with
a physical aspect of the surrounding environment. In yet other
embodiments, the advertisement may be displayed as an augmented
reality associated with a physical aspect of the surrounding
environment. The augmented reality advertisement may be two or
three-dimensional. Further, the advertisement may be animated and
it may be associated with the user's view of the surrounding
environment. The local advertisements may also be displayed to the
user based on a web search conducted by the user and displayed
within the content of the search results. Furthermore, the content
of the local advertisement may be determined based on the user's
personal information. The user's personal information may be
available to a web application or an advertising facility. The
user's information may be used by a web application, an advertising
facility, or the eyepiece to filter the local advertising based on
the user's personal information. A local advertisement may be
cached on a server where it may be accessed by at least one of an
advertising facility, web application, and eyepiece and displayed
to the user.
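The sketch below illustrates one way cached local advertisements might be filtered by the eyepiece's location and the user's personal information; the advertisement records, the planar distance calculation, and the 1 km radius are assumptions for demonstration.

```python
# Illustrative filter that selects cached local advertisements relevant to the
# eyepiece's location and the user's preferences. All records, the planar
# kilometer coordinates, and the 1 km radius are assumptions.
from math import hypot

ADS = [
    {"text": "Coffee 2-for-1", "category": "food",   "location": (0.2, 0.1)},
    {"text": "Ski sale",       "category": "sports", "location": (5.0, 4.0)},
]

def local_ads(user_location, user_interests, ads, radius_km=1.0):
    """Return ads within radius_km of the user that match the user's interests."""
    return [ad for ad in ads
            if hypot(ad["location"][0] - user_location[0],
                     ad["location"][1] - user_location[1]) <= radius_km
            and ad["category"] in user_interests]

print(local_ads((0.0, 0.0), {"food"}, ADS))  # -> the coffee advertisement only
```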
[0023] In another embodiment, the user may request additional
information related to a local advertisement by making any of an
eye movement, body movement, or other gesture. Furthermore, a user
may ignore the local advertisement by making any of an eye
movement, body movement, or other gesture, or by not selecting the
advertisement for further interaction within a given period of time
from when the advertisement is displayed. In yet other embodiments,
the user may select to not allow local advertisements to be
displayed by selecting such an option on a graphical user
interface. Alternatively, the user may not allow such
advertisements by turning such a feature off via a control on the
eyepiece.
[0024] In one embodiment, the eyepiece may include an audio device.
Further, the displayed content may comprise a local advertisement
and audio. The location of the eyepiece may be determined by an
integrated location sensor and the local advertisement and audio
may have a relevance to the location of the eyepiece. As such, a
user may hear audio that corresponds to the displayed content and
local advertisements.
[0025] In an aspect, the interactive head-mounted eyepiece may
include an optical assembly, through which the user views a
surrounding environment and displayed content, wherein the optical
assembly includes a corrective element that corrects the user's
view of the surrounding environment and an optical waveguide with a
first and a second surface enabling total internal reflections. The
eyepiece may also include an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly. In this aspect,
displayed content may be introduced into the optical waveguide at
an angle of internal incidence that does not result in total
internal reflection. However, the eyepiece also includes a mirrored
surface on the first surface of the optical waveguide to reflect
the displayed content towards the second surface of the optical
waveguide. Thus, the mirrored surface enables a total reflection of
the light entering the optical waveguide or a reflection of at
least a portion of the light entering the optical waveguide. In
embodiments, the surface may be 100% mirrored or mirrored to a
lower percentage. In some embodiments, in place of a mirrored
surface, an air gap between the waveguide and the corrective
element may cause a reflection of the light that enters the
waveguide at an angle of incidence that would not result in
TIR.
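For context on the total internal reflection condition the waveguide relies on, the short worked example below computes the critical angle, arcsin(n2/n1), for an assumed waveguide material against air; the refractive indices are illustrative, not values specified in this disclosure.

```python
# Worked example of the total internal reflection condition the waveguide relies on.
# The refractive indices are illustrative assumptions (optical plastic against air).
from math import asin, degrees

n_waveguide = 1.53   # assumed index of the waveguide material
n_air = 1.00

critical_angle = degrees(asin(n_air / n_waveguide))
print(f"critical angle ~ {critical_angle:.1f} degrees")   # ~ 40.8 degrees

# Rays striking the surface beyond ~40.8 degrees from the normal are totally internally
# reflected; shallower rays would escape unless a mirrored surface reflects them back.
```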
[0026] In one aspect, the interactive head-mounted eyepiece may
include an optical assembly, through which the user views a
surrounding environment and displayed content, wherein the optical
assembly includes a corrective element that corrects the user's
view of the surrounding environment and an integrated processor for
handling content for display to the user. The eyepiece further
includes an integrated image source that introduces the content to
the optical assembly from a side of the optical waveguide adjacent
to an arm of the eyepiece, wherein the displayed content aspect
ratio is between approximately square to approximately rectangular
with the long axis approximately horizontal.
[0027] In an aspect, the interactive head-mounted eyepiece includes an
optical assembly through which a user views a surrounding
environment and displayed content, wherein the optical assembly
includes a corrective element that corrects the user's view of the
surrounding environment, a freeform optical waveguide enabling
internal reflections, and a coupling lens positioned to direct an
image from an LCoS display to the optical waveguide. The eyepiece
further includes an integrated processor for handling content for
display to the user and an integrated projector facility for
projecting the content to the optical assembly, wherein the
projector facility comprises a light source and the LCoS display,
wherein light from the light source is emitted under control of the
processor and traverses a polarizing beam splitter where it is
polarized before being reflected off the LCoS display and into the
optical waveguide. In another aspect, the interactive head-mounted
eyepiece includes an optical assembly through which a user views a
surrounding environment and displayed content, wherein the optical
assembly includes a corrective element that corrects the user's
view of the surrounding environment, an optical waveguide enabling
internal reflections, and a coupling lens positioned to direct an
image from an optical display to the optical waveguide. The
eyepiece further includes an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly, wherein the image
source comprises a light source and the optical display. The
corrective element may be a see-through correction lens attached to
the optical waveguide that enables proper viewing of the
surrounding environment whether the image source or projector
facility is on or off. The freeform optical waveguide may include
dual freeform surfaces that enable a curvature and a sizing of the
waveguide, wherein the curvature and the sizing enable placement of
the waveguide in a frame of the interactive head-mounted eyepiece.
The light source may be an RGB LED module that emits light
sequentially to form a color image that is reflected off the
optical or LCoS display. The eyepiece may further include a
homogenizer through which light from the light source is propagated
to ensure that the beam of light is uniform. A surface of the
polarizing beam splitter reflects the color image from the optical
or LCoS display into the optical waveguide. The eyepiece may
further include a collimator that improves the resolution of the
light entering the optical waveguide. Light from the light source
may be emitted under control of the processor and traverse a
polarizing beam splitter where it is polarized before being
reflected off the optical display and into the optical waveguide.
The optical display may be at least one of an LCoS and an LCD
display. The image source may be a projector, and wherein the
projector is at least one of a microprojector, a nanoprojector, and
a picoprojector. The eyepiece further includes a polarizing beam
splitter that polarizes light from the light source before being
reflected off the LCoS display and into the optical waveguide,
wherein a surface of the polarizing beam splitter reflects the
color image from the LCoS display into the optical waveguide.
[0028] In an embodiment, an apparatus for biometric data capture is
provided. Biometric data may be visual biometric data, such as
facial biometric data or iris biometric data, or may be audio
biometric data. The apparatus includes an optical assembly through
which a user views a surrounding environment and displayed content.
The optical assembly also includes a corrective element that
corrects the user's view of the surrounding environment. An
integrated processor handles content for display to the user on the
eyepiece. The eyepiece also incorporates an integrated image source
for introducing the content to the optical assembly. Biometric data
capture is accomplished with an integrated optical sensor assembly.
Audio data capture is accomplished with an integrated endfire
microphone array. Processing of the captured biometric data occurs
remotely and data is transmitted using an integrated communications
facility. A remote computing facility interprets and analyzes the
captured biometric data, generates display content based on the
captured biometric data, and delivers the display content to the
eyepiece.
[0029] A further embodiment provides a camera mounted on the
eyepiece for obtaining biometric images of an individual proximate
to the eyepiece.
[0030] A yet further embodiment provides a method for biometric
data capture. In the method an individual is placed proximate to
the eyepiece. This may be accomplished by the wearer of the
eyepiece moving into a position that permits the capture of the
desired biometric data. Once positioned, the eyepiece captures
biometric data and transmits the captured biometric data to a
facility that stores the captured biometric data in a biometric
data database. The biometric data database incorporates a remote
computing facility that interprets the received data and generates
display content based on the interpretation of the captured
biometric data. This display content is then transmitted back to
the user for display on the eyepiece.
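A hypothetical client-side sketch of this capture-transmit-display round trip is shown below; the endpoint URL, JSON payload, and helper functions are assumptions and do not represent a protocol defined in this disclosure.

```python
# Hypothetical client-side round trip for the biometric capture method: capture,
# transmit to a remote facility, and display the returned content. The endpoint,
# payload format, and helper functions are assumptions, not a defined protocol.
import json
from urllib import request

REMOTE_FACILITY_URL = "https://example.invalid/biometric/interpret"  # placeholder

def capture_iris_image():
    """Stand-in for the integrated optical sensor; returns raw image bytes."""
    return b"\x00" * 1024

def submit_biometric(image_bytes):
    """Send the captured data to the remote facility and return its response."""
    req = request.Request(
        REMOTE_FACILITY_URL,
        data=json.dumps({"type": "iris", "image_hex": image_bytes.hex()}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:      # remote facility stores and interprets
        return json.loads(resp.read())       # e.g. {"display": "Match: ID 1234"}

def run_capture_cycle(display):
    """Capture, transmit, and hand the generated content to the eyepiece display."""
    content = submit_biometric(capture_iris_image())
    display(content["display"])
```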
[0031] A yet further embodiment provides a method for audio
biometric data capture. In the method an individual is placed
proximate to the eyepiece. This may be accomplished by the wearer
of the eyepiece moving into a position that permits the capture of
the desired audio biometric data. Once positioned, the microphone
array captures audio biometric data and transmits the captured
audio biometric data to a facility that stores the captured audio
biometric data in a biometric data database. The audio biometric
data database incorporates a remote computing facility that
interprets the received data and generates display content based on
the interpretation of the captured audio biometric data. This
display content is then transmitted back to the user for display on
the eyepiece.
[0032] In embodiments, the eyepiece includes a see-through
correction lens attached to an exterior surface of the optical
waveguide that enables proper viewing of the surrounding
environment whether there is displayed content or not. The
see-through correction lens may be a prescription lens customized
to the user's corrective eyeglass prescription. The see-through
correction lens may be polarized and may attach to at least one of
the optical waveguide and a frame of the eyepiece, wherein the
polarized correction lens blocks oppositely polarized light
reflected from the user's eye. The see-through correction lens may
attach to at least one of the optical waveguide and a frame of the
eyepiece, wherein the correction lens protects the optical
waveguide, and may include at least one of a ballistic material and
an ANSI-certified polycarbonate material.
[0033] In one embodiment, an interactive head-mounted eyepiece
includes an eyepiece for wearing by a user, an optical assembly
mounted on the eyepiece through which the user views a surrounding
environment and a displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
environment, an integrated processor for handling content for
display to the user, an integrated image source for introducing the
content to the optical assembly, and an electrically adjustable
lens integrated with the optical assembly that adjusts a focus of
the displayed content for the user.
[0034] One embodiment concerns an interactive head-mounted
eyepiece. This interactive head-mounted eyepiece includes an
eyepiece for wearing by a user, an optical assembly mounted on the
eyepiece through which the user views a surrounding environment and
a displayed content, wherein the optical assembly comprises a
corrective element that corrects a user's view of the surrounding
environment, and an integrated processor of the interactive
head-mounted eyepiece for handling content for display to the user.
The interactive head-mounted eyepiece also includes an electrically
adjustable liquid lens integrated with the optical assembly, an
integrated image source of the interactive head-mounted eyepiece
for introducing the content to the optical assembly, and a memory
operably connected with the integrated processor, the memory
including at least one software program for providing a correction
for the displayed content by adjusting the electrically adjustable
liquid lens.
[0035] Another embodiment is an interactive head-mounted eyepiece
for wearing by a user. The interactive head-mounted eyepiece
includes an optical assembly mounted on the eyepiece through which
the user views a surrounding environment and a displayed content,
wherein the optical assembly comprises a corrective element that
corrects the user's view of the displayed content, and an
integrated processor for handling content for display to the user.
The interactive head-mounted eyepiece also includes an integrated
image source for introducing the content to the optical assembly,
an electrically adjustable liquid lens integrated with the optical
assembly that adjusts a focus of the displayed content for the
user, and at least one sensor mounted on the interactive
head-mounted eyepiece, wherein an output from the at least one
sensor is used to stabilize the displayed content of the optical
assembly of the interactive head mounted eyepiece using at least
one of optical stabilization and image stabilization.
[0036] One embodiment is a method for stabilizing images. The
method includes steps of providing an interactive head-mounted
eyepiece including a camera and an optical assembly through which a
user views a surrounding environment and displayed content, and
imaging the surrounding environment with the camera to capture an
image of an object in the surrounding environment. The method also
includes steps of displaying, through the optical assembly, the
content at a fixed location with respect to the user's view of the
imaged object, sensing vibration and movement of the eyepiece, and
stabilizing the displayed content with respect to the user's view
of the surrounding environment via at least one digital
technique.
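One digital technique of the kind referenced above can be sketched as shifting the rendered content opposite to the sensed eyepiece rotation; the pixels-per-degree gain and the example values below are assumptions.

```python
# Illustrative digital stabilization: shift the rendered overlay opposite to the
# sensed eyepiece motion so content stays registered to the imaged object.
# The pixels-per-degree gain and the example rotation values are assumptions.

PIXELS_PER_DEGREE = 12.0   # assumed display resolution per degree of head rotation

def stabilized_position(base_xy, yaw_deg, pitch_deg):
    """Offset the content position to counter the measured eyepiece rotation."""
    x, y = base_xy
    return (x - yaw_deg * PIXELS_PER_DEGREE,
            y - pitch_deg * PIXELS_PER_DEGREE)

# A 0.5 degree rightward jitter moves the overlay 6 pixels left to compensate.
print(stabilized_position((640, 360), yaw_deg=0.5, pitch_deg=0.0))  # -> (634.0, 360.0)
```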
[0037] Another embodiment is a method for stabilizing images. The
method includes steps of providing an interactive head-mounted
eyepiece including a camera and an optical assembly through which a
user views a surrounding environment and displayed content, the
assembly also comprising a processor for handling content for
display to the user and an integrated projector for projecting the
content to the optical assembly, and imaging the surrounding
environment with the camera to capture an image of an object in the
surrounding environment. The method also includes steps of
displaying, through the optical assembly, the content at a fixed
location with respect to the user's view of the imaged object,
sensing vibration and movement of the eyepiece, and stabilizing the
displayed content with respect to the user's view of the
surrounding environment via at least one digital technique.
[0038] One embodiment is a method for stabilizing images. The
method includes steps of providing an interactive, head-mounted
eyepiece worn by a user, wherein the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content, wherein the optical assembly comprises a
corrective element that corrects the user's view of the surrounding
environment, an integrated processor for handling content for
display to the user and an integrated image source for introducing
the content to the optical assembly, and imaging the surrounding
environment with a camera to capture an image of an object in the
surrounding environment. The method also includes steps of
displaying, through the optical assembly, the content at a fixed
location with respect to the user's view of the imaged object,
sensing vibration and movement of the eyepiece, sending signals
indicative of the vibration and movement of the eyepiece to the
integrated processor of the interactive head-mounted device, and
stabilizing the displayed content with respect to the user's view
of the environment via at least one digital technique.
[0039] Another embodiment is an interactive head-mounted eyepiece.
The interactive head-mounted eyepiece includes an eyepiece for
wearing by a user, an optical assembly mounted on the eyepiece
through which the user views a surrounding environment and a
displayed content, and a corrective element mounted on the eyepiece
that corrects the user's view of the surrounding environment. The
interactive, head-mounted eyepiece also includes an integrated
processor for handling content for display to the user, an
integrated image source for introducing the content to the optical
assembly, and at least one sensor mounted on the camera or the
eyepiece, wherein an output from the at least one sensor is used to
stabilize the displayed content of the optical assembly of the
interactive head mounted eyepiece using at least one digital
technique.
[0040] One embodiment is an interactive head-mounted eyepiece. The
interactive head-mounted eyepiece includes an interactive
head-mounted eyepiece for wearing by a user, an optical assembly
mounted on the eyepiece through which the user views a surrounding
environment and a displayed content, and an integrated processor of
the eyepiece for handling content for display to the user. The
interactive head-mounted eyepiece also includes an integrated image
source of the eyepiece for introducing the content to the optical
assembly, and at least one sensor mounted on the interactive
head-mounted eyepiece, wherein an output from the at least one
sensor is used to stabilize the displayed content of the optical
assembly of the interactive head mounted eyepiece using at least
one of optical stabilization and image stabilization.
[0041] Another embodiment is an interactive head-mounted eyepiece.
The interactive head-mounted eyepiece includes an eyepiece for
wearing by a user, an optical assembly mounted on the eyepiece
through which the user views a surrounding environment and a
displayed content and an integrated processor for handling content
for display to the user. The interactive head-mounted eyepiece also
includes an integrated image source for introducing the content to
the optical assembly, an electro-optic lens in series between the
integrated image source and the optical assembly for stabilizing
content for display to the user, and at least one sensor mounted on
the eyepiece or a mount for the eyepiece, wherein an output from
the at least one sensor is used to stabilize the electro-optic lens
of the interactive head mounted eyepiece.
[0042] Aspects disclosed herein include an interactive head-mounted
eyepiece worn by a user, wherein the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content, wherein the optical assembly comprises a
corrective element that corrects the user's view of the surrounding
environment, an integrated processor for handling content for
display to the user, and an integrated image source for introducing
the content to the optical assembly.
[0043] The eyepiece may further include a control device worn on a
hand of the user, including at least one control component actuated
by a digit of a hand of the user, and providing a control command
from the actuation of the at least one control component to the
processor as a command instruction. The command instruction may be
directed to the manipulation of content for display to the
user.
[0044] The eyepiece may further include a hand motion sensing
device worn on a hand of the user, and providing control commands
from the motion sensing device to the processor as command
instructions.
[0045] The eyepiece may further include a bi-directional optical
assembly through which the user views a surrounding environment
simultaneously with displayed content as transmitted through the
optical assembly from an integrated image source and a processor
for handling the content for display to the user and sensor
information from the sensor, wherein the processor correlates the
displayed content and the information from the sensor to indicate
the eye's line-of-sight relative to the projected image, and uses
the line-of-sight information relative to the projected image, plus
a user command indication, to invoke an action.
[0046] In the eyepiece, line of sight information for the user's
eye is communicated to the processor as command instructions.
[0047] The eyepiece may further include a hand motion sensing
device for tracking hand gestures within a field of view of the
eyepiece to provide control instructions to the eyepiece.
[0048] In an aspect, a method of social networking includes
contacting a social networking website using the eyepiece,
requesting information about members of the social networking
website using the interactive head-mounted eyepiece, and searching
for nearby members of the social networking website using the
interactive head-mounted eyepiece.
[0049] In an aspect, a method of social networking includes
contacting a social networking website using the eyepiece,
requesting information about other members of the social networking
website using the interactive head-mounted eyepiece, sending a
signal indicating a location of the user of the interactive
head-mounted eyepiece, and allowing access to information about the
user of the interactive head-mounted eyepiece.
[0050] In an aspect, a method of social networking includes
contacting a social networking website using the eyepiece,
requesting information about members of the social networking
website using the interactive, head-mounted eyepiece, sending a
signal indicating a location and at least one preference of the
user of the interactive, head-mounted eyepiece, allowing access to
information on the social networking site about preferences of the
user of the interactive, head-mounted eyepiece, and searching for
nearby members of the social networking website using the
interactive head-mounted eyepiece.
[0051] In an aspect, a method of gaming includes contacting an
online gaming site using the eyepiece, initiating or joining a game
of the online gaming site using the interactive head-mounted
eyepiece, viewing the game through the optical assembly of the
interactive head-mounted eyepiece, and playing the game by
manipulating at least one body-mounted control device using the
interactive, head mounted eyepiece.
[0052] In an aspect, a method of gaming includes contacting an
online gaming site using the eyepiece, initiating or joining a game
of the online gaming site with a plurality of members of the online
gaming site, each member using an interactive head-mounted eyepiece
system, viewing game content with the optical assembly, and playing
the game by manipulating at least one sensor for detecting
motion.
[0053] In an aspect, a method of gaming includes contacting an
online gaming site using the eyepiece, contacting at least one
additional player for a game of the online gaming site using the
interactive head-mounted eyepiece, initiating a game of the online
gaming site using the interactive head-mounted eyepiece, viewing
the game of the online gaming site with the optical assembly of the
interactive head-mounted eyepiece, and playing the game by
touchlessly manipulating at least one control using the interactive
head-mounted eyepiece.
[0054] In an aspect, a method of using augmented vision includes
providing an interactive head-mounted eyepiece including an optical
assembly through which a user views a surrounding environment and
displayed content, scanning the surrounding environment with a
black silicon short wave infrared (SWIR) image sensor, controlling
the SWIR image sensor through movements, gestures or commands of
the user, sending at least one visual image from the sensor to a
processor of the interactive head-mounted eyepiece, and viewing the
at least one visual image using the optical assembly, wherein the
black silicon short wave infrared (SWIR) sensor provides a night
vision capability.
[0055] In an aspect, a method of using augmented vision includes
providing an interactive head-mounted eyepiece including a camera
and an optical assembly through which a user views a surrounding
environment and displayed content, viewing the surrounding
environment with a camera and a black silicon short wave infra red
(SWIR) image sensor, controlling the camera through movements,
gestures or commands of the user, sending information from the
camera to a processor of the interactive head-mounted eyepiece, and
viewing visual images using the optical assembly, wherein the black
silicon short wave infrared (SWIR) sensor provides a night vision
capability.
[0056] In an aspect, a method of using augmented vision includes
providing an interactive head-mounted eyepiece including an optical
assembly through which a user views a surrounding environment and
displayed content, wherein the optical assembly comprises a
corrective element that corrects the user's view of the surrounding
environment, an integrated processor for handling content for
display to the user, and an integrated image source for introducing
the content to the optical assembly, viewing the surrounding
environment with a black silicon short wave infrared (SWIR) image
sensor, controlling scanning of the image sensor through movements
and gestures of the user, sending information from the image sensor
to a processor of the interactive head-mounted eyepiece, and
viewing visual images using the optical assembly, wherein the black
silicon short wave infrared (SWIR) sensor provides a night vision
capability.
[0057] In an aspect, a method of receiving information includes
contacting an accessible database using an interactive head-mounted
eyepiece including an optical assembly through which a user views a
surrounding environment and displayed content, requesting
information from the accessible database using the interactive
head-mounted eyepiece, and viewing information from the accessible
database using the interactive head-mounted eyepiece, wherein the
steps of requesting and viewing information are accomplished
without contacting controls of the interactive head-mounted device
by the user.
[0058] In an aspect, a method of receiving information includes
contacting an accessible database using the eyepiece, requesting
information from the accessible database using the interactive
head-mounted eyepiece, displaying the information using the optical
facility, and manipulating the information using the processor,
wherein the steps of requesting, displaying and manipulating are
accomplished without touching controls of the interactive
head-mounted eyepiece.
[0059] In an aspect, a method of receiving information includes
contacting an accessible database using the eyepiece, requesting
information from the accessible website using the interactive,
head-mounted eyepiece without touching of the interactive
head-mounted eyepiece by digits of the user, allowing access to
information on the accessible website without touching controls of
the interactive head-mounted eyepiece, displaying the information
using the optical facility, and manipulating the information using
the processor without touching controls of the interactive
head-mounted eyepiece.
[0060] In an aspect, a method of social networking includes
providing the eyepiece, scanning facial features of a nearby person
with an optical sensor of the head-mounted eyepiece, extracting a
facial profile of the person, contacting a social networking
website using a communications facility of the interactive
head-mounted eyepiece, and searching a database of the social
networking site for a match for the facial profile.
[0061] In an aspect, a method of social networking includes
providing the eyepiece, scanning facial features of a nearby person
with an optical sensor of the head-mounted eyepiece, extracting a
facial profile of the person, contacting a database using a
communications facility of the head-mounted eyepiece, and searching
the database for a person matching the facial profile.
[0062] In an aspect, a method of social networking includes
contacting a social networking website using the eyepiece,
requesting information about nearby members of the social
networking website using the interactive, head-mounted eyepiece,
scanning facial features of a nearby person identified as a member
of the social networking site with an optical sensor of the
head-mounted eyepiece, extracting a facial profile of the person,
and searching at least one additional database for information
concerning the person.
[0063] In one aspect, a method of using augmented vision includes
providing the eyepiece, controlling the camera through movements,
gestures or commands of the user, sending information from the
camera to a processor of the interactive head-mounted eyepiece, and
viewing visual images using the optical assembly, wherein visual
images from the camera and optical assembly are an improvement for
the user in at least one of focus, brightness, clarity and
magnification.
[0064] In another aspect, a method of using augmented vision,
includes providing the eyepiece, controlling the camera through
movements of the user without touching controls of the interactive
head-mounted eyepiece, sending information from the camera to a
processor of the interactive head-mounted eyepiece, and viewing
visual images using the optical assembly of the interactive
head-mounted eyepiece, wherein visual images from the camera and
optical assembly are an improvement for the user in at least one of
focus, brightness, clarity and magnification.
[0065] In one aspect, a method of using augmented vision includes
providing the eyepiece, controlling the camera through movements of
the user of the interactive head-mounted eyepiece, sending
information from the camera to the integrated processor of the
interactive head-mounted eyepiece, applying an image enhancement
technique using computer software and the integrated processor of
the interactive head-mounted eyepiece, and viewing visual images
using the optical assembly of the interactive head-mounted
eyepiece, wherein visual images from the camera and optical
assembly are an improvement for the user in at least one of focus,
brightness, clarity and magnification.
[0066] In one aspect, a method for facial recognition includes
capturing an image of a subject with the eyepiece, converting the
image to biometric data, comparing the biometric data to a database
of previously collected biometric data, identifying biometric data
matching previously collected biometric data, and reporting the
identified matching biometric data as displayed content.
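The comparison step of this facial recognition method might be sketched as a nearest-neighbor search over previously collected biometric feature vectors, as below; the feature vectors, database contents, and distance threshold are illustrative assumptions, and the feature extraction itself is assumed to happen elsewhere.

```python
# Illustrative comparison step for the facial recognition method: match a captured
# biometric feature vector against a database of previously collected vectors.
# The vectors, database contents, and threshold are assumptions.
from math import dist

DATABASE = {
    "subject_A": [0.12, 0.80, 0.33, 0.54],
    "subject_B": [0.90, 0.10, 0.75, 0.20],
}
MATCH_THRESHOLD = 0.25   # maximum distance treated as a positive identification

def identify(captured_vector, database=DATABASE):
    """Return (best_match_id, distance), or (None, distance) if no match is close enough."""
    best_id, best_d = min(((k, dist(v, captured_vector)) for k, v in database.items()),
                          key=lambda item: item[1])
    return (best_id, best_d) if best_d <= MATCH_THRESHOLD else (None, best_d)

print(identify([0.11, 0.82, 0.30, 0.55]))  # -> ('subject_A', ...)
```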
[0067] In another aspect, a system includes the eyepiece, a face
detection facility in association with the integrated processor
facility, wherein the face detection facility captures images of
faces in the surrounding environment, compares the captured images
to stored images in a face recognition database, and provides a
visual indication to indicate a match, where the visual indication
corresponds to the current position of the imaged face in the
surrounding environment as part of the projected content, and an
integrated vibratory actuator in the eyepiece, wherein the
vibratory actuator provides a vibration output to alert the user to
the match.
[0068] In one aspect, a method for augmenting vision includes
collecting photons with a short wave infrared sensor mounted on the
eyepiece, converting the collected photons in the short wave
infrared spectrum to electrical signals, relaying the electrical
signals to the eyepiece for display, collecting biometric data
using the sensor, collecting audio data using an audio sensor, and
transferring the collected biometric data and audio data to a
database.
[0069] In another aspect, a method for object recognition includes
capturing an image of an object with the eyepiece, analyzing the
object to determine if the object has been previously captured,
increasing the resolution of the areas of the captured image that
have not been previously captured and analyzed, and decreasing the
resolution of the areas of the captured image that have been
previously captured and analyzed.
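The bookkeeping behind this resolution adjustment could look like the sketch below, which fingerprints image regions and lowers the processing resolution for regions already captured and analyzed; the tiling scheme and hashing are assumptions for demonstration.

```python
# Illustrative bookkeeping for the object recognition method: regions of a captured
# image that were already analyzed are processed at reduced resolution, new regions
# at full resolution. The tiling and hashing are assumptions.
import hashlib

seen_tiles = set()   # fingerprints of previously captured and analyzed regions

def plan_resolution(tiles):
    """Return (tile_id, resolution) decisions and record new tiles as seen."""
    plan = []
    for tile_id, pixels in tiles:
        fingerprint = hashlib.sha256(pixels).hexdigest()
        if fingerprint in seen_tiles:
            plan.append((tile_id, "low"))    # already captured: decrease resolution
        else:
            seen_tiles.add(fingerprint)
            plan.append((tile_id, "high"))   # not captured before: increase resolution
    return plan

frame = [("tile-0", b"\x01\x02"), ("tile-1", b"\x03\x04")]
print(plan_resolution(frame))   # first pass: all "high"
print(plan_resolution(frame))   # second pass: all "low"
```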
[0070] These and other systems, methods, objects, features, and
advantages of the present disclosure will be apparent to those
skilled in the art from the following detailed description of the
embodiments and the drawings.
[0071] All documents mentioned herein are hereby incorporated in
their entirety by reference. References to items in the singular
should be understood to include items in the plural, and vice
versa, unless explicitly stated otherwise or clear from the text.
Grammatical conjunctions are intended to express any and all
disjunctive and conjunctive combinations of conjoined clauses,
sentences, words, and the like, unless otherwise stated or clear
from the context.
BRIEF DESCRIPTION OF THE FIGURES
[0072] The present disclosure and the following detailed
description of certain embodiments thereof may be understood by
reference to the following figures:
[0073] FIG. 1 depicts an illustrative embodiment of the optical
arrangement.
[0074] FIG. 2 depicts an RGB LED projector.
[0075] FIG. 3 depicts the projector in use.
[0076] FIG. 4 depicts an embodiment of the waveguide and correction
lens disposed in a frame.
[0077] FIG. 5 depicts a design for a waveguide eyepiece.
[0078] FIG. 6 depicts an embodiment of the eyepiece with a
see-through lens.
[0079] FIG. 7 depicts an embodiment of the eyepiece with a
see-through lens.
[0080] FIGS. 8a and 8b depict an embodiment of snap-fit optics.
[0081] FIG. 9 depicts an electrochromic layer of the eyepiece.
[0082] FIG. 10 depicts the advantages of the eyepiece in real-time
image enhancement, keystone correction, and virtual perspective
correction.
[0083] FIG. 11 depicts a plot of responsivity versus wavelength for
three substrates.
[0084] FIG. 12 illustrates the performance of the black silicon
sensor.
[0085] FIG. 13a depicts an incumbent night vision system, FIG. 13b
depicts the night vision system of the present disclosure, and FIG.
13c illustrates the difference in responsivity between the two.
[0086] FIG. 14 depicts a tactile interface of the eyepiece.
[0087] FIG. 14A depicts motions in an embodiment of the eyepiece
featuring nod control.
[0088] FIG. 15 depicts a ring that controls the eyepiece.
[0089] FIG. 15A depicts hand mounted sensors in an embodiment of a
virtual mouse.
[0090] FIG. 15B depicts a facial actuation sensor as mounted on the
eyepiece.
[0091] FIG. 15C depicts a hand pointing control of the
eyepiece.
[0092] FIG. 15D depicts a hand pointing control of the
eyepiece.
[0093] FIG. 15E depicts an example of eye tracking control.
[0094] FIG. 15F depicts a hand positioning control of the
eyepiece.
[0095] FIG. 16 depicts a location-based application mode of the
eyepiece.
[0096] FIG. 17 shows the difference in image quality between A) a
flexible platform of uncooled CMOS image sensors capable of
VIS/NIR/SWIR imaging and B) an image intensified night vision
system.
[0097] FIG. 18 depicts an augmented reality-enabled custom
billboard.
[0098] FIG. 19 depicts an augmented reality-enabled custom
advertisement.
[0099] FIG. 20 depicts an augmented reality-enabled custom artwork.
[0100] FIG. 20A depicts a method for posting messages to be
transmitted when a viewer reaches a certain location.
[0101] FIG. 21 depicts an alternative arrangement of the eyepiece
optics and electronics.
[0102] FIG. 22 depicts an alternative arrangement of the eyepiece
optics and electronics.
[0103] FIG. 23 depicts an alternative arrangement of the eyepiece
optics and electronics.
[0104] FIG. 24 depicts a lock position of a virtual keyboard.
[0105] FIG. 25 depicts a detailed view of the projector.
[0106] FIG. 26 depicts a detailed view of the RGB LED module.
[0107] FIG. 27 depicts a gaming network.
[0108] FIG. 28 depicts a method for gaming using augmented reality
glasses.
[0109] FIG. 29 depicts an exemplary electronic circuit diagram for
an augmented reality eyepiece.
[0110] FIG. 30 depicts partial image removal by the eyepiece.
[0111] FIG. 31 depicts a flowchart for a method of identifying a
person based on speech of the person as captured by microphones of
the augmented reality device.
[0112] FIG. 32 depicts a typical camera for use in video calling or
conferencing.
[0113] FIG. 33 illustrates an embodiment of a block diagram of a
video calling camera.
[0114] FIG. 34 depicts embodiments of the eyepiece for optical or
digital stabilization.
[0115] FIG. 35 depicts an embodiment of a classic Cassegrain
configuration.
[0116] FIG. 36 depicts the configuration of the microcassegrain
telescoping folded optic camera.
[0117] FIG. 37 depicts a swipe process with a virtual keyboard.
[0118] FIG. 38 depicts a target marker process for a virtual
keyboard.
[0119] FIG. 39 illustrates glasses for biometric data capture
according to an embodiment.
[0120] FIG. 40 illustrates iris recognition using the biometric
data capture glasses according to an embodiment.
[0121] FIG. 41 depicts face and iris recognition according to an
embodiment.
[0122] FIG. 42 illustrates use of dual omni-microphones according
to an embodiment.
[0123] FIG. 43 depicts the directionality improvements with
multiple microphones.
[0124] FIG. 44 shows the use of adaptive arrays to steer the audio
capture facility according to an embodiment.
[0125] FIG. 45 depicts a block diagram of a system including the
eyepiece.
DETAILED DESCRIPTION
[0126] The present disclosure relates to eyepiece electro-optics.
The eyepiece may include projection optics suitable to project an
image onto a see-through or translucent lens, enabling the wearer
of the eyepiece to view the surrounding environment as well as the
displayed image. The projection optics, also known as a projector,
may include an RGB LED module that uses field sequential color.
With field sequential color, a single full color image may be
broken down into color fields based on the primary colors of red,
green, and blue and imaged by an LCoS (liquid crystal on silicon)
optical display 210 individually. As each color field is imaged by
the optical display 210, the corresponding LED color is turned on.
When these color fields are displayed in rapid sequence, a full
color image may be seen. With field sequential color illumination,
the resulting projected image in the eyepiece can be adjusted for
any chromatic aberrations by shifting the red image relative to the
blue and/or green image and so on. The image may thereafter be
reflected into a two surface freeform waveguide where the image
light engages in total internal reflections (TIR) until reaching
the active viewing area of the lens where the user sees the image.
A processor, which may include a memory and an operating system,
may control the LED light source and the optical display. The
projector may also include or be optically coupled to a display
coupling lens, a condenser lens, a polarizing beam splitter, and a
field lens.
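By way of illustration only, and not as part of the disclosed apparatus, the following Python sketch shows the field sequential color loop described above: one full-color frame is split into red, green, and blue fields that are imaged one at a time with the matching LED enabled. The set_led and write_lcos callbacks are hypothetical stand-ins for the LED driver and the LCoS panel interface.

    import time

    def show_field_sequential(frame_rgb, set_led, write_lcos, field_rate_hz=180):
        """Display one full-color frame as three sequential color fields.
        frame_rgb is an HxWx3 numpy-style array; set_led and write_lcos are
        hypothetical hardware callbacks, not an actual eyepiece API."""
        field_period = 1.0 / field_rate_hz           # 180 fields/s -> 60 full-color frames/s
        for idx, color in enumerate("RGB"):
            write_lcos(frame_rgb[:, :, idx])         # image one primary-color field on the LCoS
            set_led(color)                           # turn on only the matching LED
            time.sleep(field_period)                 # hold the field for its time slot
        set_led(None)                                # blank the LEDs between frames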
[0127] Referring to FIG. 1, an illustrative embodiment of the
augmented reality eyepiece 100 is depicted. It will be
understood that embodiments of the eyepiece 100 may not include all
of the elements depicted in FIG. 1 while other embodiments may
include additional or different elements. In embodiments, the
optical elements may be embedded in the arm portions 122 of the
frame 102 of the eyepiece. Images may be projected with a projector
108 onto at least one lens 104 disposed in an opening of the frame
102. One or more projectors 108, such as a nanoprojector,
picoprojector, microprojector, femtoprojector, LASER-based
projector, holographic projector, and the like may be disposed in
an arm portion of the eyepiece frame 102. In embodiments, both
lenses 104 are see-through or translucent while in other
embodiments only one lens 104 is translucent while the other is
opaque or missing. In embodiments, more than one projector 108 may
be included in the eyepiece 100.
[0128] In embodiments such as the one depicted in FIG. 1, the
eyepiece 100 may also include at least one articulating ear bud
120, a radio transceiver 118 and a heat sink 114 to absorb heat
from the LED light engine, to keep it cool and to allow it to
operate at full brightness. There is also a TI OMAP4 (open
multimedia applications processor) 112, and a flex cable with RF
antenna 110, all of which will be further described herein.
[0129] In an embodiment and referring to FIG. 2, the projector 200
may be an RGB projector. The projector 200 may include a housing
202, a heatsink 204 and an RGB LED engine or module 206. The RGB
LED engine 206 may include LEDs, dichroics, concentrators, and the
like. A digital signal processor (DSP) (not shown) may convert the
images or video stream into control signals, such as voltage
drops/current modifications, pulse width modulation (PWM) signals,
and the like to control the intensity, duration, and mixing of the
LED light. For example, the DSP may control the duty cycle of each
PWM signal to control the average current flowing through each LED,
thereby generating a plurality of colors. A still image co-processor of the
eyepiece may employ noise-filtering, image/video stabilization, and
face detection, and be able to make image enhancements. An audio
back-end processor of the eyepiece may employ buffering, SRC,
equalization and the like.
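As a minimal sketch of the PWM control described above, the following Python function maps requested RGB intensities to duty cycles and average LED currents; the 0.5 A per-die ceiling echoes the exemplary LED module described later, and a real driver would also apply gamma correction and thermal limits.

    def pwm_duty_cycles(rgb_intensity, max_current_a=0.5):
        """Map requested intensities (0.0-1.0) to PWM duty cycles and the
        resulting average LED currents; illustrative only."""
        duties, avg_currents = [], []
        for intensity in rgb_intensity:
            duty = min(max(intensity, 0.0), 1.0)       # duty cycle tracks requested intensity
            duties.append(duty)
            avg_currents.append(duty * max_current_a)  # average current = duty * peak current
        return duties, avg_currents

    # Example: a warm mix of the three primaries
    duty, current = pwm_duty_cycles([1.0, 0.8, 0.6])
    print(duty, current)                               # [1.0, 0.8, 0.6] [0.5, 0.4, 0.3]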
[0130] The projector 200 may include an optical display 210, such
as an LCoS display, and a number of components as shown. In
embodiments, the projector 200 may be designed with a single panel
LCoS display 210; however, a three panel display may be possible as
well. In the single panel embodiment, the display 210 is
illuminated with red, blue, and green sequentially (aka field
sequential color). In other embodiments, the projector 200 may make
use of alternative optical display technologies, such as a back-lit
liquid crystal display (LCD), a front-lit LCD, a transflective LCD,
an organic light emitting diode (OLED), a field emission display
(FED), a ferroelectric LCoS (FLCOS) and the like.
[0131] The eyepiece may be powered by any power supply, such as
battery power, solar power, line power, and the like. The power may
be integrated in the frame 102 or disposed external to the eyepiece
100 and in electrical communication with the powered elements of
the eyepiece 100. For example, a solar energy collector may be
placed on the frame 102, on a belt clip, and the like. Battery
charging may occur using a wall charger, car charger, on a belt
clip, in an eyepiece case, and the like.
[0132] The projector 200 may include the LED light engine 206,
which may be mounted on heat sink 204 and holder 208, for ensuring
vibration-free mounting for the LED light engine, hollow tapered
light tunnel 220, diffuser 212 and condenser lens 214. Hollow
tunnel 220 helps to homogenize the rapidly-varying light from the
RGB LED light engine. In one embodiment, hollow light tunnel 220
includes a silvered coating. The diffuser lens 212 further
homogenizes and mixes the light before the light is led to the
condenser lens 214. The light leaves the condenser lens 214 and
then enters the polarizing beam splitter (PBS) 218. In the PBS, the
LED light is propagated and split into polarization components
before it is refracted to a field lens 216 and the LCoS display
210. The LCoS display provides the image for the microprojector.
The image is then reflected from the LCoS display and back through
the polarizing beam splitter, and then reflected ninety degrees.
Thus, the image leaves microprojector 200 in about the middle of
the microprojector. The light then is led to the coupling lens 504,
described below.
[0133] In an embodiment, the digital signal processor (DSP) may be
programmed and/or configured to receive video feed information and
configure the video feed to drive whatever type of image source is
being used with the optical display 210. The DSP may include a bus
or other communication mechanism for communicating information, and
an internal processor coupled with the bus for processing the
information. The DSP may include a memory, such as a random access
memory (RAM) or other dynamic storage device (e.g., dynamic RAM
(DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled
to the bus for storing information and instructions to be executed.
The DSP can include a non-volatile memory such as for example a
read only memory (ROM) or other static storage device (e.g.,
programmable ROM (PROM), erasable PROM (EPROM), and electrically
erasable PROM (EEPROM)) coupled to the bus for storing static
information and instructions for the internal processor. The DSP
may include special purpose logic devices (e.g., application
specific integrated circuits (ASICs)) or configurable logic devices
(e.g., simple programmable logic devices (SPLDs), complex
programmable logic devices (CPLDs), and field programmable gate
arrays (FPGAs)).
[0134] The DSP may include at least one computer readable medium or
memory for holding instructions programmed and for containing data
structures, tables, records, or other data necessary to drive the
optical display. Examples of computer readable media suitable for
applications of the present disclosure may be compact discs, hard
disks, floppy disks, tape, magneto-optical disks, PROMs (EPROM,
EEPROM, flash EPROM), DRAM, SRAM, SDRAM, or any other magnetic
medium, compact discs (e.g., CD-ROM), or any other optical medium,
punch cards, paper tape, or other physical medium with patterns of
holes, a carrier wave (described below), or any other medium from
which a computer can read. Various forms of computer readable media
may be involved in carrying out one or more sequences of one or
more instructions to the optical display 210 for execution. The DSP
may also include a communication interface to provide a data
communication coupling to a network link that can be connected to,
for example, a local area network (LAN), or to another
communications network such as the Internet. Wireless links may
also be implemented. In any such implementation, an appropriate
communication interface can send and receive electrical,
electromagnetic or optical signals that carry digital data streams
representing various types of information (such as the video
information) to the optical display 210.
[0135] In another embodiment, FIGS. 21 and 22 depict an alternate
arrangement of the waveguide and projector in exploded view. In
this arrangement, the projector is placed just behind the hinge of
the arm of the eyepiece and it is vertically oriented such that the
initial travel of the RGB LED signals is vertical until the
direction is changed by a reflecting prism in order to enter the
waveguide lens. The vertically arranged projection engine may have
a PBS 218 at the center, the RGB LED array at the bottom, a hollow,
tapered tunnel with thin film diffuser to mix the colors for
collection in an optic, and a condenser lens. The PBS may have a
pre-polarizer on an entrance face. The pre-polarizer may be aligned
to transmit light of a certain polarization, such as p-polarized
light and reflect (or absorb) light of the opposite polarization,
such as s-polarized light. The polarized light may then pass
through the PBS to the field lens 216. The purpose of the field
lens 216 may be to create near telecentric illumination of the LCoS
panel. The LCoS display may be truly reflective, reflecting colors
sequentially with correct timing so the image is displayed
properly. Light may reflect from the LCoS panel and, for bright
areas of the image, may be rotated to s-polarization. The light
then may refract through the field lens 216 and may be reflected at
the internal interface of the PBS and exit the projector, heading
toward the coupling lens. The hollow, tapered tunnel 220 may
replace the homogenizing lenslet from other embodiments. By
vertically orienting the projector and placing the PBS in the
center, space is saved and the projector is able to be placed in a
hinge space with little moment arm hanging from the waveguide.
[0136] Light entering the waveguide may be polarized, such as
s-polarized. When this light reflects from the user's eye, it may
appear as a "night glow" from the user's eye. This night glow may
be eliminated by attaching lenses to the waveguide or frame, such
as the snap-fit optics described herein, that are oppositely
polarized from the light reflecting from the user's eye, such as
p-polarized in this case.
[0137] In FIGS. 21-22, augmented reality eyepiece 2100 includes a
frame 2102 and left and right earpieces or temple pieces 2104.
Protective lenses 2106, such as ballistic lenses, are mounted on
the front of the frame 2102 to protect the eyes of the user or to
correct the user's view of the surrounding environment if they are
prescription lenses. The front portion of the frame may also be
used to mount a camera or image sensor 2130 and one or more
microphones 2132. Not visible in FIG. 21, waveguides are mounted in
the frame 2102 behind the protective lenses 2106, one on each side
of the center or adjustable nose bridge 2138. The front cover 2106
may be interchangeable, so that tints or prescriptions may be
changed readily for the particular user of the augmented reality
device. In one embodiment, each lens is quickly interchangeable,
allowing for a different prescription for each eye. In one
embodiment, the lenses are quickly interchangeable with snap-fits
as discussed elsewhere herein. Certain embodiments may only have a
projector and waveguide combination on one side of the eyepiece
while the other side may be filled with a regular lens, reading
lens, prescription lens, or the like. The left and right ear pieces
2104 each vertically mount a projector or microprojector 2114 or
other image source atop a spring-loaded hinge 2128 for easier
assembly and vibration/shock protection. Each temple piece also
includes a temple housing 2116 for mounting associated electronics
for the eyepiece, and each may also include an elastomeric head
grip pad 2120, for better retention on the user. Each temple piece
also includes extending, wrap-around ear buds 2112 and an orifice
2126 for mounting a headstrap 2142.
[0138] As noted, the temple housing 2116 contains electronics
associated with the augmented reality eyepiece. The electronics may
include several circuit boards, as shown, such as for the
microprocessor and radios 2122, the communications system on a chip
(SOC) 2124, and the open multimedia applications processor (OMAP)
processor board 2140. The communications system on a chip (SOC) may
include electronics for one or more communications capabilities,
including a wireless local area network (WLAN), Bluetooth™
communications, frequency modulation (FM) radio, a global
positioning system (GPS), a 3-axis accelerometer, one or more
gyroscopes, and the like. In addition, the right temple piece may
include an optical trackpad (not shown) on the outside of the
temple piece for user control of the eyepiece and one or more
applications.
[0139] The frame 2102 is in a general shape of a pair of
wrap-around sunglasses. The sides of the glasses include
shape-memory alloy straps 2134, such as nitinol straps. The nitinol
or other shape-memory alloy straps are fitted for the user of the
augmented reality eyepiece. The straps are tailored so that they
assume their trained or preferred shape when worn by the user and
warmed to near body temperature.
[0140] Other features of this embodiment include detachable,
noise-cancelling earbuds. As seen in the figure, the earbuds are
intended for connection to the controls of the augmented reality
eyepiece for delivering sounds to ears of the user. The sounds may
include inputs from the wireless internet or telecommunications
capability of the augmented reality eyepiece. The earbuds also
include soft, deformable plastic or foam portions, so that the
inner ears of the user are protected in a manner similar to
earplugs. In one embodiment, the earbuds limit inputs to the user's
ears to about 85 dB. This allows for normal hearing by the wearer,
while providing protection from gunshot noise or other explosive
noises. In one embodiment, the controls of the noise-cancelling
earbuds have an automatic gain control for very fast adjustment of
the cancelling feature in protecting the wearer's ears.
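A minimal sketch of the hearing-protection behavior, assuming a per-device calibration constant relating digital full scale to sound pressure level, might look like the following; the values are illustrative only and are not taken from the disclosure.

    import numpy as np

    def limit_to_85_db(samples, full_scale_spl_db=100.0, limit_db=85.0):
        """Attenuate an audio block so its estimated level does not exceed
        roughly 85 dB SPL. full_scale_spl_db (the SPL produced by a
        full-scale signal) is an assumed calibration value."""
        rms = np.sqrt(np.mean(np.square(samples))) + 1e-12
        level_db = full_scale_spl_db + 20.0 * np.log10(rms)   # estimated SPL of this block
        if level_db <= limit_db:
            return samples                                    # already within the limit
        gain = 10.0 ** ((limit_db - level_db) / 20.0)         # attenuation needed to reach the cap
        return samples * gain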
[0141] FIG. 23 depicts a layout of the vertically arranged
projector 2114, where the illumination light passes from bottom to
top through one side of the PBS on its way to the display and
imager board, which may be silicon backed, is refracted as image
light where it hits the internal interfaces of the triangular
prisms that constitute the polarizing beam splitter, and is
reflected out of the projector and into the waveguide lens. In this
example, the dimensions of the projector are shown with the width
of the imager board being 11 mm, the distance from the end of the
imager board to the image centerline being 10.6 mm, and the
distance from the image centerline to the end of the LED board
being about 11.8 mm.
[0142] A detailed and assembled view of the components of the
projector discussed above may be seen in FIG. 25. This view depicts
how compact the micro-projector 2500 is when assembled, for
example, near a hinge of the augmented reality eyepiece.
Microprojector 2500 includes a housing and a holder 208 for
mounting certain of the optical pieces. As each color field is
imaged by the optical display 210, the corresponding LED color is
turned on. The RGB LED light engine 202 is depicted near the
bottom, mounted on heat sink 204. The holder 208 is mounted atop
the LED light engine 202, the holder mounting light tunnel 220,
diffuser lens 212 (to eliminate hotspots) and condenser lens 214.
Light passes from the condenser lens into the polarizing beam
splitter 218 and then to the field lens 216. The light then
refracts onto the LCoS (liquid crystal on silicon) chip 210, where
an image is formed. The light for the image then reflects back
through the field lens 216 and is polarized and reflected
90° through the polarizing beam splitter 218. The light then
leaves the microprojector for transmission to the optical display
of the glasses.
[0143] FIG. 26 depicts an exemplary RGB LED module. In this
example, the LED is a 2×2 array with 1 red, 1 blue and 2 green die,
and the LED array has 4 cathodes and a common anode. The maximum
current may be 0.5 A per die, and the maximum voltage (≈4 V) may be
needed for the green and blue die.
[0144] FIG. 3 depicts an embodiment of a horizontally disposed
projector in use. The projector 300 may be disposed in an arm
portion of an eyepiece frame. The LED module 302, under processor
control 304, may emit a single color at a time in rapid sequence.
The emitted light may travel down a light tunnel 308 and through at
least one homogenizing lenslet 310 before encountering a polarizing
beam splitter 312 and being deflected towards an LCoS display 314
where a full color image is displayed. The LCoS display may have a
resolution of 1280×720p. The image may then be reflected back
up through the polarizing beam splitter, reflected off a fold
mirror 318 and travel through a collimator on its way out of the
projector and into a waveguide. The projector may include a
diffractive element to eliminate aberrations.
[0145] In an embodiment, the interactive head-mounted eyepiece
includes an optical assembly through which a user views a
surrounding environment and displayed content, wherein the optical
assembly includes a corrective element that corrects the user's
view of the surrounding environment, a freeform optical waveguide
enabling internal reflections, and a coupling lens positioned to
direct an image from an optical display, such as an LCoS display,
to the optical waveguide. The eyepiece further includes an
integrated processor for handling content for display to the user
and an integrated image source, such as a projector facility, for
introducing the content to the optical assembly. In embodiments
where the image source is a projector, the projector facility
includes a light source and the optical display. Light from the
light source, such as an RGB module, is emitted under control of
the processor and traverses a polarizing beam splitter where it is
polarized before being reflected off the optical display, such as
the LCoS display or LCD display in certain other embodiments, and
into the optical waveguide. A surface of the polarizing beam
splitter may reflect the color image from the optical display into
the optical waveguide. The RGB LED module may emit light
sequentially to form a color image that is reflected off the
optical display. The corrective element may be a see-through
correction lens that is attached to the optical waveguide to enable
proper viewing of the surrounding environment whether the image
source is on or off. This corrective element may be a wedge-shaped
correction lens, and may be prescription, tinted, coated, or the
like. The freeform optical waveguide, which may be described by a
higher order polynomial, may include dual freeform surfaces that
enable a curvature and a sizing of the waveguide. The curvature and
the sizing of the waveguide enable its placement in a frame of the
interactive head-mounted eyepiece. This frame may be sized to fit a
user's head in a similar fashion to sunglasses or eyeglasses. Other
elements of the optical assembly of the eyepiece include a
homogenizer through which light from the light source is propagated
to ensure that the beam of light is uniform and a collimator that
improves the resolution of the light entering the optical
waveguide.
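Since the freeform waveguide surfaces are said to be described by a higher-order polynomial, a generic surface sag can be evaluated as in the following illustrative Python sketch; the coefficients shown are placeholders, not an actual waveguide prescription.

    def freeform_sag(x, y, coeffs):
        """Evaluate a freeform surface sag z(x, y) = sum c[(i, j)] * x**i * y**j.
        coeffs is a dict {(i, j): value}; the example values below are
        illustrative only."""
        return sum(c * (x ** i) * (y ** j) for (i, j), c in coeffs.items())

    # Hypothetical low-order prescription (coordinates and sag in mm):
    example_coeffs = {(2, 0): 1.2e-3, (0, 2): 8.5e-4, (2, 2): -3.0e-6, (4, 0): 2.0e-7}
    print(freeform_sag(5.0, 3.0, example_coeffs))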
[0146] Referring to FIG. 4, the image light, which may be polarized
and collimated, may optionally traverse a display coupling lens
412, which may or may not be the collimator itself or in addition
to the collimator, and enter the waveguide 414. In embodiments, the
waveguide 414 may be a freeform waveguide, where the surfaces of
the waveguide are described by a polynomial equation. The waveguide
may be rectilinear. The waveguide 414 may include two reflective
surfaces. When the image light enters the waveguide 414, it may
strike a first surface with an angle of incidence greater than the
critical angle above which total internal reflection (TIR) occurs.
The image light may engage in TIR bounces between the first surface
and a second facing surface, eventually reaching the active viewing
area 418 of the composite lens. In an embodiment, light may engage
in at least three TIR bounces. Since the waveguide 414 tapers to
enable the TIR bounces to eventually exit the waveguide, the
thickness of the composite lens 420 may not be uniform. Distortion
through the viewing area of the composite lens 420 may be minimized
by disposing a wedge-shaped correction lens 410 along a length of
the freeform waveguide 414 in order to provide a uniform thickness
across at least the viewing area of the lens 420. The correction
lens 410 may be a prescription lens, a tinted lens, a polarized
lens, and the like.
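The TIR condition mentioned above depends on the critical angle of the waveguide material. Assuming a typical optical-plastic refractive index of about 1.53 (an assumption for illustration, not a design value), the critical angle can be computed as follows:

    import math

    def critical_angle_deg(n_waveguide=1.53, n_outside=1.0):
        """Critical angle above which total internal reflection occurs,
        theta_c = arcsin(n_outside / n_waveguide)."""
        return math.degrees(math.asin(n_outside / n_waveguide))

    print(round(critical_angle_deg(), 1))   # roughly 40.8 degrees for n = 1.53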
[0147] In some embodiments, while the optical waveguide may have a
first surface and a second surface enabling total internal
reflections of the light entering the waveguide, the light may not
actually enter the waveguide at an internal angle of incidence that
would result in total internal reflection. The eyepiece may include
a mirrored surface on the first surface of the optical waveguide to
reflect the displayed content towards the second surface of the
optical waveguide. Thus, the mirrored surface enables a total
reflection of the light entering the optical waveguide or a
reflection of at least a portion of the light entering the optical
waveguide. In embodiments, the surface may be 100% mirrored or
mirrored to a lower percentage. In some embodiments, in place of a
mirrored surface, an air gap between the waveguide and the
corrective element may cause a reflection of the light that enters
the waveguide at an angle of incidence that would not result in
TIR.
[0148] In an embodiment, the eyepiece includes an integrated image
source, such as a projector, that introduces content for display to
the optical assembly from a side of the optical waveguide adjacent
to an arm of the eyepiece. As opposed to prior art optical
assemblies where image injection occurs from a top side of the
optical waveguide, the present disclosure provides image injection
to the waveguide from a side of the waveguide. The displayed
content aspect ratio is between approximately square and
approximately rectangular, with the long axis approximately
horizontal. In embodiments, the displayed content aspect ratio is
16:9. In embodiments, achieving a rectangular aspect ratio for the
displayed content where the long axis is approximately horizontal
may be done via rotation of the injected image. In other
embodiments, it may be done by stretching the image until it
reaches the desired aspect ratio.
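As a simple illustration of stretching an approximately square injected image to a 16:9 aspect ratio with the long axis horizontal, the following sketch computes the stretched width for a given height; rotation of the injected image would be handled as a separate step.

    def stretch_to_aspect(width_px, height_px, target_ratio=16 / 9):
        """Return (new_width, height) needed to stretch a roughly square
        injected image to the target aspect ratio, keeping the height."""
        return int(round(height_px * target_ratio)), height_px

    print(stretch_to_aspect(600, 600))   # (1067, 600), i.e. 16:9 with the long axis horizontal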
[0149] FIG. 5 depicts a design for a waveguide eyepiece showing
sample dimensions. For example, in this design, the width of the
coupling lens 504 may be 13-15 mm, with the optical display
502 optically coupled in series. These elements may be disposed in
an arm of an eyepiece. Image light from the optical display 502 is
projected through the coupling lens 504 into the freeform waveguide
508. The thickness of the composite lens, including waveguide 508
and correction lens 510, may be 9 mm. In this design, the waveguide
508 enables an exit pupil diameter of 8 mm with an eye clearance of
20 mm. The resultant see-through view 512 may be about 60-70 mm.
The distance from the pupil to the image light path as it enters
the waveguide 508 (dimension a) may be about 50-60 mm, which can
accommodate a large percentage of human head breadths. In an embodiment, the
field of view may be larger than the pupil. In embodiments, the
field of view may not fill the lens. It should be understood that
these dimensions are for a particular illustrative embodiment and
should not be construed as limiting. In an embodiment, the
waveguide, snap-on optics, and/or the corrective lens may comprise
optical plastic. In other embodiments, the waveguide, snap-on
optics, and/or the corrective lens may comprise glass, marginal
glass, bulk glass, metallic glass, palladium-enriched glass, or
other suitable glass. In embodiments, the waveguide 508 and
correction lens 510 may be made from different materials selected
to result in little to no chromatic aberrations. The materials may
include a diffraction grating, a holographic grating, and the
like.
[0150] In embodiments such as that shown in FIG. 1, the projected
image may be a stereo image when two projectors 108 are used for
the left and right images. To enable stereo viewing, the projectors
108 may be disposed at an adjustable distance from one another that
enables adjustment based on the interpupillary distance for
individual wearers of the eyepiece.
[0151] Having described certain embodiments of the eyepiece, we
turn to describing various additional features, applications for
use 4512, control technologies and external control devices 4508,
associated external devices 4504, software, networking
capabilities, integrated sensors 4502, external processing
facilities 4510, associated third party facilities 4514, and the
like. External devices 4504 for use with the eyepiece include
devices useful in entertainment, navigation, computing,
communication, and the like. External control devices 4508 include
a ring/hand or other haptic controller, external device enabling
gesture control (e.g. non-integral camera, device with embedded
accelerometer), I/F to external device, and the like. External
processing facilities 4510 include local processing facilities,
remote processing facilities, I/F to external applications, and the
like. Applications for use 4512 include those for commercial,
consumer, military, education, government, augmented reality,
advertising, media, and the like. Various third party facilities
4514 may be accessed by the eyepiece or work in conjunction with
the eyepiece. Eyepieces 100 may interact with other eyepieces 100
through wireless communication, near-field communication, a wired
communication, and the like.
[0152] FIG. 6 depicts an embodiment of the eyepiece 600 with a
see-through or translucent lens 602. A projected image 618 can be
seen on the lens 602. In this embodiment, the image 618 that is
being projected onto the lens 602 happens to be an augmented
reality version of the scene that the wearer is seeing, wherein
tagged points of interest (POI) in the field of view are displayed
to the wearer. The augmented reality version may be enabled by a
forward facing camera embedded in the eyepiece (not shown in FIG.
6) that images what the wearer is looking at and identifies the
location/POI. In one embodiment, the output of the camera or
optical transmitter may be sent to the eyepiece controller or
memory for storage, for transmission to a remote location, or for
viewing by the person wearing the eyepiece or glasses. For example,
the video output may be streamed to the virtual screen seen by the
user. The video output may thus be used to help determine the
user's location, or may be sent remotely to others to assist in
helping to locate the location of the wearer, or for any other
purpose. Other detection technologies, such as GPS, RFID, manual
input, and the like, may be used to determine a wearer's location.
Using location or identification data, a database may be accessed
by the eyepiece for information that may be overlaid, projected or
otherwise displayed with what is being seen. Augmented reality
applications and technology will be further described herein.
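A minimal sketch of the location-to-overlay step might filter a POI database by distance from the wearer's position, as below; the POI entries, search radius, and equirectangular distance approximation are illustrative assumptions, not the disclosure's method.

    import math

    def nearby_pois(lat, lon, poi_list, radius_m=200.0):
        """Return points of interest within radius_m of the wearer's position.
        poi_list entries are (name, lat, lon); an equirectangular
        approximation is adequate at these distances."""
        results = []
        for name, plat, plon in poi_list:
            dx = math.radians(plon - lon) * math.cos(math.radians(lat)) * 6371000.0
            dy = math.radians(plat - lat) * 6371000.0
            if math.hypot(dx, dy) <= radius_m:
                results.append(name)
        return results

    # Illustrative database; real data would come from a location/POI service.
    pois = [("Coffee House", 37.7750, -122.4195), ("Book Store", 37.7790, -122.4230)]
    print(nearby_pois(37.7749, -122.4194, pois))   # ['Coffee House']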
[0153] In FIG. 7, an embodiment of the eyepiece 700 is depicted
with a translucent lens 702 on which is being displayed streaming
media (an e-mail application) and an incoming call notification. In
this embodiment, the media obscures a portion of the viewing area,
however, it should be understood that the displayed image may be
positioned anywhere in the field of view. In embodiments, the media
may be made to be more or less transparent.
[0154] In an embodiment, the eyepiece may receive input from any
external source, such as an external converter box. The source may
be depicted in the lens of eyepiece. In an embodiment, when the
external source is a phone, the eyepiece may use the phone's
location capabilities to display location-based augmented reality,
including marker overlay from marker-based AR applications. In
embodiments, a VNC client running on the eyepiece's processor or an
associated device may be used to connect to and control a computer,
where the computer's display is seen in the eyepiece by the wearer.
In an embodiment, content from any source may be streamed to the
eyepiece, such as a display from a panoramic camera riding atop a
vehicle, a user interface for a device, imagery from a drone or
helicopter, and the like. The lenses may be chromic, such as
photochromic or electrochromic. The electrochromic lens may include
integral chromic material or a chromic coating which changes the
opacity of at least a portion of the lens in response to a burst of
charge applied by the processor across the chromic material. For
example, and referring to FIG. 9, a chromic portion 902 of the lens
904 is shown darkened, such as for providing greater viewability by
the wearer of the eyepiece when that portion is showing displayed
content to the wearer. In embodiments, there may be a plurality of
chromic areas on the lens that may be controlled independently,
such as large portions of the lens, sub-portions of the projected
area, programmable areas of the lens and/or projected area,
controlled to the pixel level, and the like. Activation of the
chromic material may be controlled via the control techniques
further described herein or automatically enabled with certain
applications (e.g. a streaming video application, a sun tracking
application) or in response to a frame-embedded UV sensor. The lens
may have an angular sensitive coating which enables transmitting
light-waves with low incident angles and reflecting light, such as
s-polarized light, with high incident angles. The chromic coating
may be controlled in portions or in its entirety, such as by the
control technologies described herein. The lenses may be variable
contrast. In embodiments, the user may wear the interactive
head-mounted eyepiece, where the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content. The optical assembly may include a corrective
element that corrects the user's view of the surrounding
environment, an integrated processor for handling content for
display to the user, and an integrated image source for introducing
the content to the optical assembly. The optical assembly may
include an electrochromic layer that provides a display
characteristic adjustment that is dependent on displayed content
requirements and surrounding environmental conditions. In
embodiments, the display characteristic may be brightness,
contrast, and the like. The surrounding environmental condition may
be a level of brightness that without the display characteristic
adjustment would make the displayed content difficult to visualize
by the wearer of the eyepiece, where the display characteristic
adjustment may be applied to an area of the optical assembly where
content is being displayed.
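As a rough sketch of how the electrochromic display characteristic adjustment might be driven by surrounding brightness, the following function picks an opacity for the chromic area behind displayed content; the lux thresholds and opacity levels are assumptions for illustration only.

    def chromic_opacity(ambient_lux, display_on):
        """Pick an opacity (0 = clear, 1 = fully dark) for the chromic area
        behind displayed content as a function of ambient brightness.
        Thresholds are illustrative, not values from the disclosure."""
        if not display_on:
            return 0.0                      # keep the see-through view unobstructed
        if ambient_lux > 10000:             # bright daylight
            return 0.8
        if ambient_lux > 1000:              # overcast or bright indoor light
            return 0.5
        return 0.2                          # dim surroundings need little darkening

    print(chromic_opacity(25000, True))     # 0.8, darken strongly behind the content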
[0155] In embodiments, the eyepiece may have brightness, contrast,
spatial extent, resolution, and the like control over the eyepiece display
area, such as to alter and improve the user's view of the displayed
content against a bright or dark surrounding environment. For
example, a user may be using the eyepiece under bright daylight
conditions, and in order for the user to clearly see the displayed
content, the display area may need to be altered in brightness and/or
contrast. Alternatively, the viewing area surrounding the display
area may be altered. In addition, the area altered, whether within
the display area or not, may be spatially oriented or controlled
per the application being implemented. For instance, only a small
portion of the display area may need to be altered, such as when
that portion of the display area deviates from some determined or
predetermined contrast ratio between the displayed portion of the
display area and the surrounding environment. In embodiments,
portions of the lens may be altered in brightness, contrast,
spatial extent, resolution, and the like, such as fixed to include
the entire display area, adjusted to only a portion of the lens,
adaptable and dynamic to changes in lighting conditions of the
surrounding environment and/or the brightness-contrast of the
displayed content, and the like. Spatial extent (e.g. the area
affected by the alteration) and resolution (e.g. display optical
resolution) may vary over different portions of the lens, including
high resolution segments, low resolution segments, single pixel
segments, and the like, where differing segments may be combined to
achieve the viewing objectives of the application(s) being
executed. In embodiments, technologies for implementing alterations
of brightness, contrast, spatial extent, resolution, and the like,
may include electrochromic materials, LCD technologies, embedded
beads in the optics, flexible displays, suspension particle device
(SPD) technologies, colloid technologies, and the like.
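A minimal sketch of the spatially selective adjustment described above might divide the display area into a grid and flag cells whose contrast ratio against the surrounding environment falls below a target; the 3:1 threshold and 4×4 grid are assumptions, not values from the disclosure.

    import numpy as np

    def regions_needing_boost(display_luma, background_luma, min_ratio=3.0, grid=(4, 4)):
        """Flag grid cells where displayed-content luminance over background
        luminance falls below min_ratio. Both inputs are HxW arrays in the
        same relative luminance units."""
        h, w = display_luma.shape
        gh, gw = grid
        flagged = []
        for r in range(gh):
            for c in range(gw):
                ys = slice(r * h // gh, (r + 1) * h // gh)
                xs = slice(c * w // gw, (c + 1) * w // gw)
                ratio = (display_luma[ys, xs].mean() + 1e-6) / (background_luma[ys, xs].mean() + 1e-6)
                if ratio < min_ratio:
                    flagged.append((r, c))   # this cell should be brightened, or the lens darkened behind it
        return flagged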
[0156] In embodiments, there may be various modes of activation of
the electrochromic layer. For example, the user may enter sunglass
mode where the composite lenses appear only somewhat darkened or
the user may enter "Blackout" mode, where the composite lenses
appear completely blackened.
[0157] An example of a technology that may be employed in
implementing the alterations of brightness, contrast, spatial
extent, resolution, and the like is electrochromic materials,
films, inks, and the like. Electrochromism is the phenomenon
displayed by some materials of reversibly changing appearance when
electric charge is applied. Various types of materials and
structures can be used to construct electrochromic devices,
depending on the specific applications. For instance,
electrochromic materials include tungsten oxide (WO₃), which
is the main chemical used in the production of electrochromic
windows or smart glass. In embodiments, electrochromic coatings may
be used on the lens of the eyepiece in implementing alterations. In
another example, electrochromic displays may be used in
implementing `electronic paper`, which is designed to mimic the
appearance of ordinary paper, where the electronic paper displays
reflected light like ordinary paper. In embodiments,
electrochromism may be implemented in a wide variety of
applications and materials, including gyricon (consisting of
polyethylene spheres embedded in a transparent silicone sheet, with
each sphere suspended in a bubble of oil so that they can rotate
freely), electro-phoretic displays (forming images by rearranging
charged pigment particles using an applied electric field), E-Ink
technology, electro-wetting, electro-fluidic, interferometric
modulator, organic transistors embedded into flexible substrates,
nano-chromics displays (NCD), and the like.
[0158] Another example of a technology that may be employed in
implementing the alterations of brightness, contrast, spatial
extent, resolution, and the like is suspended particle devices
(SPD). When a small voltage is applied to an SPD film, its
microscopic particles, which in their stable state are randomly
dispersed, become aligned and allow light to pass through. The
response may be immediate, uniform, and with stable color
throughout the film. Adjustment of the voltage may allow users to
control the amount of light, glare and heat passing through. The
system's response may range from a dark blue appearance, with up to
full blockage of light in its off state, to clear in its on state.
In embodiments, SPD technology may be an emulsion applied on a
plastic substrate creating the active film. This plastic film may
be laminated (as a single glass pane), suspended between two sheets
of glass, plastic or other transparent materials, and the like.
[0159] The augmented reality glasses may include a lens 818 for
each eye of the wearer. The lenses 818 may be made to fit readily
into the frame 814, so that each lens may be tailored for the
person for whom the glasses are intended. Thus, the lenses may be
corrective lenses, and may also be tinted for use as sunglasses, or
have other qualities suitable for the intended environment. Thus,
the lenses may be tinted yellow, dark or other suitable color, or
may be photochromic, so that the transparency of the lens decreases
when exposed to brighter light. In one embodiment, the lenses may
also be designed for snap fitting into the frames, i.e., snap on
lenses are one embodiment.
[0160] Of course, the lenses need not be corrective lenses; they
may simply serve as sunglasses or as protection for the optical
system within the frame. It goes without saying that the outer
lenses are important for helping to protect the rather expensive
waveguides, viewing systems and electronics within the augmented
reality glasses. At a minimum, the outer lenses offer protection
from scratching by the environment of the user, whether sand,
brambles, thorns and the like, in one environment, and flying
debris, bullets and shrapnel, in another environment. In addition,
the outer lenses may be decorative, acting to change a look of the
lens, perhaps to appeal to the individuality or fashion sense of a
user. The outer lenses may also help one individual user to
distinguish his or her glasses from others, for example, when many
users are gathered together.
[0161] It is desirable that the lenses be suitable for impact, such
as a ballistic impact. Accordingly, in one embodiment, the lenses
and the frames meet ANSI Standard Z87.1-2010 for ballistic
resistance. In one embodiment, the lenses also meet ballistic
standard CE EN166B. In another embodiment, for military uses, the
lenses and frames may meet the standards of MIL-PRF-31013,
standards 3.5.1.1 or 4.4.1.1. Each of these standards has slightly
different requirements for ballistic resistance and each is
intended to protect the eyes of the user from impact by high-speed
projectiles or debris. While no particular material is specified,
polycarbonate, such as certain Lexan® grades, usually is
sufficient to pass tests specified in the appropriate standard.
[0162] In one embodiment, as shown in FIG. 8a, the lenses snap in
from the outside of the frame, not the inside, for better impact
resistance, since any impact is expected from the outside of the
augmented reality eyeglasses. In this embodiment, replaceable lens
819 has a plurality of snap-fit arms 819a which fit into recesses
820a of frame 820. The engagement angle 819b of the arm is greater
than 90°, while the engagement angle 820b of the recess is
also greater than 90°. Making the angles greater than right
angles has the practical effect of allowing removal of lens 819
from the frame 820. The lens 819 may need to be removed if the
person's vision has changed or if a different lens is desired for
any reason. The design of the snap fit is such that there is a
slight compression or bearing load between the lens and the frame.
That is, the lens may be held firmly within the frame, such as by a
slight interference fit of the lens within the frame.
[0163] The cantilever snap fit of FIG. 8a is not the only possible
way to removably snap-fit the lenses and the frame. For example, an
annular snap fit may be used, in which a continuous sealing lip of
the frame engages an enlarged edge of the lens, which then
snap-fits into the lip, or possibly over the lip. Such a snap fit
is typically used to join a cap to an ink pen. This configuration
may have an advantage of a sturdier joint with fewer chances for
admission of very small dust and dirt particles. Possible
disadvantages include the fairly tight tolerances required around
the entire periphery of both the lens and frame, and the
requirement for dimensional integrity in all three dimensions over
time.
[0164] It is also possible to use an even simpler interface, which
may still be considered a snap-fit. A groove may be molded into an
outer surface of the frame, with the lens having a protruding
surface, which may be considered a tongue that fits into the
groove. If the groove is semi-cylindrical, such as from about
270° to about 300°, the tongue will snap into the
groove and be firmly retained, with removal still possible through
the gap that remains in the groove. In this embodiment, shown in
FIG. 8b, a lens or replacement lens or cover 826 with a tongue 828
may be inserted into a groove 827 in a frame 825, even though the
lens or cover is not snap-fit into the frame. Because the fit is a
close one, it will act as a snap-fit and securely retain the lens
in the frame.
[0165] In another embodiment, the frame may be made in two pieces,
such as a lower portion and an upper portion, with a conventional
tongue-and-groove fit. In another embodiment, this design may also
use standard fasteners to ensure a tight grip of the lens by the
frame. The design should not require disassembly of anything on the
inside of the frame. Thus, the snap-on or other lens or cover
should be assembled onto the frame, or removed from the frame,
without having to go inside the frame. As noted in other parts of
this disclosure, the augmented reality glasses have many component
parts. Some of the assemblies and subassemblies may require careful
alignment. Moving and jarring these assemblies may be detrimental
to their function, as will moving and jarring the frame and the
outer or snap-on lens or cover.
[0166] In an embodiment, the electro-optics characteristics may be,
but are not limited to, as follows:
TABLE-US-00001 Optic Characteristics and Values
WAVEGUIDE:
  virtual display field of view (diagonal): ~25-30 degrees (equivalent to the FOV of a 24'' monitor viewed at 1 m distance)
  see-through field of view: more than 80 degrees
  eye clearance: more than 18 mm
  material: Zeonex optical plastic
  weight: approx. 15 grams
  waveguide dimensions: 60 × 30 × 10 mm (or 9 mm)
  size: 15.5 mm (diagonal)
  material: PMMA (optical plastics)
  FOV: 53.5° (diagonal)
  active display area: 12.7 mm × 9.0 mm
  resolution: 800 × 600 pixels
VIRTUAL IMAGING SYSTEM:
  type: folded FFS prism
  effective focal length: 15 mm
  exit pupil diameter: 8 mm
  eye relief: 18.25 mm
  F#: 1.875
  number of free-form surfaces: 2-3
AUGMENTED VIEWING SYSTEM:
  type: free-form lens
  number of free-form surfaces: 2
OTHER PARAMETERS:
  wavelength: 656.3-486.1 nm
  field of view: 45° H × 32° V
  vignetting: 0.15 for the top and bottom fields
  distortion: <12% at the maximum field
  image quality: MTF >10% at 30 lp/mm
[0167] In an embodiment, the Projector Characteristics may be as
follows:
TABLE-US-00002 Projector Characteristics and Values
  brightness: adjustable, 0.25-2 lumens
  voltage: 3.6 VDC
  illumination: red, green and blue LEDs
  display: SVGA 800 × 600 dpi Syndiant LCoS display
  power consumption: adjustable, 50 to 250 mW
  target MPE dimensions: approximately 24 mm × 12 mm × 6 mm
  focus: adjustable
  optics housing: 6061-T6 aluminum and glass-filled ABS/PC
  weight: 5 grams
  RGB engine: adjustable color output
  ARCHITECTURE: 2 × 1 GHz processor cores; 633 MHz DSPs; 30M polygons/sec DC graphics accelerator
  IMAGE CORRECTION: real-time sensing; image enhancement; noise reduction; keystone correction; perspective correction
[0168] In another embodiment, an augmented reality eyepiece may
include electrically-controlled lenses as part of the
microprojector or as part of the optics between the microprojector
and the waveguide. FIG. 21 depicts an embodiment with such liquid
lenses 2152.
[0169] The glasses also include at least one camera or optical
sensor 2130 that may furnish an image or images for viewing by the
user. The images are formed by a microprojector 2114 on each side
of the glasses for conveyance to the waveguide 2108 on that side.
In one embodiment, an additional optical element, a variable focus
lens 2152 is also furnished. The lens is electrically adjustable by
the user so that the images seen in the waveguides 2108 are focused
for the user.
[0170] Variable lenses may include the so-called liquid lenses
furnished by Varioptic, S. A., Lyons, France, or by LensVector,
Inc., Mountain View, Calif., U.S.A. Such lenses may include a
central portion with two immiscible liquids. Typically, in these
lenses, the path of light through the lens, i.e., the focal length
of the lens is altered or focused by applying an electric potential
between electrodes immersed in the liquids. At least one of the
liquids is affected by the resulting electric or magnetic field
potential. Thus, electrowetting may occur, as described in U.S.
Pat. Appl. Publ. 2010/0007807, assigned to LensVector, Inc. Other
techniques are described in LensVector Pat. Appl. Publs.
2009/021331 and 2009/0316097. All three of these disclosures are
incorporated herein by reference, as though each page and figures
were set forth verbatim herein.
[0171] Other patent documents from Varioptic, S. A., describe other
devices and techniques for a variable focus lens, which may also
work through an electrowetting phenomenon. These documents include
U.S. Pat. Nos. 7,245,440 and 7,894,440 and U.S. Pat. Appl. Publs.
2010/0177386 and 2010/0295987, each of which is also incorporated
herein by reference, as though each page and figures were set forth
verbatim herein. In these documents, the two liquids typically have
different indices of refraction and different electrical
conductivities, e.g., one liquid is conductive, such as an aqueous
liquid, and the other liquid is insulating, such as an oily liquid.
Applying an electric potential may change the thickness of the lens
and does change the path of light through the lens, thus changing
the focal length of the lens.
[0172] The electrically-adjustable lenses may be controlled by the
controls of the glasses. In one embodiment, a focus adjustment is
made by calling up a menu from the controls and adjusting the focus
of the lens. The lenses may be controlled separately or may be
controlled together. The adjustment is made by physically turning a
control knob, by indicating with a gesture, or by voice command. In
another embodiment, the augmented reality glasses may also include
a rangefinder, and focus of the electrically-adjustable lenses may
be controlled automatically by pointing the rangefinder, such as a
laser rangefinder, to a target or object a desired distance away
from the user.
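A minimal sketch of rangefinder-driven focus control, assuming the variable lens accepts a commanded optical power in diopters and has an illustrative 0-3 diopter range, is shown below.

    def lens_focus_from_range(distance_m, lens_power_limits=(0.0, 3.0)):
        """Convert a rangefinder distance into the optical power (diopters)
        the variable lens should add so the target is in focus:
        power = 1 / distance. The power range is an assumed capability."""
        power = 1.0 / max(distance_m, 0.35)          # clamp very near targets
        lo, hi = lens_power_limits
        return min(max(power, lo), hi)

    print(lens_focus_from_range(2.0))    # 0.5 diopters for a target 2 m away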
[0173] As shown in U.S. Pat. No. 7,894,440, discussed above, the
variable lenses may also be applied to the outer lenses of the
augmented reality glasses or eyepiece. In one embodiment, the
lenses may simply take the place of a corrective lens. The variable
lenses with their electric-adjustable control may be used instead
of or in addition to the image source- or projector-mounted lenses.
The corrective lens inserts provide corrective optics for the
user's environment, the outside world, whether the waveguide
displays are active or not.
[0174] It is important to stabilize the images presented to the
wearer of the augmented reality glasses or eyepiece(s), that is,
the images seen in the waveguide. The view or images presented
travel from one or two digital cameras or sensors mounted on the
eyepiece, to digital circuitry, where the images are processed and,
if desired, stored as digital data before they appear in the
display of the glasses. In any event, and as discussed above, the
digital data is then used to form an image, such as by using an
LCOS display and a series of RGB light emitting diodes. The light
images are processed using a series of lenses, a polarizing beam
splitter, an electrically-powered liquid corrective lens and at
least one transition lens from the projector to the waveguide.
[0175] The process of gathering and presenting images includes
several mechanical and optical linkages between components of the
augmented reality glasses. It seems clear, therefore, that some
form of stabilization will be required. This may include optical
stabilization of the most immediate cause, the camera itself, since
it is mounted on a mobile platform, the glasses, which themselves
are movably mounted on a mobile user. Accordingly, camera
stabilization or correction may be required. In addition, at least
some stabilization or correction should be used for the liquid
variable lens. Ideally, a stabilization circuit at that point could
correct not only for the liquid lens, but also for any aberration
and vibration from many parts of the circuit upstream from the
liquid lens, including the image source. One advantage of the
present system is that many commercial off-the-shelf cameras are
very advanced and typically have at least one image-stabilization
feature or option. Thus, there may be many embodiments of the
present disclosure, each with a same or a different method of
stabilizing an image or a very fast stream of images, as discussed
below. The term optical stabilization is typically used herein with
the meaning of physically stabilizing the camera, camera platform,
or other physical object, while image stabilization refers to data
manipulation and processing.
[0176] One technique of image stabilization is performed on digital
images as they are formed. This technique may use pixels outside
the border of the visible frame as a buffer for the undesired
motion. Alternatively, the technique may use another relatively
steady area or basis in succeeding frames. This technique is
applicable to video cameras, shifting the electronic image from
frame to frame of the video in a manner sufficient to counteract
the motion. This technique does not depend on sensors and directly
stabilizes the images by reducing vibrations and other distracting
motion from the moving camera. In some techniques, the speed of the
images may be slowed in order to add the stabilization process to
the remainder of the digital process, requiring more time per
image. These techniques may use a global motion vector calculated
from frame-to-frame motion differences to determine the direction
of the stabilization.
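The following Python sketch illustrates this frame-to-frame approach, assuming grayscale frames: a global motion vector is found by a brute-force search over small shifts, and the crop window inside the buffer margin is moved to counteract the motion. The margin and search range are illustrative values, not parameters from the disclosure.

    import numpy as np

    def stabilize_frame(frame, prev_frame, margin=16, max_shift=8):
        """Estimate a global motion vector between two grayscale frames and
        return the crop of the new frame (inside the buffer margin) that
        best re-aligns it with the previous frame."""
        h, w = frame.shape
        core_prev = prev_frame[margin:h - margin, margin:w - margin].astype(np.float32)
        best, best_err = (0, 0), None
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                core = frame[margin + dy:h - margin + dy,
                             margin + dx:w - margin + dx].astype(np.float32)
                err = np.mean(np.abs(core - core_prev))      # matching error for this shift
                if best_err is None or err < best_err:
                    best, best_err = (dy, dx), err
        dy, dx = best
        # Return the crop window that best re-aligns this frame with the previous one.
        return frame[margin + dy:h - margin + dy, margin + dx:w - margin + dx]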
[0177] Optical stabilization for images uses a gravity- or
electronically-driven mechanism to move or adjust an optical
element or imaging sensor such that it counteracts the ambient
vibrations. Another way to optically stabilize the displayed
content is to provide gyroscopic correction or sensing of the
platform housing the augmented reality glasses, e.g., the user. As
noted above, the sensors available and used on the augmented
reality glasses or eyepiece include MEMS gyroscopic sensors. These
sensors capture movement and motion in three dimensions in very
small increments and can be used as feedback to correct the images
sent from the camera in real time. It is likely that at least a
large part of the undesired movement is caused by movement of the
user and the camera itself. These larger movements may include
gross movements of the user, e.g., walking or running, or riding in
a vehicle. Smaller vibrations may also result
within the augmented reality eyeglasses, that is, vibrations in the
components in the electrical and mechanical linkages that form the
path from the camera (input) to the image in the waveguide
(output). These gross movements may be more important to correct or
to account for, rather than, for instance, independent and small
movements in the linkages of components downstream from the
projector.
[0178] Motion sensing may thus be used to sense the motion and
correct for it, as in optical stabilization, or to sense the motion
and then correct the images that are being taken and processed, as
in image stabilization. An apparatus for sensing motion and
correcting the images or the data is depicted in FIG. 34A. In this
apparatus, one or more kinds of motion sensors may be used,
including accelerometers, angular position sensors or gyroscopes,
such as MEMS gyroscopes. Data from the sensors is fed back to the
appropriate sensor interfaces, such as analog to digital converters
(ADCs) or other suitable interface, such as digital signal
processors (DSPs). A microprocessor then processes this
information, as discussed above, and sends image-stabilized frames
to the display driver and then to the see-through display or
waveguide discussed above. In one embodiment, the display begins
with the RGB display in the microprojector of the augmented reality
eyepiece.
[0179] In another embodiment, a video sensor or augmented reality
glasses, or other device with a video sensor may be mounted on a
vehicle. In this embodiment, the video stream may be communicated
through a telecommunication capability or an Internet capability to
personnel in the vehicle. One application could be sightseeing or
touring of an area. Another embodiment could be exploring or
reconnaissance, or even patrolling, of an area. In these
embodiments, gyroscopic stabilization of the image sensor would be
helpful, rather than applying a gyroscopic correction to the images
or digital data representing the images. An embodiment of this
technique is depicted in FIG. 34B. In this technique, a camera or
image sensor 3407 is mounted on a vehicle 3401. One or more motion
sensors 3406, such as gyroscopes, are mounted in the camera
assembly 3405. A stabilizing platform 3403 receives information
from the motion sensors and stabilizes the camera assembly 3405, so
that jitter and wobble are minimized while the camera operates.
This is true optical stabilization. Alternatively, the motion
sensors or gyroscopes may be mounted on or within the stabilizing
platform itself. This technique would actually provide optical
stabilization, stabilizing the camera or image sensor, in contrast
to digital stabilization, correcting the image afterwards by
computer processing of the data taken by the camera.
[0180] In one technique, the key to optical stabilization is to
apply the stabilization or correction before an image sensor
converts the image into digital information. In one technique,
feedback from sensors, such as gyroscopes or angular velocity
sensors, is encoded and sent to an actuator that moves the image
sensor, much as an autofocus mechanism adjusts a focus of a lens.
The image sensor is moved in such a way as to maintain the
projection of the image onto the image plane, which is a function
of the focal length of the lens being used. Autoranging and focal
length information, perhaps from a range finder of the interactive
head-mounted eyepiece, may be acquired through the lens itself. In
another technique, angular velocity sensors, sometimes also called
gyroscopic sensors, can be used to detect horizontal and vertical
movements. The motion detected may then be fed back to
electromagnets to move a floating lens of the camera. This optical
stabilization technique, however, would have to be applied to each
lens contemplated, making the result rather expensive.
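By way of a non-limiting illustration, the following sketch (in
Python, with assumed focal length and rotation values) shows the
lateral displacement an actuator would need to apply to the image
sensor to hold the projected image fixed on the image plane for a
small camera rotation, as a function of the focal length of the lens
in use:

# Illustrative sensor-shift calculation for optical stabilization.
import math

def sensor_shift_mm(focal_length_mm, rotation_rad):
    """Return the lateral sensor shift needed to cancel a small camera rotation."""
    return focal_length_mm * math.tan(rotation_rad)

# Example: a 4 mm lens and a 0.5 degree jitter require roughly a 35 micrometer shift.
shift = sensor_shift_mm(4.0, math.radians(0.5))
print(f"required shift: {shift * 1000:.1f} micrometers")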
[0181] Stabilization of the liquid lens is discussed in U.S. Pat.
Appl. Publ. 2010/0295987, assigned to Varioptic, S. A., Lyon,
France. In theory, control of a liquid lens is relatively simple,
since there is only one variable to control: the level of voltage
applied to the electrodes in the conducting and non-conducting
liquids of the lens, using, for example, the lens housing and the
cap as electrodes. Applying a voltage causes a change or tilt in
the liquid-liquid interface via the electrowetting effect. This
change or tilt adjusts the focus or output of the lens. In its most
basic terms, a control scheme with feedback would then apply a
voltage and determine the effect of the applied voltage on the
result, i.e., a focus or an astigmatism of the image. The voltages
may be applied in patterns, for example, equal and opposite + and -
voltages, both positive voltages of differing magnitude, both
negative voltages of differing magnitude, and so forth. Such lenses
are known as electrically variable optic lenses or electro-optic
lenses.
[0182] Voltages may be applied to the electrodes in patterns for a
short period of time and a check on the focus or astigmatism made.
The check may be made, for instance, by an image sensor. In
addition, sensors on the camera, or in this case the lens, may
detect motion of the camera or lens. Motion sensors would include
accelerometers, gyroscopes, angular velocity sensors or
piezoelectric sensors mounted on the liquid lens or a portion of
the optic train very near the liquid lens. In one embodiment, a
table, such as a calibration table, is then constructed of voltages
applied and the degree of correction or voltages needed for given
levels of movement. More sophistication may also be added, for
example, by using segmented electrodes in different portions of the
liquid so that four voltages may be applied rather than two. With
four electrodes, the voltages may be applied in many more patterns
than with only two electrodes. These patterns
may include equal and opposite positive and negative voltages to
opposite segments, and so forth. An example is depicted in FIG.
34C. Four electrodes 3409 are mounted within a liquid lens housing
(not shown). Two electrodes are mounted in or near the
non-conducting liquid and two are mounted in or near the conducting
liquid. Each electrode is independent in terms of the possible
voltage that may be applied.
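By way of a non-limiting illustration, the following sketch (in
Python, with assumed electrode labels and voltage values) shows how
voltage patterns, including the equal-and-opposite pattern mentioned
above, might be applied to four independently driven electrode
segments:

# Illustrative voltage patterns for four segmented liquid-lens electrodes.
# Electrode names and voltage magnitudes are assumed for illustration only.
VOLTAGE_PATTERNS = {
    "neutral":             {"top": 0.0, "bottom": 0.0, "left": 0.0, "right": 0.0},
    "tilt_left_right":     {"top": 0.0, "bottom": 0.0, "left": +5.0, "right": -5.0},
    "tilt_up_down":        {"top": +5.0, "bottom": -5.0, "left": 0.0, "right": 0.0},
    "focus_both_positive": {"top": +3.0, "bottom": +1.0, "left": +3.0, "right": +1.0},
}

def apply_pattern(name, drive):
    """Send one voltage per electrode segment to the (assumed) drive function."""
    for electrode, volts in VOLTAGE_PATTERNS[name].items():
        drive(electrode, volts)

# Example with a stand-in drive function that just prints the command.
apply_pattern("tilt_left_right", lambda e, v: print(f"{e}: {v:+.1f} V"))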
[0183] Look-up or calibration tables may be constructed and placed
in the memory of the augmented reality glasses. In use, the
accelerometer or other motion sensor will sense the motion of the
glasses, i.e., the camera on the glasses or the lens itself. A
motion sensor such as an accelerometer will sense, in particular,
small vibration-type motions that interfere with smooth delivery of
images to the waveguide. In one embodiment, the image stabilization
techniques described here can be applied to the
electrically-controllable liquid lens so that the image from the
projector is corrected immediately. This will stabilize the output
of the projector, at least partially correcting for the vibration
and movement of the augmented reality eyepiece, as well as at least
some movement by the user. There may also be a manual control for
adjusting the gain or other parameter of the corrections. Note that
this technique may also be used to correct for near-sightedness or
far-sightedness of the individual user, in addition to the focus
adjustment already provided by the image sensor controls and
discussed as part of the adjustable-focus projector.
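By way of a non-limiting illustration, the following sketch (in
Python, with assumed calibration values and sensor readings) shows
one form such a runtime correction loop might take: reading a motion
sensor, looking up a correction voltage in a calibration table
stored in the glasses' memory, and applying a manually adjustable
gain:

# Illustrative lookup-table correction for the electrically controllable lens.
import bisect

# Calibration table: sensed movement (arbitrary accelerometer units) -> voltage.
MOVEMENT_LEVELS = [0.0, 0.5, 1.0, 2.0, 4.0]
CORRECTION_VOLTS = [0.0, 0.4, 0.9, 1.8, 3.5]

def correction_for(movement):
    """Return the correction voltage for the first table entry at or above the movement."""
    i = min(bisect.bisect_left(MOVEMENT_LEVELS, movement), len(MOVEMENT_LEVELS) - 1)
    return CORRECTION_VOLTS[i]

def stabilization_step(read_accelerometer, set_lens_voltage, gain=1.0):
    """One iteration of the stabilization loop; gain may be adjusted manually."""
    movement = abs(read_accelerometer())
    set_lens_voltage(gain * correction_for(movement))

# Example with stand-in sensor and lens-driver functions.
stabilization_step(lambda: 0.7, lambda v: print(f"lens voltage: {v:.2f} V"))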
[0184] Another variable focus element uses tunable liquid crystal
cells to focus an image. These are disclosed, for example, in U.S.
Pat. Appl. Publ. Nos. 2009/0213321, 2009/0316097 and 2010/0007807,
which are hereby incorporated by reference in their entirety and
relied on. In this method, a liquid crystal material is contained
within a transparent cell, preferably with a matching index of
refraction. The cell includes transparent electrodes, such as those
made from indium tin oxide (ITO). Using one spiral-shaped
electrode, and a second spiral-shaped electrode or a planar
electrode, a spatially non-uniform electric field is applied.
Electrodes of other shapes may be used. The shape of the electric
field determines the rotation of molecules in the liquid crystal
cell to achieve a change in refractive index and thus a focus of
the lens. The liquid crystals can thus be electrically manipulated
to change their index of refraction, making the tunable liquid
crystal cell act as a lens.
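By way of a non-limiting illustration, a commonly used thin-lens
approximation for such a gradient-index liquid crystal lens (not
taken from the cited publications, and with assumed numerical
values) relates the focal length to the aperture radius r, the cell
thickness d, and the center-to-edge index difference delta_n as
f = r^2 / (2 * d * delta_n):

# Illustrative focal-length estimate for a gradient-index liquid crystal lens.
def lc_lens_focal_length_mm(aperture_radius_mm, cell_thickness_mm, delta_n):
    return aperture_radius_mm ** 2 / (2.0 * cell_thickness_mm * delta_n)

# Example: a 2 mm radius aperture, a 0.05 mm (50 micrometer) cell, and an index
# difference of 0.2 give a focal length of about 200 mm.
print(lc_lens_focal_length_mm(2.0, 0.05, 0.2))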
[0185] In a first embodiment, a tunable liquid crystal cell 3420 is
depicted in FIG. 34D. The cell includes an inner layer of liquid
crystal 3421 and thin layers 3423 of orienting material such as
polyimide. This material helps to orient the liquid crystals in a
preferred direction. Transparent electrodes 3425 are on each side
of the orienting material. An electrode may be planar, or may be
spiral shaped as shown on the right in FIG. 34D. Transparent glass
substrates 3427 contain the materials within the cell. The
electrodes are formed so that they will lend shape to the electric
field. As noted, a spiral-shaped electrode on one or both sides,
such that the two are not symmetrical, is used in one embodiment. A
second embodiment is depicted in FIG. 34E. Tunable liquid crystal
cell 3430 includes central liquid crystal material 3431,
transparent glass substrate walls 3433, and transparent electrodes.
Bottom electrode 3435 is planar, while top electrode 3437 is in the
shape of a spiral. Transparent electrodes may be made of indium tin
oxide (ITO).
[0186] Additional electrodes may be used for quick reversion of the
liquid crystal to a non-shaped or natural state. A small control
voltage is thus used to dynamically change the refractive index of
the material the light passes through. The voltage generates a
spatially non-uniform electric field of a desired shape, allowing
the liquid crystal to function as a lens.
[0187] In one embodiment, the camera includes the black silicon,
short wave infrared (SWIR) CMOS sensor described elsewhere in this
patent. In another embodiment, the camera is a 5 megapixel (MP)
optically-stabilized video sensor. In one embodiment, the controls
include a 3 GHz microprocessor or microcontroller, and may also
include a 633 MHz digital signal processor with a 30 M
polygon/second graphic accelerator for real-time image processing
for images from the camera or video sensor. In one embodiment, the
augmented reality glasses may include a wireless internet, radio or
telecommunications capability for wideband, personal area network
(PAN), local area network (LAN), wireless local area network (WLAN)
conforming to IEEE 802.11, or reach-back communications. The
equipment furnished in one embodiment includes a Bluetooth
capability, conforming to IEEE 802.15. In one embodiment, the
augmented reality glasses include an encryption system, such as a
256-bit Advanced Encryption Standard (AES) encryption system or other
suitable encryption program, for secure communications.
[0188] In one embodiment, the wireless telecommunications may
include a capability for a 3G or 4G network and may also include a
wireless internet capability. To provide extended operating life, the
augmented reality eyepiece or glasses may also include at least one
lithium-ion battery and, as discussed above, a recharging
capability. The recharging plug may comprise an AC/DC power
converter and may be capable of using multiple input voltages, such
as 120 or 240 VAC. The controls for adjusting the focus of the
adjustable focus lenses in one embodiment comprise a 2D or 3D
wireless air mouse or other non-contact control responsive to
gestures or movements of the user. A 2D mouse is available from
Logitech, Fremont, Calif., USA. A 3D mouse is described herein;
others, such as the Cideko AVK05 available from Cideko, Taiwan,
R.O.C., may also be used.
[0189] In an embodiment, the eyepiece may comprise electronics
suitable for controlling the optics, and associated systems,
including a central processing unit, non-volatile memory, digital
signal processors, 3-D graphics accelerators, and the like. The
eyepiece may provide additional electronic elements or features,
including inertial navigation systems, cameras, microphones, audio
output, power, communication systems, sensors, stopwatch or
chronometer functions, thermometer, vibratory temple motors, motion
sensor, a microphone to enable audio control of the system, a UV
sensor to enable contrast and dimming with photochromic materials,
and the like.
[0190] In an embodiment, the central processing unit (CPU) of the
eyepiece may be an OMAP 4, with dual 1 GHz processor cores. The CPU
may include a 633 MHz DSP, giving a capability for the CPU of 30
million polygons/second.
[0191] The system may also provide dual micro-SD (secure digital)
slots for provisioning of additional removable non-volatile
memory.
[0192] An on-board camera may provide 1.3 MP color and record up to
60 minutes of video footage. The recorded video may be transferred
wirelessly or using a mini-USB transfer device to off-load
footage.
[0193] The communications system-on-a-chip (SOC) may be capable of
operating with wireless local area networks (WLAN), Bluetooth version
3.0, a GPS receiver, an FM radio, and the like.
[0194] The eyepiece may operate on a 3.6 VDC lithium-ion
rechargeable battery for long battery life and ease of use. An
additional power source may be provided through solar cells on the
exterior of the frame of the system. These solar cells may supply
power and may also be capable of recharging the lithium-ion
battery.
[0195] The total power consumption of the eyepiece may be
approximately 400 mW, but is variable depending on features and
applications used. For example, processor-intensive applications
with significant video graphics demand more power, and will be
closer to 400 mW. Simpler, less video-intensive applications will
use less power. The operation time on a charge also may vary with
application and feature usage.
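By way of a non-limiting illustration, the operation time on a
charge can be estimated as stored energy divided by average load;
the battery capacity below is assumed, while the 3.6 V and
approximately 400 mW figures are those stated above:

# Illustrative battery-life arithmetic with an assumed 1000 mAh cell.
def runtime_hours(capacity_mah, voltage_v, load_mw):
    """Hours of operation = stored energy (mWh) / average load (mW)."""
    return capacity_mah * voltage_v / load_mw

print(runtime_hours(1000, 3.6, 400))   # ~9 hours for a video-intensive load
print(runtime_hours(1000, 3.6, 150))   # ~24 hours for a lighter load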
[0196] The micro-projector illumination engine, also known herein
as the projector, may include multiple light emitting diodes
(LEDs). In order to provide life-like color, Osram red, Cree green,
and Cree blue LEDs are used. These are die-based LEDs. The RGB
engine may provide an adjustable color output, allowing a user to
optimize viewing for various programs and applications.
[0197] In embodiments, illumination may be added to the glasses or
controlled through various means. For example, LED lights or other
lights may be embedded in the frame of the eyepiece, such as in the
nose bridge, around the composite lens, or at the temples.
[0198] The intensity of the illumination and/or the color of
illumination may be modulated. Modulation may be accomplished
through the various control technologies described herein, through
various applications, or through filtering and magnification.
[0199] By way of example, illumination may be modulated through
various control technologies described herein such as through the
adjustment of a control knob, a gesture, eye movement, or voice
command. If a user desires to increase the intensity of
illumination, the user may adjust a control knob on the glasses or
he may adjust a control knob in the user interface displayed on the
lens or by other means. The user may use eye movements to control
the knob displayed on the lens or he may control the knob by other
means. The user may adjust illumination through a movement of the
hand or other body movement such that the intensity or color of
illumination changes based on the movement made by the user. Also,
the user may adjust the illumination through a voice command such
as by speaking a phrase requesting increased or decreased
illumination or requesting other colors to be displayed.
Additionally, illumination modulation may be achieved through any
control technology described herein or by other means.
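By way of a non-limiting illustration, the following sketch (in
Python, with assumed event names and step sizes) shows how the
several control modalities described above might be routed to a
single illumination setting:

# Illustrative dispatch of knob, gesture, eye, and voice inputs to illumination.
class IlluminationControl:
    def __init__(self):
        self.intensity = 0.5          # normalized 0.0 - 1.0
        self.color = "white"

    def handle(self, event):
        """Adjust intensity or color based on a user-input event."""
        kind, value = event
        if kind in ("knob", "gesture", "eye"):       # continuous adjustments
            self.intensity = min(1.0, max(0.0, self.intensity + value))
        elif kind == "voice":                        # e.g. "brighter", "dimmer", "red"
            if value == "brighter":
                self.intensity = min(1.0, self.intensity + 0.1)
            elif value == "dimmer":
                self.intensity = max(0.0, self.intensity - 0.1)
            else:
                self.color = value

ctl = IlluminationControl()
ctl.handle(("knob", +0.2))
ctl.handle(("voice", "brighter"))
print(round(ctl.intensity, 2), ctl.color)   # 0.8 white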
[0200] Further, the illumination may be modulated per the
particular application being executed. As an example, an
application may automatically adjust the intensity of illumination
or color of illumination based on the optimal settings for that
application. If the current levels of illumination are not at the
optimal levels for the application being executed, a message or
command may be sent to provide for illumination adjustment.
[0201] In embodiments, illumination modulation may be accomplished
through filtering and/or through magnification. For example,
filtering techniques may be employed that allow the intensity
and/or color of the light to be changed such that the optimal or
desired illumination is achieved. Also, in embodiments, the
intensity of the illumination may be modulated by applying greater
or less magnification to reach the desired illumination
intensity.
[0202] The projector may be connected to the display to output the
video and other display elements to the user. The display used may
be an SVGA 800.times.600 pixel SYNDIANT liquid crystal on
silicon (LCoS) display.
[0203] The target MPE dimensions for the system may be 24
mm.times.12 mm.times.6 mm.
[0204] The focus may be adjustable, allowing a user to refine the
projector output to suit their needs.
[0205] The optics system may be contained within a housing
fabricated from 6061-T6 aluminum and glass-filled ABS/PC.
[0206] The weight of the system, in an embodiment, is estimated to
be 3.75 ounces, or 95 grams.
[0207] In an embodiment, the eyepiece and associated electronics
provide night vision capability. This night vision capability may
be enabled by a black silicon SWIR sensor. Black silicon is a
complementary metal-oxide-semiconductor (CMOS) processing technique that
enhances the photo response of silicon over 100 times. The spectral
range is expanded deep into the short wave infra-red (SWIR)
wavelength range. In this technique, a 300 nm deep absorbing and
anti-reflective layer is added to the glasses. This layer offers
improved responsivity as shown in FIG. 11, where the responsivity
of black silicon is much greater than silicon's over the visible
and NIR ranges and extends well into the SWIR range. This
technology is an improvement over current technology, which suffers
from extremely high cost, performance issues, as well as high
volume manufacturability problems. Incorporating this technology
into night vision optics brings the economic advantages of CMOS
technology into the design.
[0208] These advantages include using active illumination only when
needed. In some instances there may be sufficient natural
illumination at night, such as during a full moon. When such is the
case, artificial night vision using active illumination may not be
necessary. With black silicon CMOS-based SWIR sensors, active
illumination may not be needed during these conditions, and is not
provided, thus improving battery life.
[0209] In addition, a black silicon image sensor may have over
eight times the signal-to-noise ratio found in costly indium
gallium arsenide image sensors under night sky conditions.
Better resolution is also provided by this technology, offering
much higher resolution than available using current technology for
night vision. Typically, long wavelength images produced by
CMOS-based SWIR have been difficult to interpret, having good heat
detection, but poor resolution. This problem is solved with a black
silicon SWIR image sensor, which relies on much shorter
wavelengths. SWIR is highly desirable for battlefield night vision
glasses for these reasons. FIG. 12 illustrates the effectiveness of
black silicon night vision technology, providing both before and
after images of seeing through a) dust; b) fog, and c) smoke. The
images in FIG. 12 demonstrate the performance of the new
VIS/NIR/SWIR black silicon sensor.
[0210] Previous night vision systems suffered from "blooms" from
bright light sources, such as streetlights. These "blooms" were
particularly strong in image intensifying technology and are also
associated with a loss of resolution. In some cases, cooling
systems are necessary in image intensifying technology systems,
increasing weight and shortening battery power lifespan. FIG. 17
shows the difference in image quality between A) a flexible
platform of uncooled CMOS image sensors capable of VIS/NIR/SWIR
imaging and B) an image intensified night vision system.
[0211] FIG. 13 depicts the difference in structure between current
or incumbent vision enhancement technology and uncooled CMOS image
sensors. The incumbent platform (FIG. 13A) limits deployment
because of cost, weight, power consumption, spectral range, and
reliability issues. Incumbent systems 1300 are typically comprised
of a front lens 1301, photocathode 1302, micro channel plate 1303,
high voltage power supply 1304, phosphorous screen 1305, and
eyepiece 1306. This is in contrast to a flexible platform (FIG.
13B) of uncooled CMOS image sensors 1307 capable of VIS/NIR/SWIR
imaging at a fraction of the cost, power consumption, and weight.
These much simpler sensors include a front lens 1308 and an image
sensor 1309 with a digital image output.
[0212] These advantages derive from the CMOS compatible processing
technique that enhances the photo response of silicon over 100
times and extends the spectral range deep into the short wave
infrared region. The difference in responsivity is illustrated in
FIG. 13C. While typical night vision goggles are limited to the UV,
visible, and near infrared (NIR) ranges, to about 1100 nm (1.1
micrometers), the newer CMOS image sensor ranges also include the
short wave infrared (SWIR) spectrum, out to as much as 2000 nm (2
micrometers).
[0213] The black silicon core technology may offer significant
improvement over current night vision glasses. Femtosecond laser
doping may enhance the light detection properties of silicon across
a broad spectrum. Additionally, optical response may be improved by
a factor of 100 to 10,000. The black silicon technology is a fast,
scalable, and CMOS compatible technology at a very low cost,
compared to current night vision systems. Black silicon technology
may also provide a low operation bias, with 3.3 V typical. In
addition, uncooled performance may be possible up to 50.degree. C.
Cooling requirements of current technology increase both weight and
power consumption, and also create discomfort in users. As noted
above, the black silicon core technology offers a high-resolution
replacement for current image intensifier technology. Black silicon
core technology may provide high speed electronic shuttering at
speeds up to 1000 frames/second with minimal cross talk. In certain
embodiments of the night vision eyepiece, an OLED display may be
preferred over other optical displays, such as the LCoS
display.
[0214] Further advantages of the eyepiece may include robust
connectivity. This connectivity enables download and transmission
using Bluetooth, Wi-Fi/Internet, cellular, satellite, 3G, FM/AM,
TV, and UWB (ultra-wideband) transceivers.
[0215] The eyepiece may provide its own cellular connectivity, such
as through a personal wireless connection with a cellular system.
The personal wireless connection may be available for only the
wearer of the eyepiece, or it may be available to a plurality of
proximate users, such as in a Wi-Fi hot spot (e.g. MiFi), where the
eyepiece provides a local hotspot for others to utilize. These
proximate users may be other wearers of an eyepiece, or users of
some other wireless computing device, such as a mobile
communications facility (e.g. mobile phone). Through this personal
wireless connection, the wearer may not need other cellular or
Internet wireless connections to connect to wireless services. For
instance, without a personal wireless connection integrated into
the eyepiece, the wearer may have to find a WiFi connection point
or tether to their mobile communications facility in order to
establish a wireless connection. In embodiments, the eyepiece may
be able to replace the need for having a separate mobile
communications device, such as a mobile phone, mobile computer, and
the like, by integrating these functions and user interfaces into
the eyepiece. For instance, the eyepiece may have an integrated
WiFi connection or hotspot, a real or virtual keyboard interface, a
USB hub, speakers (e.g. to stream music to) or speaker input
connections, integrated camera, external camera, and the like. In
embodiments, an external device, in connectivity with the eyepiece,
may provide a single unit with a personal network connection (e.g.
WiFi, cellular connection), keyboard, control pad (e.g. a touch
pad), and the like.
[0216] The eyepiece may include MEMS-based inertial navigation
systems, such as a GPS processor, an accelerometer (e.g. for
enabling head control of the system and other functions), a
gyroscope, an altimeter, an inclinometer, a speedometer/odometer, a
laser rangefinder, and a magnetometer, which also enables image
stabilization.
[0217] The eyepiece may include integrated headphones, such as the
articulating earbud 120, that provide audio output to the user or
wearer.
[0218] In an embodiment, a forward facing camera (see FIG. 21)
integrated with the eyepiece may enable basic augmented reality. In
augmented reality, a viewer can image what is being viewed and then
layer an augmented, edited, tagged, or analyzed version on top of
the basic view. In the alternative, associated data may be
displayed with or over the basic image. If two cameras are provided
and are mounted at the correct interpupillary distance for the
user, stereo video imagery may be created. This capability may be
useful for persons requiring vision assistance. Many people suffer
from deficiencies in their vision, such as near-sightedness,
far-sightedness, and so forth. A camera and a very close, virtual
screen as described herein provide a "video" for such persons, the
video adjustable in terms of focal point, nearer or farther, and
fully controllable by the person via voice or other command. This
capability may also be useful for persons suffering diseases of the
eye, such as cataracts, retinitis pigmentosa, and the like. So long
as some organic vision capability remains, an augmented reality
eyepiece can help a person see more clearly. Embodiments of the
eyepiece may feature one or more of magnification, increased
brightness, and the ability to map content to the areas of the eye
that are still healthy. Embodiments of the eyepiece may be used as
bifocals or a magnifying glass. The wearer may be able to increase
zoom in the field of view or increase zoom within a partial field
of view. In an embodiment, an associated camera may make an image
of the object and then present the user with a zoomed picture. A
user interface may allow a wearer to point at the area that he
wants zoomed, such as with the control techniques described herein,
so the image processing can stay on task as opposed to just zooming
in on everything in the camera's field of view.
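By way of a non-limiting illustration, the following sketch (in
Python, with assumed image dimensions and an assumed pointing input)
shows one way the image processing might zoom only the indicated
region rather than the camera's whole field of view:

# Illustrative region-of-interest zoom: crop around the pointed-at location
# and enlarge the crop back to roughly the original frame size.
import numpy as np

def zoom_region(image, center_xy, zoom=2.0):
    """Crop a window around center_xy and enlarge it by the zoom factor."""
    h, w = image.shape[:2]
    cx, cy = center_xy
    half_w, half_h = int(w / (2 * zoom)), int(h / (2 * zoom))
    x0, x1 = max(0, cx - half_w), min(w, cx + half_w)
    y0, y1 = max(0, cy - half_h), min(h, cy + half_h)
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbor enlargement back to (roughly) the original size.
    return np.kron(crop, np.ones((int(zoom), int(zoom))))

frame = np.arange(480 * 640, dtype=float).reshape(480, 640)
zoomed = zoom_region(frame, center_xy=(320, 240), zoom=2.0)
print(zoomed.shape)   # approximately the original frame size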
[0219] A rear-facing camera (not shown) may also be incorporated
into the eyepiece in a further embodiment. In this embodiment, the
rear-facing camera may enable eye control of the eyepiece, with the
user making application or feature selection by directing his or
her eyes to a specific item displayed on the eyepiece.
[0220] The camera may be a microcassegrain telescoping folded optic
camera built into the device. The microcassegrain telescoping folded
optic camera may be mounted on a handheld device, such as the
bio-print device or the bio-phone, and could also be mounted on
glasses used as part of a bio-kit to collect biometric data.
[0221] A cassegrain reflector is a combination of a primary concave
mirror and a secondary convex mirror. These reflectors are often
used in optical telescopes and radio antennas because they deliver
good light (or sound) collecting capability in a shorter, smaller
package.
[0222] In a symmetrical cassegrain both mirrors are aligned about
the optical axis, and the primary mirror usually has a hole in the
center, allowing light to reach the eyepiece or a camera chip or
light detection device, such as a CCD chip. An alternate design,
often used in radio telescopes, places the final focus in front of
the primary reflector. A further alternate design may tilt the
mirrors to avoid obstructing the primary or secondary mirror and
may eliminate the need for a hole in the primary mirror or
secondary mirror. The microcassegrain telescoping folded optic
camera may use any of the above variations, with the final
selection determined by the desired size of the optic device.
[0223] The classic cassegrain configuration uses a parabolic
reflector as the primary mirror and a hyperbolic mirror as the
secondary mirror. Further embodiments of the microcassegrain
telescoping folded optic camera may use a hyperbolic primary mirror
and/or a spherical or elliptical secondary mirror. In operation the
classic cassegrain with a parabolic primary mirror and a hyperbolic
secondary mirror reflects the light back down through a hole in the
primary, as shown in FIG. 35. Folding the optical path makes the
design more compact, and in a "micro" size, suitable for use with
the bio-print sensor and bio-print kit described in co-pending
application titled "Local Advertising Content on an Interactive
Head-Mounted Eyepiece" filed on Feb. 28, 2011. In a folded optic
system, the beam is bent to make the optical path much longer than
the physical length of the system. One common example of folded
optics is prismatic binoculars. In a camera lens the secondary
mirror may be mounted on an optically flat, optically clear glass
plate that closes the lens tube. This support eliminates
"star-shaped" diffraction effects that are caused by a
straight-vaned support spider. This allows for a sealed closed tube
and protects the primary mirror, albeit at some loss of light
collecting power.
[0224] The cassegrain design also makes use of the special
properties of parabolic and hyperbolic reflectors. A concave
parabolic reflector will reflect all incoming light rays parallel
to its axis of symmetry to a single focus point. A convex
hyperbolic reflector has two foci and reflects all light rays
directed at one focus point toward the other focus point. Mirrors
in this type of lens are designed and positioned to share one
focus, placing the second focus of the hyperbolic mirror at the
same point as where the image is observed, usually just outside the
eyepiece. The parabolic mirror reflects parallel light rays
entering the lens to its focus, which is coincident with the focus
of the hyperbolic mirror. The hyperbolic mirror then reflects those
light rays to the other focus point, where the camera records the
image.
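By way of a non-limiting illustration, treating the two mirrors as
thin elements and using assumed focal lengths and spacing, the
standard two-element combination formula 1/F = 1/f1 + 1/f2 -
d/(f1*f2) shows how folding the optical path yields an effective
focal length much longer than the physical tube:

# Illustrative effective-focal-length estimate for a two-mirror Cassegrain.
def cassegrain_efl_mm(f_primary_mm, f_secondary_mm, separation_mm):
    """Effective focal length of a two-mirror system treated as thin elements.

    f_primary_mm   -- focal length of the concave primary (positive)
    f_secondary_mm -- focal length of the convex secondary (negative)
    separation_mm  -- distance between the two mirrors
    """
    power = (1.0 / f_primary_mm + 1.0 / f_secondary_mm
             - separation_mm / (f_primary_mm * f_secondary_mm))
    return 1.0 / power

# Example: a 100 mm primary and a -25 mm secondary spaced 80 mm apart give an
# effective focal length of about 500 mm in a tube only on the order of 100 mm long.
print(cassegrain_efl_mm(100.0, -25.0, 80.0))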
[0225] FIG. 36 shows the configuration of the microcassegrain
telescoping folded optic camera. The camera may be mounted on
augmented reality glasses, a bio-phone, or other biometric
collection device. The assembly 3600 has multiple telescoping
segments that allow the camera to extend with cassegrain optics
providing for a longer optical path. Threads 3602 allow the camera
to be mounted on a device, such as augmented reality glasses or
other biometric collection device. While the embodiment depicted in
FIG. 36 uses threads, other mounting schemes such as bayonet mount,
knobs, or press-fit, may also be used. A first telescoping section
3604 also acts as an external housing when the lens is in the fully
retracted position. The camera may also incorporate a motor to
drive the extension and retraction of the camera. A second
telescoping section 3606 may also be included. Other embodiments
may incorporate varying numbers of telescoping sections, depending
on the length of optical path needed for the selected task or data
to be collected. A third telescoping section 3608 includes the lens
and a reflecting mirror. The reflecting mirror may be a primary
reflector if the camera is designed following classic cassegrain
design. The secondary mirror may be contained in first telescoping
section 3604.
[0226] Further embodiments may utilize microscopic mirrors to form
the camera, while still providing for a longer optical path through
the use of folded optics. The same principles of cassegrain design
are used.
[0227] Lens 3610 provides optics for use in conjunction with the
folded optics of the cassegrain design. The lens 3610 may be
selected from a variety of types, and may vary depending on the
application. The threads 3602 permit a variety of cameras to be
interchanged depending on the needs of the user.
[0228] Eye control of feature and option selection may be
controlled and activated by object recognition software loaded on
the system processor. Object recognition software may enable
augmented reality, combine the recognition output with querying a
database, combine the recognition output with a computational tool
to determine dependencies/likelihoods, and the like.
[0229] Three-dimensional viewing is also possible in an additional
embodiment that incorporates a 3D projector. Two stacked
picoprojectors (not shown) may be used to create the three
dimensional image output.
[0230] Referring to FIG. 10, a plurality of digital CMOS sensors
with redundant microprocessors and DSPs for each sensor array and
projector detect visible, near infrared, and short wave infrared light to
enable passive day and night operations, such as real-time image
enhancement 1002, real-time keystone correction 1004, and real-time
virtual perspective correction 1008.
[0231] The augmented reality eyepiece or glasses may be powered by
any stored energy system, such as battery power, solar power, line
power, and the like. A solar energy collector may be placed on the
frame, on a belt clip, and the like. Battery charging may occur
using a wall charger, car charger, on a belt clip, in a glasses
case, and the like. In one embodiment, the eyepiece may be
rechargeable and be equipped with a mini-USB connector for
recharging. In another embodiment, the eyepiece may be equipped for
remote inductive recharging by one or more remote inductive power
conversion technologies, such as those provided by Powercast,
Ligonier, Pa., USA; and Fulton Int'l. Inc., Ada, Mich., USA, which
also owns another provider, Splashpower, Inc., Cambridge, UK.
[0232] The augmented reality eyepiece also includes a camera and
any interface necessary to connect the camera to the circuit. The
output of the camera may be stored in memory and may also be
displayed on the display available to the wearer of the glasses. A
display driver may also be used to control the display. The
augmented reality device also includes a power supply, such as a
battery, as shown, power management circuits and a circuit for
recharging the power supply. As noted elsewhere, recharging may
take place via a hard connection, e.g., a mini-USB connector, or by
means of an inductor, a solar panel input, and so forth.
[0233] The control system for the eyepiece or glasses may include a
control algorithm for conserving power when the power source, such
as a battery, indicates low power. This conservation algorithm may
include shutting power down to applications that are energy
intensive, such as lighting, a camera, or sensors that require high
levels of energy, such as any sensor requiring a heater, for
example. Other conservation steps may include slowing down the
power used for a sensor or for a camera, e.g., going to a slower
sampling or frame rate when the power is low, or shutting down the
sensor or camera at an even lower level. Thus, there may be at
least three operating modes
depending on the available power: a normal mode; a conserve power
mode; and an emergency or shutdown mode.
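By way of a non-limiting illustration, the following sketch (in
Python, with assumed battery thresholds, frame rates, and stand-in
device objects) shows one way the three operating modes might be
selected and applied:

# Illustrative power-conservation control: pick a mode from the battery level,
# then slow or shut down energy-intensive features accordingly.
def select_mode(battery_fraction):
    if battery_fraction > 0.30:
        return "normal"
    if battery_fraction > 0.10:
        return "conserve"
    return "emergency"

def apply_mode(mode, camera, lighting):
    if mode == "normal":
        camera.set_frame_rate(30)
        lighting.enable()
    elif mode == "conserve":
        camera.set_frame_rate(10)   # slower sampling/frame rate to save power
        lighting.disable()          # shut down energy-intensive features first
    else:                           # emergency / shutdown
        camera.power_off()
        lighting.disable()

class StubDevice:
    """Stand-in for a camera or lighting subsystem; real device drivers are assumed."""
    def __init__(self, name):
        self.name = name
    def set_frame_rate(self, fps):
        print(f"{self.name}: {fps} fps")
    def enable(self):
        print(f"{self.name}: on")
    def disable(self):
        print(f"{self.name}: off")
    def power_off(self):
        print(f"{self.name}: powered off")

apply_mode(select_mode(0.22), StubDevice("camera"), StubDevice("lighting"))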
[0234] Applications of the present disclosure may be controlled
through movements and direct actions of the wearer, such as
movement of his or her hand, finger, feet, head, eyes, and the
like, enabled through facilities of the eyepiece (e.g.
accelerometers, gyros, cameras, optical sensors, GPS sensors, and
the like) and/or through facilities worn or mounted on the wearer
(e.g. body mounted sensor control facilities). In this way, the
wearer may directly control the eyepiece through movements and/or
actions of their body without the use of a traditional hand-held
remote controller. For instance, the wearer may have a sense
device, such as a position sense device, mounted on one or both
hands, such as on at least one finger, on the palm, on the back of
the hand, and the like, where the position sense device provides
position data of the hand, and provides wireless communications of
position data as command information to the eyepiece. In
embodiments, the sense device of the present disclosure may include
a gyroscopic device (e.g. electronic gyroscope, MEMS gyroscope,
mechanical gyroscope, quantum gyroscope, ring laser gyroscope,
fiber optic gyroscope), accelerometers, MEMS accelerometers,
velocity sensors, force sensors, optical sensors, proximity sensor,
RFID, and the like, in the providing of position information. For
example, a wearer may have a position sense device mounted on their
right index finger, where the device is able to sense motion of the
finger. In this example, the user may activate the eyepiece either
through some switching mechanism on the eyepiece or through some
predetermined motion sequence of the finger, such as moving the
finger quickly, tapping the finger against a hard surface, and the
like. Note that tapping against a hard surface may be interpreted
through sensing by accelerometers, force sensors, and the like. The
position sense device may then transmit motions of the finger as
command information, such as moving the finger in the air to move a
cursor across the displayed or projected image, moving in quick
motion to indicate a selection, and the like. In embodiments, the
position sense device may send sensed command information directly
to the eyepiece for command processing, or the command processing
circuitry may be co-located with the position sense device, such as
in this example, mounted on the finger as part of an assembly
including the sensors of the position sense device.
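By way of a non-limiting illustration, the following sketch (in
Python, with assumed acceleration thresholds and an assumed command
vocabulary) shows how a finger-mounted sense device might recognize
single and double taps against a hard surface and translate them
into command information for the eyepiece:

# Illustrative tap detection from accelerometer magnitude samples.
TAP_THRESHOLD_G = 3.0        # assumed acceleration spike that counts as a tap
DOUBLE_TAP_WINDOW_S = 0.4    # assumed maximum gap between taps of a double tap

def detect_taps(samples, sample_rate_hz):
    """Return tap timestamps (seconds) from a list of acceleration magnitudes in g."""
    taps, last = [], -1.0
    for i, g in enumerate(samples):
        t = i / sample_rate_hz
        if g >= TAP_THRESHOLD_G and t - last > 0.05:   # 50 ms debounce
            taps.append(t)
            last = t
    return taps

def to_command(taps):
    """Map tap patterns to commands sent to the eyepiece (vocabulary is assumed)."""
    if len(taps) >= 2 and taps[1] - taps[0] <= DOUBLE_TAP_WINDOW_S:
        return "SELECT"          # quick double tap indicates a selection
    if len(taps) == 1:
        return "ACTIVATE"        # single tap wakes or activates the eyepiece
    return None

samples = [0.1, 0.2, 4.5, 0.3, 0.2, 0.1, 5.0, 0.2]
print(to_command(detect_taps(samples, sample_rate_hz=20)))   # SELECT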
[0235] In embodiments, the wearer may have a plurality of position
sense devices mounted on their body. For instance, and in
continuation of the preceding example, the wearer may have position
sense devices mounted on a plurality of points on the hand, such as
with individual sensors on different fingers, or as a collection of
devices, such as in a glove. In this way, the aggregate sense
command information from the collection of sensors at different
locations on the hand may be used to provide more complex command
information. For instance, the wearer may use a sensor device glove
to play a game, where the glove senses the grasp and motion of the
user's hands on a ball, bat, racket, and the like, in the use of
the present disclosure in the simulation and play of a simulated
game. In embodiments, the plurality of position sense devices may
be mounted on different parts of the body, allowing the wearer to
transmit complex motions of the body to the eyepiece for use by an
application.
[0236] In embodiments, the sense device may have a force sensor,
such as for detecting when the sense device comes in contact with
an object. For instance, a sense device may include a force sensor
at the tip of a wearer's finger. In this case, the wearer may tap,
multiple tap, sequence taps, swipe, touch, and the like to generate
a command to the eyepiece. Force sensors may also be used to
indicate degrees of touch, grip, push, and the like, where
predetermined or learned thresholds determine different command
information. In this way, commands may be delivered as a series of
continuous commands that constantly update the command information
being used in an application through the eyepiece. In an example, a
wearer may be running a simulation, such as a game application,
military application, commercial application, and the like, where
the movements and contact with objects, such as through at least
one of a plurality of sense devices, are fed to the eyepiece as
commands that influence the simulation displayed through the
eyepiece.
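By way of a non-limiting illustration, the following sketch (in
Python, with assumed force thresholds) shows how fingertip force
readings might be mapped to graded command information that is
updated continuously as the applied force changes:

# Illustrative mapping of force-sensor readings to graded commands.
FORCE_LEVELS = [          # (minimum force in newtons, command meaning) - assumed
    (5.0, "PUSH"),
    (2.0, "GRIP"),
    (0.5, "TOUCH"),
]

def force_to_command(force_newtons):
    for minimum, command in FORCE_LEVELS:
        if force_newtons >= minimum:
            return command
    return None

# A stream of readings becomes a stream of continuously updated commands.
print([force_to_command(f) for f in (0.2, 0.8, 2.5, 6.0)])
# [None, 'TOUCH', 'GRIP', 'PUSH']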
[0237] In embodiments, the sense device may include an optical
sensor or optical transmitter as a way for movement to be
interpreted as a command. For instance, a sense device may include
an optical sensor mounted on the hand of the wearer, and the
eyepiece housing may include an optical transmitter, such that when
a user moves their hand past the optical transmitter on the
eyepiece, the motions may be interpreted as commands. A motion
detected through an optical sensor may include swiping past at
different speeds, with repeated motions, combinations of dwelling
and movement, and the like. In embodiments, optical sensors and/or
transmitters may be located on the eyepiece, mounted on the wearer
(e.g. on the hand, foot, in a glove, piece of clothing), or used in
combinations between different areas on the wearer and the
eyepiece, and the like.
[0238] In one embodiment, a number of sensors useful for monitoring
the condition of the wearer or a person in proximity to the wearer
are mounted within the augmented reality glasses. Sensors have
become much smaller, thanks to advances in electronics technology.
Signal transducing and signal processing technologies have also
made great progress in the direction of size reduction and
digitization. Accordingly, it is possible to have not merely a
temperature sensor in the AR glasses, but an entire sensor array.
These sensors may include, as noted, a temperature sensor, and also
sensors to detect: pulse rate; beat-to-beat heart variability; EKG
or ECG; respiration rate; core body temperature; heat flow from the
body; galvanic skin response or GSR; EMG; EEG; EOG; blood pressure;
body fat; hydration level; activity level; oxygen consumption;
glucose or blood sugar level; body position; and UV radiation
exposure or absorption. In addition, there may also be a retinal
sensor and a blood oxygenation sensor (such as an SpO.sub.2
sensor), among others. Such sensors are available from a variety of
manufacturers, including Vermed, Bellows Falls, Vt., USA; VTI,
Ventaa, Finland; and ServoFlow, Lexington, Mass., USA.
[0239] In some embodiments, it may be more useful to have sensors
mounted on the person or on equipment of the person, rather than on
the glasses themselves. For example, accelerometers, motion sensors
and vibration sensors may be usefully mounted on the person, on
clothing of the person, or on equipment worn by the person. These
sensors may maintain continuous or periodic contact with the
controller of the AR glasses through a Bluetooth.RTM. radio
transmitter adhering to IEEE 802.15.1 specifications or other radio
device adhering to IEEE 802.11 specifications. For example, if a
physician wishes to monitor
motion or shock experienced by a patient during a foot race, the
sensors may be more useful if they are mounted directly on the
person's skin, or even on a T-shirt worn by the person, rather than
mounted on the glasses. In these cases, a more accurate reading may
be obtained by a sensor placed on the person or on the clothing
rather than on the glasses. Such sensors need not be as tiny as the
sensors which would be suitable for mounting on the glasses
themselves and, as noted, may be more useful.
[0240] The AR glasses or goggles may also include environmental
sensors or sensor arrays. These sensors are mounted on the glasses
and sample the atmosphere or air in the vicinity of the wearer.
These sensors or sensor array may be sensitive to certain
substances or concentrations of substances. For example, sensors
and arrays are available to measure concentrations of carbon
monoxide, oxides of nitrogen ("NO.sub.x"), temperature, relative
humidity, noise level, volatile organic chemicals (VOC), ozone,
particulates, hydrogen sulfide, barometric pressure and ultraviolet
light and its intensity. Vendors and manufacturers include:
Sensares, Crolles, FR; Cairpol, Ales, FR; Critical Environmental
Technologies of Canada, Delta, B.C., Canada; Apollo Electronics
Co., Shenzhen, China; and AV Technology Ltd., Stockport, Cheshire,
UK. Many other sensors are well known. If such sensors are mounted
on the person or on clothing or equipment of the person, they may
also be useful. These environmental sensors may include radiation
sensors, chemical sensors, poisonous gas sensors, and the like.
[0241] In one embodiment, environmental sensors, health monitoring
sensors, or both, are mounted on the frames of the augmented
reality glasses. In another embodiment, the sensors may be mounted
on the person or on clothing or equipment of the person. For
example, a sensor for measuring electrical activity of a heart of
the wearer may be implanted, with suitable accessories for
transducing and transmitting a signal indicative of the person's
heart activity.
[0242] The signal may be transmitted a very short distance via a
Bluetooth.RTM. radio transmitter or other radio device adhering to
IEEE 802.15.1 specifications. Other frequencies or protocols may be
used instead. The signal may then be processed by the
signal-monitoring and processing equipment of the augmented reality
glasses, and recorded and displayed on the virtual screen available
to the wearer. In another embodiment, the signal may also be sent
via the AR glasses to a friend or squad leader of the wearer. Thus,
the health and well-being of the person may be monitored by the
person and by others, and may also be tracked over time.
[0243] In another embodiment, environmental sensors may be mounted
on the person or on equipment of the person. For example, radiation
or chemical sensors may be more useful if worn on outer clothing or
a web-belt of the person, rather than mounted directly on the
glasses. As noted above, signals from the sensors may be monitored
locally by the person through the AR glasses. The sensor readings
may also be transmitted elsewhere, either on demand or
automatically, perhaps at set intervals, such as every quarter-hour
or half-hour. Thus, a history of sensor readings, whether of the
person's body readings or of the environment, may be made for
tracking or trending purposes.
[0244] In an embodiment, an RF/micropower impulse radio (MIR)
sensor may be associated with the eyepiece and serve as a
short-range medical radar. The sensor may operate on an ultra-wide
band. The sensor may include an RF/impulse generator, receiver, and
signal processor, and may be useful for detecting and measuring
cardiac signals by measuring ion flow in cardiac cells within 3 mm
of the skin. The receiver may be a phased array antenna to enable
determining a location of the signal in a region of space. The
sensor may be used to detect and identify cardiac signals through
blockages, such as walls, water, concrete, dirt, metal, wood, and
the like. For example, a user may be able to use the sensor to
determine how many people are located in a concrete structure by
how many heart rates are detected. In another embodiment, a
detected heart rate may serve as a unique identifier for a person
so that they may be recognized in the future. In an embodiment, the
RF/impulse generator may be embedded in one device, such as the
eyepiece or some other device, while the receiver is embedded in a
different device, such as another eyepiece or device. In this way,
a virtual "tripwire" may be created when a heart rate is detected
between the transmitter and receiver. In an embodiment, the sensor
may be used as an in-field diagnostic or self-diagnosis tool. EKGs
may be analyzed and stored for future use as a biometric
identifier. A user may receive alerts of sensed heart rate signals
and how many heart rates are present as displayed content in the
eyepiece.
[0245] FIG. 29 depicts an embodiment of an augmented reality
eyepiece or glasses with a variety of sensors and communication
equipment. One or more than one environmental or health sensors are
connected to a sensor interface locally or remotely through a short
range radio circuit and an antenna, as shown. The sensor interface
circuit includes all devices for detecting, amplifying, processing
and sending on or transmitting the signals detected by the
sensor(s). The remote sensors may include, for example, an
implanted heart rate monitor or other body sensor (not shown). The
other sensors may include an accelerometer, an inclinometer, a
temperature sensor, a sensor suitable for detecting one or more
chemicals or gasses, or any of the other health or environmental
sensors discussed in this disclosure. The sensor interface is
connected to the microprocessor or microcontroller of the augmented
reality device, from which point the information gathered may be
recorded in memory, such as random access memory (RAM) or permanent
memory, read only memory (ROM), as shown.
[0246] In an embodiment, a sense device enables simultaneous
electric field sensing through the eyepiece. Electric field (EF)
sensing is a method of proximity sensing that allows computers to
detect, evaluate and work with objects in their vicinity. Physical
contact with the skin, such as a handshake with another person or
some other physical contact with a conductive or a non-conductive
device or object, may be sensed as a change in an electric field
and either enable data transfer to or from the eyepiece or
terminate data transfer. For example, videos captured by the
eyepiece may be stored on the eyepiece until a wearer of the
eyepiece with an embedded electric field sensing transceiver
touches an object and initiates data transfer from the eyepiece to
a receiver. The transceiver may include a transmitter that includes
a transmitter circuit that induces electric fields toward the body
and a data sense circuit, which distinguishes transmitting and
receiving modes by detecting both transmission and reception data
and outputs control signals corresponding to the two modes to
enable two-way communication. An instantaneous private network
between two people may be generated with a contact, such as a
handshake. Data may be transferred between an eyepiece of a user
and a data receiver or eyepiece of the second user. Additional
security measures may be used to enhance the private network, such
as facial or audio recognition, detection of eye contact,
fingerprint detection, biometric entry, and the like.
[0247] In embodiments, there may be an authentication facility
associated with accessing functionality of the eyepiece, such as
access to displayed or projected content, access to restricted
projected content, enabling functionality of the eyepiece itself
(e.g. as through a login to access functionality of the eyepiece)
either in whole or in part, and the like. Authentication may be
provided through recognition of the wearer's voice, iris, retina,
fingerprint, and the like, or other biometric identifier. The
authentication system may provide for a database of biometric
inputs for a plurality of users such that access control may be
provided for use of the eyepiece based on policies and associated
access privileges for each of the users entered into the database.
The eyepiece may provide for an authentication process. For
instance, the authentication facility may sense when a user has
taken the eyepiece off, and require re-authentication when the user
puts it back on. This better ensures that the eyepiece only
provides access to those users that are authorized, and for only
those privileges that the wearer is authorized for. In an example,
the authentication facility may be able to detect the presence of a
user's eye or head as the eyepiece is put on. In a first level of
access, the user may only be able to access low-sensitivity items
until authentication is complete. During authentication, the
authentication facility may identify the user, and look up their
access privileges. Once these privileges have been determined, the
authentication facility may then provide the appropriate access to
the user. In the case of an unauthorized user being detected, the
eyepiece may maintain access to low-sensitivity items, further
restrict access, deny access entirely, and the like.
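By way of a non-limiting illustration, the following sketch (in
Python, with assumed user records and privilege names) shows one
form the authentication flow might take, granting only
low-sensitivity access until a biometric identifier is matched
against the database of enrolled users and requiring
re-authentication when the eyepiece is removed and put back on:

# Illustrative authentication flow with per-user access privileges.
ENROLLED_USERS = {                       # biometric template -> access privileges
    "iris_template_A": {"user": "alice", "privileges": {"low", "medium", "high"}},
    "iris_template_B": {"user": "bob",   "privileges": {"low"}},
}

class AuthenticationFacility:
    def __init__(self):
        self.privileges = {"low"}        # low-sensitivity items only, pre-authentication

    def authenticate(self, biometric_template):
        record = ENROLLED_USERS.get(biometric_template)
        if record is None:
            self.privileges = set()      # unauthorized user: restrict or deny access
        else:
            self.privileges = record["privileges"]

    def on_eyepiece_removed(self):
        self.privileges = {"low"}        # force re-authentication on next wear

auth = AuthenticationFacility()
auth.authenticate("iris_template_A")
print("high" in auth.privileges)         # True for an authorized user
auth.on_eyepiece_removed()
print("high" in auth.privileges)         # False until re-authentication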
[0248] In an embodiment, a receiver may be associated with an
object to enable control of that object via touch by a wearer of
the eyepiece, wherein touch enables transmission or execution of a
command signal in the object. For example, a receiver may be
associated with a car door lock. When a wearer of the eyepiece
touches the car, the car door may unlock. In another example, a
receiver may be embedded in a medicine bottle. When the wearer of
the eyepiece touches the medicine bottle, an alarm signal may be
initiated. In another example, a receiver may be associated with a
wall along a sidewalk. As the wearer of the eyepiece passes the
wall or touches the wall, advertising may be launched either in the
eyepiece or on a video panel of the wall.
[0249] In an embodiment, when a wearer of the eyepiece initiates a
physical contact, a WiFi exchange of information with a receiver
may provide an indication that the wearer is connected to an online
activity such as a game or may provide verification of identity in
an online environment. In the embodiment, a representation of the
person could change color or undergo some other visual indication
in response to the contact. In embodiments, the eyepiece may
include a tactile interface as in FIG. 14, such as to enable haptic
control of the eyepiece, such as with a swipe, tap, touch, press,
click, roll of a rollerball, and the like. For instance, the
tactile interface 1402 may be mounted on the frame of the eyepiece,
such as on an arm, both arms, the nosepiece, the top of the frame,
the bottom of the frame, and the like. The wearer may then touch
the tactile interface in a plurality of ways to be interpreted by
the eyepiece as commands, such as by tapping one or multiple times
on the interface, by brushing a finger across the interface, by
pressing and holding, by pressing more than one interface at a
time, and the like. In embodiments, the tactile interface may be
attached to the wearer's body, their clothing, as an attachment to
their clothing, as a ring 1500, as a bracelet, as a necklace, and
the like. For example, the interface may be attached on the body,
such as on the back of the wrist, where touching different parts of
the interface provides different command information (e.g. touching
the front portion, the back portion, the center, holding for a
period of time, tapping, swiping, and the like). In another
example, the wearer may have an interface mounted in a ring as
shown in FIG. 15, a hand piece, and the like, where the interface
may have at least one of a plurality of command interface types,
such as a tactile interface, a position sensor device, and the like
with wireless command connection to the eyepiece. In an embodiment,
the ring 1500 may have controls that mirror a computer mouse, such
as buttons 1504 (e.g. functioning as one-button, multi-button, and
similar mouse functions), a 2D position control 1502, scroll
wheel, and the like. The buttons 1504 and 2D position control 1502
may be as shown in FIG. 15, where the buttons are on the side
facing the thumb and the 2D position controller is on the top.
Alternately, the buttons and 2D position control may be in other
configurations, such as all facing the thumb side, all on the top
surface, or any other combination. The 2D position control 1502 may
be a 2D button position controller (e.g. such as the TrackPoint
pointing device embedded in some laptop keyboards to control the
position of the mouse), a pointing stick, joystick, an optical
track pad, an opto touch wheel, a touch screen, touch pad, track
pad, scrolling track pad, trackball, any other position or pointing
controller, and the like. In embodiments, control signals from the
tactile interface (such as the ring tactile interface 1500) may be
provided with a wired or wireless interface to the eyepiece, where
the user is able to conveniently supply control inputs, such as
with their hand, thumb, finger, and the like. For example, the user
may be able to articulate the controls with their thumb, where the
ring is worn on the user's index finger. In embodiments, a method
or system may provide an interactive head-mounted eyepiece worn by
a user, wherein the eyepiece includes an optical assembly through
which the user views a surrounding environment and displayed
content, a processor for handling content for display to the user,
and an integrated projector facility for projecting the content to
the optical assembly, and a control device worn on a hand of the
user, including at least one control component actuated by a digit
of a hand of the user, and providing a control command from the
actuation of the at least one control component to the processor as
a command instruction. The command instruction may be directed to
the manipulation of content for display to the user. The control
device may be worn on a first digit of the hand of the user, and
the at least one control component may be actuated by a second
digit of a hand of the user. The first digit may be the index
finger, the second digit the thumb, and the first and second digit
on the same hand of the user. The control device may have at least
one control component mounted on the index finger side facing the
thumb. The at least one control component may be a button. The at
least one control component may be a 2D position controller. The
control device may have at least one button actuated control
component mounted on the index finger side facing the thumb, and a
2D position controller actuated control component mounted on the
top facing side of the index finger. The control components may be
mounted on at least two digits of the user's hand. The control
device may be worn as a glove on the hand of the user. The control
device may be worn on the wrist of the user. The at least one
control component may be worn on at least one digit of the hand,
and a transmission facility may be worn separately on the hand. The
transmission facility may be worn on the wrist. The transmission
facility may be worn on the back of the hand. The control component
may be at least one of a plurality of buttons. The at least one
button may provide a function substantially similar to a
conventional computer mouse button. Two of the plurality of buttons
may function substantially similar to primary buttons of a
conventional two-button computer mouse. The control component may
be a scrolling wheel. The control component may be a 2D position
control component. The 2D position control component may be a
button position controller, pointing stick, joystick, optical track
pad, opto-touch wheel, touch screen, touch pad, track pad,
scrolling track pad, trackball, capacitive touch screen, and the
like. The 2D position control component may be controlled with the
user's thumb. The control component may be a touch-screen capable
of implementing touch controls including button-like functions and
2D manipulation functions. The control component may be actuated
when the user puts on the projected processor content pointing and
control device. A surface-sensing component in the control device
for detecting motion across a surface may also be provided. The
surface sensing component may be disposed on the palmar side of the
user's hand. The surface may be at least one of a hard surface, a
soft surface, surface of the user's skin, surface of the user's
clothing, and the like. Providing control commands may be
transmitted wirelessly, through a wired connection, and the like.
The control device may control a pointing function associated with
the displayed processor content. The pointing function may be
control of a cursor position; selection of displayed content,
selecting and moving displayed content; control of zoom, pan, field
of view, size, position of displayed content; and the like. The
control device may control a pointing function associated with the
viewed surrounding environment. The pointing function may be
placing a cursor on a viewed object in the surrounding environment.
The viewed object's location position may be determined by the
processor in association with a camera integrated with the
eyepiece. The viewed object's identification may be determined by
the processor in association with a camera integrated with the
eyepiece. The control device may control a function of the
eyepiece. The function may be associated with the displayed
content. The function may be a mode control of the eyepiece. The
control device may be foldable for ease of storage when not worn by
the user. In embodiments, the control device may be used with
external devices, such as to control the external device in
association with the eyepiece. External devices may be
entertainment equipment, audio equipment, portable electronic
devices, navigation devices, weapons, automotive controls, and the
like.
[0250] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and a tactile
control interface mounted on the eyepiece that accepts control
inputs from the user through at least one of a user touching the
interface and the user being proximate to the interface.
[0251] In embodiments, control of the eyepiece, and especially
control of a cursor associated with displayed content to the user,
may be enabled through hand control, such as with a worn device
1500 as in FIG. 15, as a virtual computer mouse 1500A as in FIG.
15A, and the like. For instance, the worn device 1500 may transmit
commands through physical interfaces (e.g. a button 1502, scroll
wheel 1504), and the virtual computer mouse 1500A may be able to
interpret commands through detecting motion and actions of the
user's thumb, fist, hand, and the like. In computing, a physical
mouse is a pointing device that functions by detecting
two-dimensional motion relative to its supporting surface. A
physical mouse traditionally consists of an object held under one
of the user's hands, with one or more buttons. It sometimes
features other elements, such as "wheels", which allow the user to
perform various system-dependent operations, or extra buttons or
features that can add more control or dimensional input. The
mouse's motion translates into the motion of a cursor on a display,
which allows for fine control of a graphical user interface. In the
case of the eyepiece, the user may be able to utilize a physical
mouse, a virtual mouse, or combinations of the two. In embodiments,
a virtual mouse may involve one or more sensors attached to the
user's hand, such as on the thumb 1502A, finger 1504A, palm 1508A,
wrist 1510A, and the like, where the eyepiece receives signals from
the sensors and translates the received signals into motion of a
cursor on the eyepiece display to the user. In embodiments, the
signals may be received through an exterior interface, such as the
tactile interface 1402, through a receiver on the interior of the
eyepiece, at a secondary communications interface, on an associated
physical mouse or worn interface, and the like. The virtual mouse
may also include actuators or other output type elements attached
to the user's hand, such as for haptic feedback to the user through
vibration, force, electrical impulse, temperature, and the like.
Sensors and actuators may be attached to the user's hand by way of
a wrap, ring, pad, glove, and the like. As such, the eyepiece
virtual mouse may allow the user to translate motions of the hand
into motion of the cursor on the eyepiece display, where `motions`
may include slow movements, rapid motions, jerky motions, position,
change in position, and the like, and may allow users to work in
three dimensions, without the need for a physical surface, and
including some or all of the six degrees of freedom. Note that
because the `virtual mouse` may be associated with multiple
portions of the hand, the virtual mouse may be implemented as
multiple `virtual mouse` controllers, or as a distributed
controller across multiple control members of the hand. In
embodiments, the eyepiece may provide for the use of a plurality of
virtual mice, such as for one on each of the user's hands, one or
more of the user's feet, and the like.
[0252] In embodiments, the eyepiece virtual mouse may need no
physical surface to operate, and may detect motion through
sensors, such as one of a plurality of accelerometer types (e.g.
tuning fork, piezoelectric, shear mode, strain mode, capacitive,
thermal, resistive, electromechanical, resonant, magnetic, optical,
acoustic, laser, three dimensional, and the like), and through the
output signals of the sensor(s) determine the translational and
angular displacement of the hand, or some portion of the hand. For
instance, accelerometers may produce output signals of magnitudes
proportional to the translational acceleration of the hand in the
three directions. Pairs of accelerometers may be configured to
detect rotational accelerations of the hand or portions of the
hand. Translational velocity and displacement of the hand or
portions of the hand may be determined by integrating the
accelerometer output signals, and the rotational velocity and
displacement of the hand may be determined by integrating the
difference between the output signals of the accelerometer pairs.
Alternatively, other sensors may be utilized, such as ultrasound
sensors, imagers, IR/RF, magnetometer, gyro magnetometer, and the
like. As accelerometers, or other sensors, may be mounted on
various portions of the hand, the eyepiece may be able to detect a
plurality of movements of the hand, ranging from simple motions
normally associated with computer mouse motion, to more highly
complex motion, such as interpretation of complex hand motions in a
simulation application. In embodiments, the user may require only a
small translational or rotational action to have these actions
translated to motions associated with user intended actions on the
eyepiece projection to the user.
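As an illustration of the integration just described, the following
minimal sketch (in Python) integrates translational acceleration twice
to obtain velocity and displacement, and uses the difference between a
pair of accelerometers to estimate rotation. The sample period, sensor
spacing, and example data are assumed values for illustration only, not
part of the claimed implementation.

    import numpy as np

    DT = 0.01          # assumed sample period in seconds (100 Hz)
    BASELINE = 0.05    # assumed spacing between the paired accelerometers in meters

    def integrate_translation(accel_samples):
        """Integrate translational acceleration (m/s^2) twice to get
        velocity (m/s) and displacement (m) along one axis."""
        velocity = np.cumsum(accel_samples) * DT
        displacement = np.cumsum(velocity) * DT
        return velocity, displacement

    def integrate_rotation(accel_a, accel_b):
        """Estimate angular velocity and angle from the difference between
        a pair of accelerometers separated by BASELINE: the differential
        acceleration divided by the baseline approximates angular acceleration."""
        angular_accel = (np.asarray(accel_a) - np.asarray(accel_b)) / BASELINE
        angular_velocity = np.cumsum(angular_accel) * DT
        angle = np.cumsum(angular_velocity) * DT
        return angular_velocity, angle

    # Example: a brief hand push along one axis, sensed by two accelerometers.
    a1 = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
    a2 = [0.0, 0.4, 0.9, 0.4, 0.0, -0.4, -0.9, -0.4, 0.0]
    _, dx = integrate_translation(a1)
    _, theta = integrate_rotation(a1, a2)
    print("displacement (m):", dx[-1], "angle (rad):", theta[-1])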
[0253] In embodiments, the virtual mouse may have physical switches
associated with it to control the device, such as an on/off switch
mounted on the hand, the eyepiece, or other part of the body. The
virtual mouse may also have on/off control and the like through
pre-defined motions or actions of the hand. For example, the
operation of the virtual mouse may be enabled through a rapid back
and forth motion of the hand. In another example, the virtual mouse
may be disabled through a motion of the hand past the eyepiece,
such as in front of the eyepiece. In embodiments, the virtual mouse
for the eyepiece may provide for the interpretation of a plurality
of motions to operations normally associated with physical mouse
control, and as such, familiar to the user without training, such
as single clicking with a finger, double clicking, triple clicking,
right clicking, left clicking, click and drag, combination
clicking, roller wheel motion, and the like. In embodiments, the
eyepiece may provide for gesture recognition, such as in
interpreting hand gestures via mathematical algorithms.
[0254] In embodiments, gesture control recognition may be provided
through technologies that utilize capacitive changes resulting from
changes in the distance of a user's hand from a conductor element
as part of the eyepiece's control system, and so would require no
devices mounted on the user's hand. In embodiments, the conductor
may be mounted as part of the eyepiece, such as on the arm or other
portion of the frame, or as some external interface mounted on the
user's body or clothing. For example, the conductor may be an
antenna, where the control system behaves in a similar fashion to
the touch-less musical instrument known as the theremin. The
theremin uses the heterodyne principle to generate an audio signal,
but in the case of the eyepiece, the signal may be used to generate
a control input signal. The control circuitry may include a number
of radio frequency oscillators, such as where one oscillator
operates at a fixed frequency and another controlled by the user's
hand, where the distance from the hand varies the input at the
control antenna. In this technology, the user's hand acts as a
grounded plate (the user's body being the connection to ground) of
a variable capacitor in an L-C (inductance-capacitance) circuit,
which is part of the oscillator and determines its frequency. In
another example, the circuit may use a single oscillator, two pairs
of heterodyne oscillators, and the like. In embodiments, there may
be a plurality of different conductors used as control inputs. In
embodiments, this type of control interface may be ideal for
control inputs that vary across a range, such as a volume control,
a zoom control, and the like. However, this type of control
interface may also be used for more discrete control signals (e.g.
on/off control) where a predetermined threshold determines the
state change of the control input.
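The heterodyne arrangement above amounts to comparing a fixed
oscillator against one whose frequency depends on the hand-to-antenna
capacitance. The sketch below illustrates that relationship; the
inductance, base capacitance, full-scale beat frequency, and on/off
threshold are assumed values chosen purely for illustration.

    import math

    L = 1e-3            # assumed oscillator inductance (H)
    C_BASE = 100e-12    # assumed circuit capacitance with no hand present (F)
    F_FIXED = 1.0 / (2 * math.pi * math.sqrt(L * C_BASE))  # fixed reference oscillator

    def variable_frequency(hand_capacitance):
        """Frequency of the hand-controlled L-C oscillator; the user's hand
        adds capacitance in parallel with the base capacitance."""
        return 1.0 / (2 * math.pi * math.sqrt(L * (C_BASE + hand_capacitance)))

    def control_level(hand_capacitance, full_scale_hz=50e3):
        """Map the beat (difference) frequency onto a 0..1 control value,
        e.g. for volume or zoom; full_scale_hz is an assumed calibration."""
        beat = abs(F_FIXED - variable_frequency(hand_capacitance))
        return min(beat / full_scale_hz, 1.0)

    def discrete_state(hand_capacitance, threshold=0.5):
        """Use the same continuous level as an on/off control by thresholding."""
        return control_level(hand_capacitance) >= threshold

    for c_hand in (0.0, 1e-12, 5e-12, 20e-12):   # hand approaching the antenna
        print(c_hand, round(control_level(c_hand), 3), discrete_state(c_hand))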
[0255] In embodiments, the eyepiece may interface with a physical
remote control device, such as a wireless track pad mouse, hand
held remote control, body mounted remote control, remote control
mounted on the eyepiece, and the like. The remote control device
may be mounted on an external piece of equipment, such as for
personal use, gaming, professional use, military use, and the like.
For example, the remote control may be mounted on a rifle for a
sport rifle shooter, such as mounted on a pistol grip, on a muzzle
shroud, on a fore grip, and the like, providing remote control to
the shooter without the need to remove their hands from the rifle.
The remote control may be removably mounted to the eyepiece.
[0256] In embodiments, a remote control for the eyepiece may be
activated and/or controlled through a proximity sensor. A proximity
sensor may be a sensor able to detect the presence of nearby
objects without any physical contact. For example, a proximity
sensor may emit an electromagnetic or electrostatic field, or a
beam of electromagnetic radiation (infrared, for instance), and
look for changes in the field or return signal. The object being
sensed is often referred to as the proximity sensor's target.
Different proximity sensor targets may demand different sensors.
For example, a capacitive or photoelectric sensor might be suitable
for a plastic target; an inductive proximity sensor requires a
metal target. Other examples of proximity sensor technologies
include capacitive displacement sensors, eddy-current, magnetic,
photocell (reflective), laser, passive thermal infrared, passive
optical, CCD, reflection of ionizing radiation, and the like. In
embodiments, the proximity sensor may be integral to any of the
control embodiments described herein, including physical remote
controls, virtual mouse, interfaces mounted on the eyepiece,
controls mounted on an external piece of equipment (e.g. a game
controller), and the like.
[0257] In embodiments, control of the eyepiece, and especially
control of a cursor associated with displayed content to the user,
may be enabled through the sensing of the motion of a facial
feature, the tensing of a facial muscle, the clicking of the teeth,
the motion of the jaw, and the like, of the user wearing the
eyepiece through a facial actuation sensor 1502B. For instance, as
shown in FIG. 15B, the eyepiece may have a facial actuation sensor
as an extension from the eyepiece earphone assembly 1504B, from the
arm 1508B of the eyepiece, and the like, where the facial actuation
sensor may sense a force, a vibration, and the like associated with
the motion of a facial feature. The facial actuation sensor may
also be mounted separate from the eyepiece assembly, such as part
of a standalone earpiece, where the sensor output of the earpiece
and the facial actuation sensor may be transferred to the
eyepiece by either wired or wireless communication (e.g. Bluetooth
or other communications protocol known to the art). The facial
actuation sensor may also be attached around the ear, in the
mouth, on the face, on the neck, and the like. The facial actuation
sensor may also be comprised of a plurality of sensors, such as to
optimize the sensed motion of different facial or interior motions
or actions. In embodiments, the facial actuation sensor may detect
motions and interpret them as commands, or the raw signals may be
sent to the eyepiece for interpretation. Commands may be commands
for the control of eyepiece functions, controls associated with a
cursor or pointer as provided as part of the display of content to
the user, and the like. For example, a user may click their teeth
once or twice to indicate a single or double click, such as
normally associated with the click of a computer mouse. In another
example, the user may tense a facial muscle to indicate a command,
such as a selection associated with the projected image. In
embodiments, the facial actuation sensor may utilize noise
reduction processing to minimize the background motions of the
face, the head, and the like, such as through adaptive signal
processing technologies. A voice activity sensor may also be
utilized to reduce interference, such as from the user, from other
individuals nearby, from surrounding environmental noise, and the
like. In an example, the facial actuation sensor may also improve
communications and eliminate noise by detecting vibrations in the
cheek of the user during speech, such as with multiple microphones
to identify the background noise and eliminate it through noise
cancellation, volume augmentation, and the like.
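A facial actuation signal of the kind described, such as a teeth click
sensed as a brief vibration spike, can be turned into mouse-like click
events by counting spikes that fall within a short window. The
following is a minimal illustrative sketch; the amplitude threshold,
double-click window, and sample data are assumed values.

    DOUBLE_CLICK_WINDOW = 0.4   # assumed max spacing (s) between clicks of a double click
    SPIKE_THRESHOLD = 2.5       # assumed vibration amplitude threshold

    def detect_spikes(samples, sample_rate=1000):
        """Return timestamps (s) where the vibration amplitude crosses the threshold."""
        times = []
        above = False
        for i, v in enumerate(samples):
            if v >= SPIKE_THRESHOLD and not above:
                times.append(i / sample_rate)
                above = True
            elif v < SPIKE_THRESHOLD:
                above = False
        return times

    def classify_clicks(spike_times):
        """Group spikes into single or double click commands."""
        events, i = [], 0
        while i < len(spike_times):
            if (i + 1 < len(spike_times)
                    and spike_times[i + 1] - spike_times[i] <= DOUBLE_CLICK_WINDOW):
                events.append(("double_click", spike_times[i]))
                i += 2
            else:
                events.append(("single_click", spike_times[i]))
                i += 1
        return events

    # Example: two spikes 0.2 s apart (a double click), then an isolated spike.
    signal = [0.0] * 2000
    for t in (100, 300, 1500):
        signal[t] = 3.0
    print(classify_clicks(detect_spikes(signal)))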
[0258] In embodiments, the user of the eyepiece may be able to
obtain information on some environmental feature, location, object,
and the like, viewed through the eyepiece by raising their hand
into the field of view of the eyepiece and pointing at the object
or position. For instance, the pointing finger of the user may
indicate an environmental feature, where the finger is not only in
the view of the eyepiece but also in the view of an embedded
camera. The system may now be able to correlate the position of the
pointing finger with the location of the environmental feature as
seen by the camera. Additionally, the eyepiece may have position
and orientation sensors, such as GPS and a magnetometer, to allow
the system to know the location and line of sight of the user. From
this, the system may be able to extrapolate the position
information of the environmental feature, such as to provide the
location information to the user, to overlay the position of the
environmental information onto a 2D or 3D map, to further associate
the established position information to correlate that position
information to secondary information about that location (e.g.
address, names of individuals at the address, name of a business at
that location, coordinates of the location), and the like.
Referring to FIG. 15C, in an example, the user is looking through
the eyepiece 1502C and pointing with their hand 1504C at a house
1508C in their field of view, where an embedded camera 1510C has
both the pointed hand 1504C and the house 1508C in its field of
view. In this instance, the system is able to determine the
location of the house 1508C and provide location information 1514C
and a 3D map superimposed onto the user's view of the environment.
In embodiments, the information associated with an environmental
feature may be provided by an external facility, such as
communicated with through a wireless communication connection,
stored internal to the eyepiece, such as downloaded to the eyepiece
for the current location, and the like.
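The position extrapolation described above can be viewed as combining
the user's GPS fix and magnetometer heading with the finger's offset in
the camera image to form a bearing, then projecting a point along that
bearing. The sketch below assumes a pinhole camera, a flat-earth
approximation valid over short distances, and an assumed range to the
feature; it is illustrative only.

    import math

    EARTH_RADIUS_M = 6_371_000.0

    def bearing_to_target(heading_deg, finger_px_x, image_width_px, camera_hfov_deg):
        """Combine the magnetometer heading with the finger's horizontal offset
        in the camera image (pinhole approximation) to get a bearing to the target."""
        offset = (finger_px_x - image_width_px / 2.0) / (image_width_px / 2.0)
        return heading_deg + offset * (camera_hfov_deg / 2.0)

    def project_position(lat_deg, lon_deg, bearing_deg, range_m):
        """Project a point range_m away along bearing_deg from the user's GPS fix
        (small-distance flat-earth approximation)."""
        d_lat = (range_m * math.cos(math.radians(bearing_deg))) / EARTH_RADIUS_M
        d_lon = (range_m * math.sin(math.radians(bearing_deg))) / (
            EARTH_RADIUS_M * math.cos(math.radians(lat_deg)))
        return lat_deg + math.degrees(d_lat), lon_deg + math.degrees(d_lon)

    # Example: user at an assumed fix, heading 90 deg (east), finger slightly left
    # of center in a 1280 px wide image with a 60 deg horizontal field of view.
    bearing = bearing_to_target(90.0, 560, 1280, 60.0)
    print(project_position(37.7749, -122.4194, bearing, range_m=120.0))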
[0259] In embodiments, the user may be able to control their view
perspective relative to a 3D projected image, such as a 3D
projected image associated with the external environment, a 3D
projected image that has been stored and retrieved, a 3D displayed
movie (such as downloaded for viewing), and the like. For instance,
and referring again to FIG. 15C, the user may be able to change the
view perspective of the 3D displayed image 1512C, such as by
turning their head, and where the live external environment and the
3D displayed image stay together even as the user turns their head,
moves their position, and the like. In this way, the eyepiece may
be able to provide an augmented reality by overlaying information
onto the user's viewed external environment, such as the overlaid
3D displayed map 1512C, the location information 1514C, and the
like, where the displayed map, information, and the like, may
change as the user's view changes. In another instance, with 3D
movies or 3D converted movies, the perspective of the viewer may be
changed to put the viewer `into` the movie environment with some
control of the viewing perspective, where the user may be able to
move their head around and have the view change in correspondence
to the changed head position, where the user may be able to `walk
into` the image when they physically walk forward, have the
perspective change as the user moves the gazing view of their eyes,
and the like. In addition, additional image information may be
provided, such as at the sides of the user's view that could be
accessed by turning the head.
[0260] Referring to FIG. 15D, in embodiments the user of the
eyepiece 1502D may be able to use multiple hand/finger points from
their hand 1504D to define the field of view (FOV) 1508D of the
camera 1510D relative to the see-thru view, such as for augmented
reality applications. For instance, in the example shown, the user
is utilizing their first finger and thumb to adjust the FOV 1508D
of the camera 1510D of the eyepiece 1502D. The user may utilize
other combinations to adjust the FOV 1508D, such as with
combinations of fingers, fingers and thumb, combinations of fingers
and thumbs from both hands, use of the palm(s), cupped hand(s), and
the like. The use of multiple hand/finger points may enable the
user to alter the FOV 1508D of the camera 1510D in much the same way
as users of touch screens, where different points of the
hand/finger establish points of the FOV to establish the desired
view. In this instance however, there is no physical contact made
between the user's hand(s) and the eyepiece. Here, the camera may
be commanded to associate portions of the user's hand(s) to the
establishing or changing of the FOV of the camera. The command may
be any command type described herein, including but not limited to
hand motions in the FOV of the camera, commands associated with
physical interfaces on the eyepiece, commands associated with
sensed motions near the eyepiece, commands received from a command
interface on some portion of the user, and the like. The eyepiece
may be able to recognize the finger/hand motions as the command,
such as in some repetitive motion. In embodiments, the user may
also utilize this technique to adjust some portion of the projected
image, where the eyepiece relates the viewed image by the camera to
some aspect of the projected image, such as the hand/finger points
in view to the projected image of the user. For example, the user
may be simultaneously viewing the external environment and a
projected image, and the user utilizes this technique to change the
projected viewing area, region, magnification, and the like. In
embodiments, the user may perform a change of FOV for a plurality
of reasons, including zooming in or out from a viewed scene in the
live environment, zooming in or out from a viewed portion of the
projected image, changing the viewing area allocated to the
projected image, changing the perspective view of the environment
or projected image, and the like.
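Mapping two detected hand/finger points to a camera field of view is
analogous to a pinch gesture on a touch screen, with the points
defining opposite corners of the desired region. A minimal sketch
follows, assuming the fingertip pixel coordinates are already available
from hand tracking on the camera image; the sensor size and minimum
region size are assumed.

    from dataclasses import dataclass

    @dataclass
    class FieldOfView:
        x: int
        y: int
        width: int
        height: int

    def fov_from_finger_points(p1, p2, sensor_w, sensor_h, min_size=64):
        """Treat two fingertip points (pixel coords) as opposite corners of the
        desired field of view, clamped to the sensor and to a minimum size."""
        x1, y1 = p1
        x2, y2 = p2
        left, right = sorted((max(0, min(x1, x2)), min(sensor_w, max(x1, x2))))
        top, bottom = sorted((max(0, min(y1, y2)), min(sensor_h, max(y1, y2))))
        width = max(right - left, min_size)
        height = max(bottom - top, min_size)
        return FieldOfView(left, top, width, height)

    # Example: thumb and first finger seen at these pixel positions on a
    # 1920x1080 camera sensor.
    thumb = (500, 700)
    index_finger = (1300, 250)
    print(fov_from_finger_points(thumb, index_finger, 1920, 1080))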
[0261] In embodiments the eyepiece may be able to determine where
the user is gazing, or the motion of the user's eye, by tracking
the eye through light reflected off the user's eye. This
information may then be used to help correlate the user's line of
sight with respect to the projected image, a camera view, the
external environment, and the like, and used in control techniques
as described herein. For instance, the user may gaze at a location
on the projected image and make a selection, such as with an
external remote control or with some detected eye movement (e.g.
blinking). In an example of this technique, and referring to FIG.
15E, transmitted light 1508E, such as infrared light, may be
reflected 1510E from the eye 1504E and sensed at the optical
display 502 (e.g. with a camera or other optical sensor). The
information may then be analyzed to extract eye rotation from
changes in reflections. In embodiments, an eye tracking facility
may use the corneal reflection and the center of the pupil as
features to track over time; use reflections from the front of the
cornea and the back of the lens as features to track; image
features from inside the eye, such as the retinal blood vessels,
and follow these features as the eye rotates; and the like.
Alternatively, the eyepiece may use other techniques to track the
motions of the eye, such as with components surrounding the eye,
mounted in contact lenses on the eye, and the like. For instance, a
special contact lens may be provided to the user with an embedded
optical component, such as a mirror, magnetic field sensor, and the
like, for measuring the motion of the eye. In another instance,
electric potentials may be measured and monitored with electrodes
placed around the eyes, utilizing the steady electric potential
field from the eye as a dipole, such as with its positive pole at
the cornea and its negative pole at the retina. In this instance,
the electric signal may be derived using contact electrodes placed
on the skin around the eye, on the frame of the eyepiece, and the
like. If the eye moves from the centre position towards the
periphery, the retina approaches one electrode while the cornea
approaches the opposing one. This change in the orientation of the
dipole and consequently the electric potential field results in a
change in the measured signal. By analyzing these changes, eye
movement can be tracked.
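The corneal-reflection approach described above is often reduced to
tracking the vector from the glint (the reflection of the illumination
source) to the pupil center and mapping that vector to a display
location through a per-user calibration. The sketch below assumes a
simple linear calibration and made-up calibration data; it is
illustrative rather than a description of the claimed eye tracking
facility.

    import numpy as np

    def pupil_glint_vector(pupil_center, glint_center):
        """Gaze feature: vector from the corneal reflection (glint) to the pupil center."""
        return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

    def fit_calibration(features, screen_points):
        """Fit a linear map from pupil-glint vectors to display coordinates using
        a few calibration targets the user was asked to look at."""
        X = np.hstack([np.asarray(features, float), np.ones((len(features), 1))])
        coeffs, *_ = np.linalg.lstsq(X, np.asarray(screen_points, float), rcond=None)
        return coeffs

    def gaze_point(feature, coeffs):
        """Map a new pupil-glint vector to a point on the eyepiece display."""
        return np.append(feature, 1.0) @ coeffs

    # Example calibration: four assumed pupil-glint vectors against four display targets.
    features = [(-12, -8), (12, -8), (-12, 8), (12, 8)]
    targets = [(0, 0), (800, 0), (0, 480), (800, 480)]
    coeffs = fit_calibration(features, targets)
    print(gaze_point(pupil_glint_vector((105, 62), (102, 66)), coeffs))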
[0262] In embodiments, the eyepiece may have a plurality of modes
of operation where the eyepiece is controlled at least
in part by positions, shapes, motions of the hand, and the like. To
provide this control the eyepiece may utilize hand recognition
algorithms to detect the shape of the hand/fingers, and to then
associate those hand configurations, possibly in combination with
motions of the hand, as commands. Realistically, as there may be
only a limited number of hand configurations and motions available
to command the eyepiece, these hand configurations may need to be
reused depending upon the mode of operation of the eyepiece. In
embodiments, certain hand configurations or motions may be assigned
for transitioning the eyepiece from one mode to the next, thereby
allowing for the reuse of hand motions. For instance, and referring
to FIG. 15F, the user's hand 1504F may be moved in view of a camera
on the eyepiece, and the movement may then be interpreted as a
different command depending upon the mode, such as a circular
motion 1508F, a motion across the field of view 1510F, a back and
forth motion 1512F, and the like. In a simplistic example, suppose
there are two modes of operation, mode one for panning a view from
the projected image and mode two for zooming the projected image.
In this example the user may want to use a left-to-right
finger-pointed hand motion to command a panning motion to the
right. However, the user may also want to use a left-to-right
finger-pointed hand motion to command a zooming of the image to
greater magnification. To allow the dual use of this hand motion
for both command types, the eyepiece may be configured to interpret
the hand motion differently depending upon the mode the eyepiece is
currently in, and where specific hand motions have been assigned
for mode transitions. For instance, a clockwise rotational motion
may indicate a transition from pan to zoom mode, and a
counter-clockwise rotational motion may indicate a transition from
zoom to pan mode. This example is meant to be illustrative and not
limiting in any way, where one skilled in the art will recognize how
this general technique could be used to implement a variety of
command/mode structures using the hand(s) and finger(s), such as
hand-finger configurations-motions, two-hand configuration-motions,
and the like.
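The pan/zoom example above is essentially a small state machine in
which some gestures are interpreted relative to the current mode while
others switch the mode itself. A minimal sketch follows; the gesture
labels are assumed outputs of whatever hand recognition algorithm is in
use, and the mode and command names are placeholders.

    # Gestures that change the mode regardless of the current mode.
    MODE_TRANSITIONS = {
        "rotate_clockwise": "zoom",          # pan -> zoom
        "rotate_counterclockwise": "pan",    # zoom -> pan
    }

    # The same swipe gestures are reused with different meanings per mode.
    MODE_COMMANDS = {
        "pan":  {"swipe_left_to_right": "pan_right", "swipe_right_to_left": "pan_left"},
        "zoom": {"swipe_left_to_right": "zoom_in",   "swipe_right_to_left": "zoom_out"},
    }

    class GestureInterpreter:
        def __init__(self, mode="pan"):
            self.mode = mode

        def handle(self, gesture):
            """Return the command for a recognized gesture, switching modes
            when a mode-transition gesture is seen."""
            if gesture in MODE_TRANSITIONS:
                self.mode = MODE_TRANSITIONS[gesture]
                return f"mode:{self.mode}"
            return MODE_COMMANDS[self.mode].get(gesture, "ignored")

    interp = GestureInterpreter()
    for g in ("swipe_left_to_right", "rotate_clockwise", "swipe_left_to_right"):
        print(g, "->", interp.handle(g))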
[0263] In embodiments, a system may comprise an interactive
head-mounted eyepiece worn by a user, wherein the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content, wherein the optical assembly
comprises a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly; and an integrated
camera facility that images a gesture, wherein the integrated
processor identifies and interprets the gesture as a command
instruction. The control instruction may provide manipulation of
the content for display, a command communicated to an external
device, and the like.
[0264] In embodiments, control of the eyepiece may be enabled
through eye movement, an action of the eye, and the like. For
instance, there may be a camera on the eyepiece that views back to
the wearer's eye(s), where eye movements or actions may be
interpreted as command information, such as through blinking,
repetitive blinking, blink count, blink rate, eye open-closed, gaze
tracking, eye movements to the side, up and down, side to side,
through a sequence of positions, to a specific position, dwell time
in a position, gazing toward a fixed object (e.g. the corner of the
lens of the eyepiece), through a certain portion of the lens, at a
real-world object, and the like. In addition, eye control may
enable the viewer to focus on a certain point on the displayed
image from the eyepiece, and because the camera may be able to
correlate the viewing direction of the eye to a point on the
display, the eyepiece may be able to interpret commands through a
combination of where the wearer is looking and an action by the
wearer (e.g. blinking, touching an interface device, movement of a
position sense device, and the like). For example, the viewer may
be able to look at an object on the display, and select that object
through the motion of a finger enabled through a position sense
device.
[0265] In some embodiments, the glasses may be equipped with eye
tracking devices for tracking movement of the user's eye, or
preferably both eyes; alternatively, the glasses may be equipped
with sensors for six-degree-of-freedom movement tracking, i.e.,
head movement tracking. These devices or sensors are available, for
example, from Chronos Vision GmbH, Berlin, Germany and ISCAN,
Woburn, Mass. Retinal scanners are also available for tracking eye
movement. Retinal scanners may also be mounted in the augmented
reality glasses and are available from a variety of companies, such
as Tobii, Stockholm, Sweden, and SMI, Teltow, Germany, and
ISCAN.
[0266] The augmented reality eyepiece also includes a user input
interface, as shown, to allow a user to control the device. Inputs
used to control the device may include any of the sensors discussed
above, and may also include a trackpad, one or more function keys
and any other suitable local or remote device. For example, an eye
tracking device may be used to control another device, such as a
video game or external tracking device. As an example, an augmented
reality eyepiece equipped with an eye tracking device, discussed
elsewhere in this document, which allows the eyepiece to track the
direction of the user's eye or preferably, eyes, and send the
movements to the controller of the eyepiece. The movements may then
be transmitted to a control device for a video game controlled by
the control device, which may be within sight of the user. The
movement of the user's eyes is then converted by suitable software
to signals for controlling movement in the video game, such as
quadrant (range) and azimuth (direction). Additional controls may
be used in conjunction with eye tracking, such as with the user's
trackpad or function keys.
[0267] In embodiments, control of the eyepiece may be enabled
through gestures by the wearer. For instance, the eyepiece may have
a camera that views outward (e.g. forward, to the side, down) and
interprets gestures or movements of the hand of the wearer as
control signals. Hand signals may include passing the hand past the
camera, hand positions or sign language in front of the camera,
pointing to a real-world object (such as to activate augmentation
of the object), and the like. Hand motions may also be used to
manipulate objects displayed on the inside of the translucent lens,
such as moving an object, rotating an object, deleting an object,
opening-closing a screen or window in the image, and the like.
Although hand motions have been used in the preceding examples, any
portion of the body or object held or worn by the wearer may also
be utilized for gesture recognition by the eyepiece.
[0268] In embodiments, head motion control may be used to send
commands to the eyepiece, where motion sensors such as
accelerometers, gyros, or any other sensor described herein, may be
mounted on the wearer's head, on the eyepiece, in a hat, in a
helmet, and the like. Referring to FIG. 14A, head motions may
include quick motions of the head, such as jerking the head in a
forward and/or backward motion 1412, in an up and/or down motion
1410, in a side to side motion as a nod, dwelling in a position,
such as to the side, moving and holding in position, and the like.
Motion sensors may be integrated into the eyepiece, mounted on the
user's head or in a head covering (e.g. hat, helmet) by wired or
wireless connection to the eyepiece, and the like. In embodiments,
the user may wear the interactive head-mounted eyepiece, where the
eyepiece includes an optical assembly through which the user views
a surrounding environment and displayed content. The optical
assembly may include a corrective element that corrects the user's
view of the surrounding environment, an integrated processor for
handling content for display to the user, and an integrated image
source for introducing the content to the optical assembly. At
least one of a plurality of head motion sensing control devices may
be integrated or in association with the eyepiece that provides
control commands to the processor as command instructions based
upon sensing a predefined head motion characteristic. The head
motion characteristic may be a nod of the user's head such that the
nod is an overt motion dissimilar from ordinary head motions. The
overt motion may be a jerking motion of the head. The control
instructions may provide manipulation of the content for display,
be communicated to control an external device, and the like. Head
motion control may be used in combination with other control
mechanisms, such as using another control mechanism as discussed
herein to activate a command and for the head motion to execute it.
For example, a wearer may want to move an object to the right, and
through eye control, as discussed herein, select the object and
activate head motion control. Then, by tipping their head to the
right, the object may be commanded to move to the right, and the
command terminated through eye control.
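Distinguishing an overt, command-style nod or jerk from ordinary head
movement may be done by thresholding the angular rate reported by a
head-mounted gyro and requiring the burst to complete within a short
window. The thresholds, sample rate, and example data below are assumed
values for illustration only.

    JERK_RATE_THRESHOLD = 2.0   # assumed angular rate (rad/s) rarely exceeded by ordinary motion
    MAX_JERK_DURATION = 0.3     # assumed maximum duration (s) of an overt jerking motion

    def detect_head_jerk(gyro_samples, sample_rate=100):
        """Return 'nod_down', 'nod_up', or None for a window of pitch-axis gyro samples.
        An overt jerk is a brief burst of high angular rate; slow drifts are ignored."""
        burst = [i for i, w in enumerate(gyro_samples) if abs(w) >= JERK_RATE_THRESHOLD]
        if not burst:
            return None
        duration = (burst[-1] - burst[0] + 1) / sample_rate
        if duration > MAX_JERK_DURATION:
            return None                      # sustained motion: treat as ordinary head turning
        mean_rate = sum(gyro_samples[i] for i in burst) / len(burst)
        return "nod_down" if mean_rate > 0 else "nod_up"

    # Example: a quick downward nod (brief positive pitch-rate burst) amid small noise.
    window = [0.1, 0.0, 2.6, 3.1, 2.8, 0.2, -0.1, 0.0]
    print(detect_head_jerk(window))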
[0269] In embodiments, the eyepiece may be controlled through
audio, such as through a microphone. Audio signals may include
speech recognition, voice recognition, sound recognition, sound
detection, and the like. Audio may be detected through a microphone
on the eyepiece, a throat microphone, a jaw bone microphone, a boom
microphone, a headphone, ear bud with microphone, and the like.
[0270] In embodiments, command inputs may provide for a plurality
of control functions, such as turning the eyepiece projector
on/off, turning audio on/off, turning a camera on/off, turning
augmented reality projection on/off, turning GPS on/off, interacting
with the display (e.g. select/accept a displayed function, replay a
captured image or video, and the like), interacting with the real
world (e.g. capture an image or video, turn a page of a displayed
book, and the like), performing actions with an embedded or external
mobile device (e.g. mobile phone, navigation device, music device, VoIP,
and the like), browser controls for the Internet (e.g. submit, next
result, and the like), email controls (e.g. read email, display
text, text-to-speech, compose, select, and the like), GPS and
navigation controls (e.g. save position, recall saved position,
show directions, view location on map), and the like.
[0271] In embodiments, the eyepiece may provide 3D display imaging
to the user, such as through conveying a stereoscopic,
auto-stereoscopic, computer-generated holography, volumetric
display image, stereograms/stereoscopes, view-sequential displays,
electro-holographic displays, parallax "two view" displays and
parallax panoramagrams, re-imaging systems, and the like, creating
the perception of 3D depth to the viewer. Display of 3D images to
the user may employ different images presented to the user's left
and right eyes, such as where the left and right optical paths have
some optical component that differentiates the image, where the
projector facility is projecting different images to the user's
left and right eyes, and the like. The optical path, from the
projector facility to the user's eye, may include a graphical
display device that forms a visual
representation of an object in three physical dimensions. A
processor, such as the integrated processor in the eyepiece or one
in an external facility, may provide 3D image processing as at
least a step in the generation of the 3D image to the user.
[0272] In embodiments, holographic projection technologies may be
employed in the presentation of a 3D imaging effect to the user,
such as computer-generated holography (CGH), a method of digitally
generating holographic interference patterns. For instance, a
holographic image may be projected by a holographic 3D display,
such as a display that operates on the basis of interference of
coherent light. Computer generated holograms have the advantage
that the objects which one wants to show do not have to possess any
physical reality at all, that is, they may be completely generated
as a `synthetic hologram`. There are a plurality of different
methods for calculating the interference pattern for a CGH,
including from the fields of holographic information and
computational reduction as well as in computational and
quantization techniques. For instance, the Fourier transform method
and point source holograms are two examples of computational
techniques. The Fourier transformation method may be used to
simulate the propagation of each plane of depth of the object to
the hologram plane, where the reconstruction of the image may occur
in the far field. In an example process, there may be two steps,
where first the light field in the far observer plane is
calculated, and then the field is Fourier transformed back to the
lens plane, where the wavefront to be reconstructed by the hologram
is the superposition of the Fourier transforms of each object plane
in depth. In another example, a target image may be multiplied by a
phase pattern to which an inverse Fourier transform is applied.
Intermediate holograms may then be generated by shifting this image
product, and combined to create a final set. The final set of
holograms may then be approximated to form kinoforms for sequential
display to the user, where the kinoform is a phase hologram in
which the phase modulation of the object wavefront is recorded as a
surface-relief profile. In the point source hologram method the
object is broken down in self-luminous points, where an elementary
hologram is calculated for every point source and the final
hologram is synthesized by superimposing all the elementary
holograms.
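The Fourier transform method outlined above can be illustrated for a
single object plane: assign a random phase to the target amplitude,
inverse-transform to the hologram plane, and keep only the phase to
obtain a kinoform. The sketch below is a simplified single-plane
illustration of that calculation, not the full multi-plane procedure
described above; the target image and array size are assumed.

    import numpy as np

    def kinoform_from_target(target_intensity, seed=0):
        """Compute a phase-only hologram (kinoform) for one object plane:
        multiply the target amplitude by a random phase, inverse Fourier
        transform to the hologram plane, and record only the phase."""
        rng = np.random.default_rng(seed)
        amplitude = np.sqrt(np.asarray(target_intensity, float))
        random_phase = np.exp(1j * 2 * np.pi * rng.random(amplitude.shape))
        field_at_hologram = np.fft.ifft2(np.fft.ifftshift(amplitude * random_phase))
        return np.angle(field_at_hologram)          # surface-relief phase profile

    def reconstruct(kinoform_phase):
        """Simulate replay: propagate the phase-only hologram back to the image plane."""
        replay = np.fft.fftshift(np.fft.fft2(np.exp(1j * kinoform_phase)))
        return np.abs(replay) ** 2

    # Example target: a bright square on a dark background.
    target = np.zeros((128, 128))
    target[48:80, 48:80] = 1.0
    phase = kinoform_from_target(target)
    image = reconstruct(phase)
    print("phase range:", phase.min(), phase.max(), "reconstruction peak:", image.max())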
[0273] In an embodiment, 3-D or holographic imagery may be enabled
by a dual projector system where two projectors are stacked on top
of each other for a 3D image output. Holographic projection mode
may be entered by a control mechanism described herein or by
capture of an image or signal, such as an outstretched hand with
palm up, an SKU, an RFID reading, and the like. For example, a
wearer of the eyepiece may view a letter `X` on a piece of
cardboard, which causes the eyepiece to enter holographic mode and
turn on the second, stacked projector. Selecting what hologram
to display may be done with a control technique. The projector may
project the hologram onto the cardboard over the letter `X`.
Associated software may track the position of the letter `X` and
move the projected image along with the movement of the letter `X`.
In another example, the eyepiece may scan an SKU, such as an SKU on a
toy construction kit, and a 3-D image of the completed toy
construction may be accessed from an online source or non-volatile
memory. Interaction with the hologram, such as rotating it, zooming
in/out, and the like, may be done using the control mechanisms
described herein. Scanning may be enabled by associated bar
code/SKU scanning software. In another example, a keyboard may be
projected in space or on a surface. The holographic keyboard may be
used in or to control any of the associated
applications/functions.
[0274] In embodiments, eyepiece facilities may provide for locking
the position of a virtual keyboard down relative to a real
environmental object (e.g. a table, a wall, a vehicle dashboard,
and the like) where the virtual keyboard then does not move as the
wearer moves their head. In an example, and referring to FIG. 24,
the user may be sitting at a table and wearing the eyepiece 2402,
and wish to input text into an application, such as a word
processing application, a web browser, a communications
application, and the like. The user may be able to bring up a
virtual keyboard 2408, or other interactive control element (e.g.
virtual mouse, calculator, touch screen, and the like), to use for
input. The user may provide a command for bringing up the virtual
keyboard 2408, and use a hand gesture 2404 for indicating the fixed
location of the virtual keyboard 2408. The virtual keyboard 2408
may then remain fixed in space relative to the outside environment,
such as fixed to a location on the table 2410, where the eyepiece
facilities keep the location of the virtual keyboard 2408 on the
table 2410 even when the user turns their head. That is, the
eyepiece 2402 may compensate for the user's head motion in order to
keep the user's view of the virtual keyboard 2408 located on the
table 2410. In embodiments, the user may wear the interactive
head-mounted eyepiece, where the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content. The optical assembly may include a corrective
element that corrects the user's view of the surrounding
environment, an integrated processor for handling content for
display to the user, and an integrated image source for introducing
the content to the optical assembly. An integrated camera facility
may be provided that images the surrounding environment, and
identifies a user hand gesture as an interactive control element
location command, such as a hand-finger configuration moved in a
certain way, positioned in a certain way, and the like. The
location of the interactive control element then may remain fixed
in position with respect to an object in the surrounding
environment, in response to the interactive control element
location command, regardless of a change in the viewing direction
of the user. In this way, the user may be able to utilize a virtual
keyboard in much the same way they would a physical keyboard, where
the virtual keyboard remains in the same location. However, in the
case of the virtual keyboard there are not `physical limitations`,
such as gravity, to limit where the user may locate the keyboard.
For instance, the user could be standing next to a wall, and place
the keyboard location on the wall, and the like.
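Keeping the virtual keyboard fixed to the table while the head moves
amounts to storing the keyboard's pose in world coordinates and, each
frame, transforming it into the current head frame before rendering.
The sketch below is a planar, yaw-only approximation with assumed
geometry and an assumed focal length; a full implementation would use
the complete six-degree-of-freedom head pose.

    import math

    def world_to_display(anchor_xy, head_xy, head_yaw_rad, focal_px=800.0):
        """Project a world-anchored point into a display pixel offset for the
        current head position and yaw (planar, yaw-only approximation).
        Yaw is measured counterclockwise (positive = head turned to the left)."""
        dx = anchor_xy[0] - head_xy[0]
        dy = anchor_xy[1] - head_xy[1]
        # Rotate the world offset into the head frame.
        x_h = math.cos(-head_yaw_rad) * dx - math.sin(-head_yaw_rad) * dy
        y_h = math.sin(-head_yaw_rad) * dx + math.cos(-head_yaw_rad) * dy
        if y_h <= 0:
            return None          # anchor is behind the user; nothing to draw
        return focal_px * x_h / y_h   # horizontal pixel offset from display center

    # The keyboard is anchored 1 m straight ahead on the table when the gesture is made.
    keyboard_anchor = (0.0, 1.0)
    head_position = (0.0, 0.0)

    # As the user turns their head to the left, the keyboard drifts to the right
    # of the display so that it stays over the same spot on the table.
    for yaw_deg in (0, 10, 20):
        offset = world_to_display(keyboard_anchor, head_position, math.radians(yaw_deg))
        print(f"yaw {yaw_deg:>2} deg -> keyboard drawn at {offset:+.1f} px from center")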
[0275] In embodiments, eyepiece facilities may provide for removing
the portions of a virtual keyboard projection where intervening
obstructions appear (e.g. the user's hand getting in the way, where
it is not desired to project the keyboard onto the user's hand). In
an example, and referring to FIG. 30, the eyepiece 6202 may provide
a projected virtual keyboard 6208 to the wearer, such as onto a
tabletop. The wearer may then reach `over` the virtual keyboard
6208 to type. As the keyboard is merely a projected virtual
keyboard, rather than a physical keyboard, without some sort of
compensation to the projected image, the projected virtual keyboard
would be projected `onto` the back of the user's hand. However, as
in this example, the eyepiece may provide compensation to the
projected image such that the portion of the wearer's hand 6204
that is obstructing the intended projection of the virtual keyboard
onto the table may be removed from the projection. That is, it may
not be desirable for portions of the keyboard projection 6208 to be
visualized onto the user's hand, and so the eyepiece subtracts the
portion of the virtual keyboard projection that is co-located with
the wearer's hand 6204. In embodiments, the user may wear the
interactive head-mounted eyepiece, where the eyepiece includes an
optical assembly through which the user views a surrounding
environment and displayed content. The optical assembly may include
a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly. The displayed
content may include an interactive control element (e.g. virtual
keyboard, virtual mouse, calculator, touch screen, and the like).
An integrated camera facility may image a user's body part as it
interacts with the interactive control element, wherein the
processor removes a portion of the interactive control element by
subtracting the portion of the interactive control element that is
determined to be co-located with the imaged user body part based on
the user's view. In embodiments, this technique of partial
projected image removal may be applied to other projected images
and obstructions, and is not meant to be restricted to this example
of a hand over a virtual keyboard.
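The subtraction described above can be viewed as masking: wherever the
camera-derived hand segmentation overlaps the keyboard's projection
region, those keyboard pixels are suppressed before display. The sketch
below operates on an already-registered boolean hand mask, which is
assumed to be available from the integrated camera facility; the array
sizes are assumed toy values.

    import numpy as np

    def remove_occluded_region(keyboard_layer, hand_mask):
        """Suppress the portion of the rendered keyboard layer that is co-located
        with the user's hand, so the keyboard is not drawn onto the hand.
        keyboard_layer: HxWx4 RGBA image of the virtual keyboard.
        hand_mask: HxW boolean mask, True where the hand is seen by the camera
        (already registered to display coordinates)."""
        out = keyboard_layer.copy()
        out[hand_mask, 3] = 0          # zero the alpha channel over the hand
        return out

    # Example: a 6x8 keyboard layer and a hand covering the lower-right corner.
    keyboard = np.full((6, 8, 4), 255, dtype=np.uint8)
    hand = np.zeros((6, 8), dtype=bool)
    hand[3:, 5:] = True
    visible = remove_occluded_region(keyboard, hand)
    print(visible[..., 3])             # alpha is 0 where the hand obstructs the projection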
[0276] In embodiments, eyepiece facilities may provide for the
ability to determine an intended text input from a sequence of
character contacts swiped across a virtual keypad, such as with the
finger, a stylus, and the like. For example, and referring to FIG.
37, the eyepiece may be projecting a virtual keyboard 6302, where
the user wishes to input the word `wind`. Normally, the user would
discretely press the key positions for `w`, then `i`, then `n`, and
finally `d`, and a facility (camera, accelerometer, and the like,
such as described herein) associated with the eyepiece would
interpret each position as being the letter for that position.
However, the system may also be able to monitor the movement, or
swipe, of the user's finger or other pointing device across the
virtual keyboard and determine best fit matches for the pointer
movement. In the figure, the pointer has started at the character
`w` and swept a path 6304 though the characters e, r, t, y, u, i,
k, n, b, v, f, and d where it stops. The eyepiece may observe this
sequence and determine the sequence through an input path analyzer,
feed the sensed sequence into a word matching search facility, and
output a best fit word, in this case `wind` as text 6308. In
embodiments, the eyepiece may provide the best-fit word, a listing
of best-fit words, and the like. In embodiments, the user may wear
the interactive head-mounted eyepiece, where the eyepiece includes
an optical assembly through which the user views a surrounding
environment and displayed content. The optical assembly may include
a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling
content for display to the user, and an integrated image source for
introducing the content to the optical assembly. The displayed
content may comprise an interactive keyboard control element (e.g.
a virtual keyboard, calculator, touch screen, and the like), and
where the keyboard control element is associated with an input path
analyzer, a word matching search facility, and a keyboard input
interface. The user may input text by sliding a pointing device
(e.g. a finger, a stylus, and the like) across character keys of
the keyboard input interface in a sliding motion through an
approximate sequence of a word the user would like to input as
text, wherein the input path analyzer determines the characters
contacted in the input path, and the word matching facility finds a
best word match to the sequence of characters contacted and inputs
the best word match as input text.
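One simple way to realize the word matching described above is to
accept a dictionary word when its letters appear in order along the
sensed key path, beginning and ending on the path's endpoints, and to
prefer longer matches. The sketch below uses an assumed toy dictionary;
a practical facility would use a full lexicon and richer scoring.

    def is_subsequence(word, path):
        """True if the word's letters appear, in order, within the swiped key path."""
        it = iter(path)
        return all(ch in it for ch in word)

    def best_word_match(path, dictionary):
        """Return the dictionary word that best explains the swiped path:
        it must start and end on the path's first and last keys, have all of
        its letters on the path in order, and be as long as possible."""
        candidates = [
            w for w in dictionary
            if w and w[0] == path[0] and w[-1] == path[-1] and is_subsequence(w, path)
        ]
        return max(candidates, key=len) if candidates else None

    # Example from the figure: the pointer starts on 'w', sweeps through neighboring
    # keys, and stops on 'd'; the analyzer recovers the intended word 'wind'.
    sensed_path = "wertyuiknbvfd"
    dictionary = ["wind", "wand", "word", "weed", "we"]
    print(best_word_match(sensed_path, dictionary))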
[0277] In embodiments, eyepiece facilities may provide for
presenting displayed content corresponding to an identified marker
indicative of the intention to display the content. That is, the
eyepiece may be commanded to display certain content based upon
sensing a predetermined external visual cue. The visual cue may be
an image, an icon, a picture, face recognition, a hand
configuration, a body configuration, and the like. The displayed
content may be an interface device that is brought up for use, a
navigation aid to help the user find a location once they get to
some travel location, an advertisement when the eyepiece views a
target image, an informational profile, and the like. In
embodiments, visual marker cues and their associated content for
display may be stored in memory on the eyepiece, in an external
computer storage facility and imported as needed (such as by
geographic location, proximity to a trigger target, command by the
user, and the like), generated by a third-party, and the like. In
embodiments, the user may wear the interactive head-mounted
eyepiece, where the eyepiece includes an optical assembly through
which the user views a surrounding environment and displayed
content. The optical assembly may include a corrective element that
corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user,
and an integrated image source for introducing the content to the
optical assembly. An integrated camera facility may be provided
that images an external visual cue, wherein the integrated
processor identifies and interprets the external visual cue as a
command to display content associated with the visual cue.
Referring to FIG. 38, in embodiments the visual cue 6412 may be
included in a sign 6414 in the surrounding environment, where the
projected content is associated with an advertisement. The sign may
be a billboard, and the advertisement for a personalized
advertisement based on a preferences profile of the user. The
visual cue 6402, 6410 may be a hand gesture, and the projected
content a projected virtual keyboard 6404, 6408. For instance, the
hand gesture may be a thumb and index finger gesture 6402 from a
first user hand, and the virtual keyboard 6404 projected on the
palm of the first user hand, and where the user is able to type on
the virtual keyboard with a second user hand. The hand gesture 6410
may be a thumb and index finger gesture combination of both user
hands, and the virtual keyboard 6408 projected between the user
hands as configured in the hand gesture, where the user is able to
type on the virtual keyboard using the thumbs of the user's hands.
Visual cues may provide the wearer of the eyepiece with an
automated resource for associating a predetermined external visual
cue with a desired outcome in the way of projected content, thus
freeing the wearer from searching for the cues themselves.
[0278] The eyepiece may be useful for various applications and
markets. It should be understood that the control mechanisms
described herein may be used to control the functions of the
applications described herein. The eyepiece may run a single
application at a time or multiple applications may run at a time.
Switching between applications may be done with the control
mechanisms described herein. The eyepiece may be used in military
applications, gaming, image recognition applications, to view/order
e-books, GPS Navigation (Position, Direction, Speed and ETA),
Mobile TV, athletics (view pacing, ranking, and competition times;
receive coaching), telemedicine, industrial inspection, aviation,
shopping, inventory management tracking, firefighting (enabled by
a VIS/NIR/SWIR sensor that sees through fog, haze, and dark),
outdoor/adventure, custom advertising, and the like. In an
embodiment, the eyepiece may be used with e-mail, such as GMAIL in
FIG. 7, the Internet, web browsing, viewing sports scores, video
chat, and the like. In an embodiment, the eyepiece may be used for
educational/training purposes, such as by displaying step by step
guides, such as hands-free, wireless maintenance and repair
instructions. For example, a video manual and/or instructions may
be displayed in the field of view. In an embodiment, the eyepiece
may be used in Fashion, Health, and Beauty. For example, potential
outfits, hairstyles, or makeup may be projected onto a mirror image
of a user. In an embodiment, the eyepiece may be used in Business
Intelligence, Meetings, and Conferences. For example, a user's name
tag can be scanned, their face run through a facial recognition
system, or their spoken name searched in a database to obtain
biographical information. Scanned name tags, faces, and
conversations may be recorded for subsequent viewing or filing.
[0279] In an embodiment, a "Mode" may be entered by the eyepiece.
In the mode, certain applications may be available. For example, a
consumer version of the eyepiece may have a Tourist Mode,
Educational Mode, Internet Mode, TV Mode, Gaming Mode, Exercise
Mode, Stylist Mode, Personal Assistant Mode, and the like.
[0280] A user of the augmented reality glasses may wish to
participate in video calling or video conferencing while wearing
the glasses. Many computers, both desktop and laptop have
integrated cameras to facilitate using video calling and
conferencing. Typically, software applications are used to
integrate use of the camera with calling or conferencing features.
With the augmented reality glasses providing much of the
functionality of laptops and other computing devices, many users
may wish to utilize video calling and video conferencing while on
the move wearing the augmented reality glasses.
[0281] In an embodiment, a video calling or video conferencing
application may work with a WiFi connection, or may be part of a 3G
or 4G calling network associated with a user's cell phone. The
camera for video calling or conferencing is placed on a device
controller, such as a watch or other separate electronic computing
device. Placing the video calling or conferencing camera on the
augmented reality glasses is not feasible, as such placement would
provide the user with a view only of themselves, and would not
display the other participants in the conference or call. However,
the user may choose to use the forward-facing camera to display
their surroundings or another individual in the video call.
[0282] FIG. 32 depicts a typical camera 3200 for use in video
calling or conferencing. Such cameras are typically small and could
be mounted on a watch 3202, as shown in FIG. 32, cell phone or
other portable computing device, including a laptop computer. Video
calling works by connecting the device controller with the cell
phone or other communications device. The devices utilize software
compatible with the operating system of the glasses and the
communications device or computing device. In an embodiment, the
screen of the augmented reality glasses may display a list of
options for making the call and the user may gesture using a
pointing control device or use any other control technique
described herein to select the video calling option on the screen
of the augmented reality glasses.
[0283] FIG. 33 illustrates an embodiment of a block diagram of a
video calling camera. The camera incorporates a lens 3302, a
CCD/CMOS sensor 3304, analog to digital converters for video
signals, 3306, and audio signals, 3314. Microphone 3312 collects
audio input. Both analog to digital converters 3306 and 3314 send
their output signals to a signal enhancement module 3308. The
signal enhancement module 3308 forwards the enhanced signal, which
is a composite of both video and audio signals to interface 3310.
Interface 3310 is connected to an IEEE 1394 standard bus interface,
along with a control module 3316.
[0284] In operation, the video call camera depends on signal
capture, which transforms incident light, as well as incident
sound, into electrical signals. For light this process is performed
by the CCD or CMOS chip 3304. The microphone transforms sound into
electrical impulses.
[0285] The first step in the process of generating an image for a
video call is to digitize the image. The CCD or CMOS chip 3304
dissects the image and converts it into pixels. If a pixel has
collected many photons, the voltage will be high. If the pixel has
collected few photons, the voltage will be low. This voltage is an
analog value. During the second step of digitization, the voltage
is transformed into a digital value by the analog to digital
converter 3306, which handles image processing. At this point, a
raw digital image is available.
[0286] Audio captured by the microphone 3312 is also transformed
into a voltage. This voltage is sent to the analog to digital
converter 3314 where the analog values are transformed into digital
values.
[0287] The next step is to enhance the signal so that it may be
sent to viewers of the video call or conference. Signal enhancement
includes creating color in the image using a color filter, located
in front of the CCD or CMOS chip 3304. This filter is red, green,
or blue and changes its color from pixel to pixel, and in an
embodiment, may be a color filter array, or Bayer filter. These raw
digital images are then enhanced by the filter to meet aesthetic
requirements. Audio data may also be enhanced for a better calling
experience.
[0288] In the final step before transmission, the image and audio
data are compressed and output as a digital video stream, in an
embodiment using a digital video camera. If a photo camera is used,
single images may be output, and in a further embodiment, voice
comments may be appended to the files. The enhancement of the raw
digital data takes place away from the camera, and in an embodiment
may occur in the device controller or computing device that the
augmented reality glasses communicate with during a video call or
conference.
[0289] Further embodiments may provide for portable cameras for use
in industry, medicine, astronomy, microscopy, and other fields
requiring specialized camera use. These cameras often forgo signal
enhancement and output the raw digital image. These cameras may be
mounted on other electronic devices or the user's hand for ease of
use.
[0290] The camera interfaces to the augmented reality glasses and
the device controller or computing device using an IEEE 1394
interface bus. This interface bus transmits time critical data,
such as video, and data whose integrity is critically important,
including parameters or files to manipulate data or transfer
images.
[0291] In addition to the interface bus, protocols define the
behavior of the devices associated with the video call or
conference. The camera for use with the augmented reality glasses,
may, in embodiments, employ one of the following protocols: AV/C,
DCAM, or SBP-2.
[0292] AV/C is a protocol for Audio Video Control and defines the
behavior of digital video devices, including video cameras and
video recorders.
[0293] DCAM refers to the 1394 based Digital Camera Specification
and defines the behavior of cameras that output uncompressed image
data without audio.
[0294] SBP-2 refers to Serial Bus Protocol 2 and defines the behavior
of mass storage devices, such as hard drives or disks.
[0295] Devices that use the same protocol are able to communicate
with each other. Thus, for video calling using the augmented
reality glasses, the same protocol may be used by the video camera
on the device controller and the augmented reality glasses. Because
the augmented reality glasses, device controller, and camera use
the same protocol, data may be exchanged among these devices. Files
that may be transferred among devices include: image and audio
files, image and audio data flows, parameters to control the
camera, and the like.
[0296] In an embodiment, a user desiring to initiate a video call
may select a video call option from a screen presented when the
call process is initiated. The user makes the selection with a
pointing device, or with a gesture that signals selection of the
video call option. The user then positions the camera located on
the device controller, wristwatch, or other separable electronic
device so that the user's image is captured by the camera. The
image is processed through the process described above and is then
streamed to the augmented reality glasses and the other
participants for display to the users.
[0297] In embodiments, the camera may be mounted on a cell phone,
personal digital assistant, wristwatch, pendant, or other small
portable device capable of being carried, worn, or mounted. The
images or video captured by the camera may be streamed to the
eyepiece.
[0298] In embodiments, the present disclosure may provide the
wearer with GPS-based content reception, as in FIG. 6. As noted,
augmented reality glasses of the present disclosure may include
memory, a global positioning system, a compass or other orienting
device, and a camera. GPS-based computer programs available to the
wearer may include a number of applications typically available
from the Apple Inc. App Store for iPhone use. Similar versions of
these programs are available for other brands of smartphones and
may be applied to embodiments of the present disclosure. These
programs include, for example, SREngine (scene recognition engine),
NearestTube, TAT Augmented ID, Yelp, Layar, and TwittARound, as
well as other more specialized applications, such as RealSki.
[0299] SREngine is a scene recognition engine that is able to
identify objects viewed by the user's camera. It is a software
engine able to recognize static scenes, such as scenes of
architecture, structures, pictures, objects, rooms, and the like.
It is then able to automatically apply a virtual "label" to the
structures or objects according to what it recognizes. For example,
the program may be called up by a user of the present disclosure
when viewing a street scene, such as FIG. 6. Using a camera of the
augmented reality glasses, the engine will recognize the Fontaines
de la Concorde in Paris. The program will then summon a virtual
label, shown in FIG. 6 as part of a virtual image 618 projected
onto the lens 602. The label may be text only, as seen at the
bottom of the image 618. Other labels applicable to this scene may
include "fountain," "museum," "hotel," or the name of the columned
building in the rear. Other programs of this type may include the
Wikitude AR Travel Guide, Yelp and many others.
[0300] NearestTube, for example, uses the same technology to direct
a user to the closest subway station in London, and other programs
may perform the same function, or similar, in other cities. Layar
is another application that uses the camera, a compass or other
direction sensor, and GPS data to identify a user's location and field of
view. With this information, an overlay or label may appear
virtually to help orient and guide the user. Yelp and Monocle
perform similar functions, but their databases are somewhat more
specialized, helping to direct users in a similar manner to
restaurants or to other service providers.
[0301] The user may control the glasses, and call up these
functions, using any of the controls described in this patent. For
example, the glasses may be equipped with a microphone to pick up
voice commands from a user and process them using software
contained within a memory of the glasses. The user may then respond
to prompts from small speakers or earbuds also contained within the
glasses frame. The glasses may also be equipped with a tiny track
pad, similar to those found on smartphones. The trackpad may allow
a user to move a pointer or indicator on the virtual screen within
the AR glasses, similar to a touch screen. When the user reaches a
desired point on the screen, the user depresses the track pad to
indicate his or her selection. Thus, a user may call up a program,
e.g., a travel guide, and then find his or her way through several
menus, perhaps selecting a country, a city and then a category. The
category selections may include, for example, hotels, shopping,
museums, restaurants, and so forth. The user makes his or her
selections and is then guided by the AR program. In one embodiment,
the glasses also include a GPS locator, and the present country and
city provide default locations that may be overridden.
[0302] In an embodiment, the eyepiece's object recognition software
may process the images being received by the eyepiece's forward
facing camera in order to determine what is in the field of view.
In other embodiments, the GPS coordinates of the location as
determined by the eyepiece's GPS may be enough to determine what is
in the field of view. In other embodiments, an RFID or other beacon
in the environment may be broadcasting a location. Any one or
combination of the above may be used by the eyepiece to identify
the location and the identity of what is in the field of view.
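A minimal sketch of how such cues might be combined is shown below; the tuple format and the confidence-weighted voting are assumptions made for illustration, not a description of the eyepiece's actual software.

```python
def identify_field_of_view(cues):
    """Fuse available location cues into a single identification.

    `cues` is a list of (source, label, confidence) tuples, for example
    [("recognition", "Eiffel Tower", 0.9), ("beacon", "Eiffel Tower", 1.0),
     ("gps", "Champ de Mars", 0.6)].  Labels supported by several sources
    accumulate confidence, and the highest-scoring label is returned.
    """
    scores = {}
    for _source, label, confidence in cues:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get) if scores else None
```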
[0303] When an object is recognized, the resolution for imaging
that object may be increased or images or video may be captured at
low compression. Additionally, the resolution for other objects in
the user's view may be decreased, or captured at a higher
compression rate in order to decrease the needed bandwidth.
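One way this adaptive capture could be expressed is sketched below; the scale factors and JPEG quality values are arbitrary placeholders, not parameters taken from the disclosure.

```python
def capture_settings(regions, recognized):
    """Give the recognized object full resolution and light compression, and
    capture every other region coarsely to reduce the needed bandwidth."""
    settings = {}
    for region in regions:
        if region == recognized:
            settings[region] = {"scale": 1.0, "jpeg_quality": 95}   # high detail
        else:
            settings[region] = {"scale": 0.5, "jpeg_quality": 40}   # heavier compression
    return settings

# e.g. capture_settings(["Eiffel Tower", "sky", "street"], recognized="Eiffel Tower")
```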
[0304] Once determined, content related to points of interest in
the field of view may be overlaid on the real world image, such as
social networking content, interactive tours, local information,
and the like. Information and content related to movies, local
information, weather, restaurants, restaurant availability, local
events, local taxis, music, and the like may be accessed by the
eyepiece and projected on to the lens of the eyepiece for the user
to view and interact with. For example, as the user looks at the
Eiffel Tower, the forward facing camera may take an image and send
it for processing to the eyepiece's associated processor. Object
recognition software may determine that the structure in the
wearer's field of view is the Eiffel Tower. Alternatively, the GPS
coordinates determined by the eyepiece's GPS may be searched in a
database to determine that the coordinates match those of the
Eiffel Tower. In any event, content may then be searched relating
to the Eiffel Tower visitor's information, restaurants in the
vicinity and in the Tower itself, local weather, local Metro
information, local hotel information, other nearby tourist spots,
and the like. Interacting with the content may be enabled by the
control mechanisms described herein. In an embodiment, GPS-based
content reception may be enabled when a Tourist Mode of the
eyepiece is entered.
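A compact sketch of the GPS-to-landmark lookup described above follows; the point-of-interest table, the 300 m matching radius, and the haversine helper are assumptions introduced only to make the example self-contained.

```python
from math import asin, cos, radians, sin, sqrt

POI_DB = {  # illustrative coordinates
    "Eiffel Tower": (48.8584, 2.2945),
    "Louvre": (48.8606, 2.3376),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def match_poi(gps_fix, max_km=0.3):
    """Return the point of interest closest to the eyepiece's GPS fix,
    provided it lies within max_km; otherwise return None."""
    name, dist = min(((n, haversine_km(gps_fix, c)) for n, c in POI_DB.items()),
                     key=lambda pair: pair[1])
    return name if dist <= max_km else None

# match_poi((48.8583, 2.2944)) -> "Eiffel Tower"
```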
[0305] In an embodiment, the eyepiece may be used to view streaming
video. For example, videos may be identified via search by GPS
location, search by object recognition of an object in the field of
view, a voice search, a holographic keyboard search, and the like.
Continuing with the example of the Eiffel Tower, a video database
may be searched via the GPS coordinates of the Tower or by the term
`Eiffel Tower` once it has been determined that is the structure in
the field of view. Search results may include geo-tagged videos or
videos associated with the Eiffel Tower. The videos may be scrolled
or flipped through using the control techniques described herein.
Videos of interest may be played using the control techniques
described herein. The video may be laid over the real world scene
or may be displayed on the lens out of the field of view. In an
embodiment, the eyepiece may be darkened via the mechanisms
described herein to enable higher contrast viewing. In another
example, the eyepiece may be able to utilize a camera and network
connectivity, such as described herein, to provide the wearer with
streaming video conferencing capabilities.
[0306] As noted, the user of augmented reality may receive content
from an abundance of sources. A visitor or tourist may desire to
limit the choices to local businesses or institutions; on the other
hand, businesses seeking out visitors or tourists may wish to limit
their offers or solicitations to persons who are in their area or
location but who are visiting rather than local residents. Thus, in
one embodiment, the visitor or tourist may limit his or her search
only to local businesses, say those within certain geographic
limits. These limits may be set via GPS criteria or by manually
indicating a geographic restriction. For example, a person may
require that sources of streaming content or ads be limited to
those within a certain radius (a set number of km or miles) of the
person. Alternatively, the criteria may require that the sources
are limited to those within a certain city or province. These
limits may be set by the augmented reality user just as a user of a
computer at a home or office would limit his or her searches using
a keyboard or a mouse; the entries for augmented reality users are
simply made by voice, by hand motion, or other ways described
elsewhere in the portions of this disclosure discussing
controls.
[0307] In addition, the available content chosen by a user may be
restricted or limited by the type of provider. For example, a user
may restrict choices to those with a website operated by a
government institution (.gov) or by a non-profit institution or
organization (.org). In this way, a tourist or visitor who may be
more interested in visiting government offices, museums, historical
sites and the like, may find his or her choices less cluttered. The
person may be more easily able to make decisions when the available
choices have been pared down to a more reasonable number. The
ability to quickly cut down the available choices is desirable in
more urban areas, such as Paris or Washington, D.C., where there
are many choices.
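Both kinds of limits, a geographic radius and a provider type, amount to a simple filter over candidate content sources. The sketch below assumes each source carries a precomputed distance and a domain name; the field names and suffix list are illustrative only.

```python
def filter_sources(sources, max_km=None, allowed_suffixes=None):
    """Keep only the content sources that satisfy the wearer's limits.

    Each source is a dict such as
    {"name": "City Museum", "domain": "citymuseum.org", "distance_km": 1.2}.
    `max_km` bounds the distance from the wearer; `allowed_suffixes`
    (for example (".gov", ".org")) restricts the type of provider.
    """
    kept = []
    for src in sources:
        if max_km is not None and src["distance_km"] > max_km:
            continue
        if allowed_suffixes and not src["domain"].endswith(tuple(allowed_suffixes)):
            continue
        kept.append(src)
    return kept
```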
[0308] The user controls the glasses in any of the manners or modes
described elsewhere in this patent. For example, the user may call
up a desired program or application by voice or by indicating a
choice on the virtual screen of the augmented reality glasses. The
augmented glasses may respond to a track pad mounted on the frame
of the glasses, as described above. Alternatively, the glasses may
be responsive to one or more motion or position sensors mounted on
the frame. The signals from the sensors are then sent to a
microprocessor or microcontroller within the glasses, the glasses
also providing any needed signal transducing or processing. Once
the program of choice has begun, the user makes selections and
enters a response by any of the methods discussed herein, such as
signaling "yes" or "no" with a head movement, a hand gesture, a
trackpad depression, or a voice command.
[0309] At the same time, content providers, that is, advertisers,
may also wish to restrict their offerings to persons who are within
a certain geographic area, e.g., their city limits. Conversely, an
advertiser, perhaps a museum, may not wish to offer
content to local persons, but may wish to reach visitors or
out-of-towners. The augmented reality devices discussed herein are
desirably equipped with both GPS capability and telecommunications
capability. It will be a simple matter for the museum to provide
streaming content within a limited area by limiting its broadcast
power. The museum, however, may provide the content through the
Internet and its content may be available world-wide. In this
instance, a user may receive content through an augmented reality
device advising that the museum is open today and is available for
touring.
[0310] The user may respond to the content by the augmented reality
equivalent of clicking on a link for the museum. The augmented
reality equivalent may be a voice indication, a hand or eye
movement, or other sensory indication of the user's choice, or by
using an associated body-mounted controller. The museum then
receives a cookie indicating the identity of the user or at least
the user's internet service provider (ISP). If the cookie indicates
or suggests an internet service provider other than local
providers, the museum server may then respond with advertisements
or offers tailored to visitors. The cookie may also include an
indication of a telecommunications link, e.g., a telephone number.
If the telephone number is not a local number, this is an
additional clue that the person responding is a visitor. The museum
or other institution may then follow up with the content desired or
suggested by its marketing department.
[0311] Another application of the augmented reality eyepiece takes
advantage of a user's ability to control the eyepiece and its tools
with a minimum use of the user's hands, using instead voice
commands, gestures or motions. As noted above, a user may call upon
the augmented reality eyepiece to retrieve information. This
information may already be stored in a memory of the eyepiece, but
may instead be located remotely, such as a database accessible over
the Internet or perhaps via an intranet which is accessible only to
employees of a particular company or organization. The eyepiece may
thus be compared to a computer or to a display screen which can be
viewed and heard at an extremely close range and generally
controlled with a minimal use of one's hands.
[0312] Applications may thus include providing information
on-the-spot to a mechanic or electronics technician. The technician
can don the glasses when seeking information about a particular
structure or problem encountered, for example, when repairing an
engine or a power supply. Using voice commands, he or she may then
access the database and search within the database for particular
information, such as manuals or other repair and maintenance
documents. The desired information may thus be promptly accessed
and applied with a minimum of effort, allowing the technician to
more quickly perform the needed repair or maintenance and to return
the equipment to service. For mission-critical equipment, such time
savings may also save lives, in addition to saving repair or
maintenance costs.
[0313] The information imparted may include repair manuals and the
like, but may also include a full range of audio-visual
information, i.e., the eyepiece screen may display to the
technician or mechanic a video of how to perform a particular task
at the same time the person is attempting to perform the task. The
augmented reality device also includes telecommunications
capabilities, so the technician also has the ability to call on
others to assist if there is some complication or unexpected
difficulty with the task. This educational aspect of the present
disclosure is not limited to maintenance and repair, but may be
applied to any educational endeavor, such as secondary or
post-secondary classes, continuing education courses or topics,
seminars, and the like.
[0314] In an embodiment, a Wi-Fi enabled eyepiece may run a
location-based application for geo-location of opted-in users.
Users may opt-in by logging into the application on their phone and
enabling broadcast of their location, or by enabling geo-location
on their own eyepiece. As a wearer of the eyepiece scans people,
and thus their opted-in device, the application may identify
opted-in users and send an instruction to the projector to project
an augmented reality indicator on an opted-in user in the user's
field of view. For example, green rings may be placed around people
who have opted-in to have their location seen. In another example,
yellow rings may indicate people who have opted-in but don't meet
some criteria, such as not having a FACEBOOK account, or having no
mutual friends with the wearer if they do have a FACEBOOK account.
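The ring-selection logic can be pictured as follows; the dictionary fields and the particular criteria (a FACEBOOK account with at least one mutual friend) mirror the example above but are otherwise assumptions made for the sketch.

```python
def indicator_for(person, wearer):
    """Choose the augmented reality ring drawn around a scanned person.

    Returns "green" for an opted-in person meeting the criteria, "yellow" for
    an opted-in person who does not, and None when the person has not opted in.
    """
    if not person.get("opted_in"):
        return None
    has_account = person.get("facebook_id") is not None
    mutual = set(person.get("friends", [])) & set(wearer.get("friends", []))
    return "green" if has_account and mutual else "yellow"
```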
[0315] Some social networking, career networking, and dating
applications may work in concert with the location-based
application. Software resident on the eyepiece may coordinate data
from the networking and dating sites and the location-based
application. For example, TwittARound is one such program which
makes use of a mounted camera to detect and label location-stamped
tweets from other tweeters nearby. This will enable a person using
the present disclosure to locate other nearby Twitter users.
Alternatively, users may have to set their devices to coordinate
information from various networking and dating sites. For example,
the wearer of the eyepiece may want to see all E-HARMONY users who
are broadcasting their location. If an opted-in user is identified
by the eyepiece, an augmented reality indicator may be laid over
the opted-in user. The indicator may take on a different appearance
depending on whether the opted-in user has something in common with
the wearer, many things in common with the wearer, and the like. For example, and referring to
FIG. 16, two people are being viewed by the wearer. Both of the
people are identified as E-HARMONY users by the rings placed around
them. However, the woman shown with solid rings has more than one
item in common with the wearer while the woman shown with dotted
rings has no items in common with the wearer. Any available profile
information may get accessed and displayed to the user.
[0316] In an embodiment, when the wearer directs the eyepiece in
the direction of a user who has a networking account, such as
FACEBOOK, TWITTER, BLIPPY, LINKEDIN, GOOGLE, WIKIPEDIA, and the
like, the user's recent posts or profile information may be
displayed to the wearer. For example, recent status updates,
"tweets", "blips", and the like may get displayed, as mentioned
above for TwittARound. In an embodiment, when the wearer points the
eyepiece in a target user's direction, they may indicate interest
in the user if the eyepiece is pointed for a duration of time
and/or a gesture, head, eye, or audio control is activated. The
target user may receive an indication of interest on their phone or
in their glasses. If the target user had marked the wearer as
interesting but was waiting on the wearer to show interest first,
an indication may immediately pop up in the eyepiece of the target
user's interest. A control mechanism may be used to capture an
image and store the target user's information on associated
non-volatile memory or in an online account.
[0317] In other applications for social networking, a facial
recognition program, such as TAT Augmented ID, from TAT--The
Astonishing Tribe, Malmo, Sweden, may be used. Such a program may
be used to identify a person by his or her facial characteristics.
Using other applications, such as photo identifying
software from Flickr, one can then identify the particular nearby
person, and one can then download information from social
networking sites with information about the person. This
information may include the person's name and the profile the
person has made available on sites such as Facebook, Twitter, and
the like. This application may be used to refresh a user's memory
of a person or to identify a nearby person, as well as to gather
information about the person.
[0318] In other applications for social networking, the wearer may
be able to utilize location-based facilities of the eyepiece to
leave notes, comments, reviews, and the like, at locations, in
association with people, places, products, and the like. For
example, a person may be able to post a comment on a place they
visited, where the posting may then be made available to others
through the social network. In another example, a person may be
able to post that comment at the location of the place such that
the comment is available when another person comes to that
location. In this way, a wearer may be able to access comments left
by others when they come to the location. For instance, a wearer
may come to the entrance to a restaurant, and be able to access
reviews for the restaurant, such as sorted by some criteria (e.g.
most recent review, age of reviewer, and the like).
[0319] A user may initiate the desired program by voice, by
selecting a choice from a virtual touchscreen, as described above,
by using a trackpad to select and choose the desired program, or by
any of the control techniques described herein. Menu selections may
then be made in a similar or complementary manner. Sensors or input
devices mounted in convenient locations on the user's body may also
be used, e.g., sensors and a track pad mounted on a wrist pad, on a
glove, or even a discreet device, perhaps of the size of a smart
phone or a personal digital assistant.
[0320] Applications of the present disclosure may provide the
wearer with Internet access, such as for browsing, searching,
shopping, entertainment, and the like, such as through a wireless
communications interface to the eyepiece. For instance, a wearer
may initiate a web search with a control gesture, such as through a
control facility worn on some portion of the wearer's body (e.g. on
the hand, the head, the foot), on some component being used by the
wearer (e.g. a personal computer, a smart phone, a music player),
on a piece of furniture near the wearer (e.g. a chair, a desk, a
table, a lamp), and the like, where the image of the web search is
projected for viewing by the wearer through the eyepiece. The
wearer may then view the search through the eyepiece and control
web interaction through the control facility.
[0321] In an example, a user may be wearing an embodiment
configured as a pair of glasses, with the projected image of an
Internet web browser provided through the glasses while retaining
the ability to simultaneously view at least portions of the
surrounding real environment. In this instance, the user may be
wearing a motion sensitive control facility on their hand, where
the control facility may transmit relative motion of the user's
hand to the eyepiece as control motions for web control, such as
similar to that of a mouse in a conventional personal computer
configuration. It is understood that the user would be enabled to
perform web actions in a similar fashion to that of a conventional
personal computer configuration. In this case, the image of the web
search is provided through the eyepiece while control for selection
of actions to carry out the search is provided through motions of
the hand. For instance, the overall motion of the hand may move a
cursor within the projected image of the web search, the flick of
the finger(s) may provide a selection action, and so forth. In this
way, the wearer may be enabled to perform the desired web search,
or any other Internet browser-enabled function, through an
embodiment connected to the Internet. In one example, a user may
have downloaded computer programs Yelp or Monocle, available from
the App Store, or a similar product, such as NRU ("near you"), an
application from Zagat to locate nearby restaurants or other
stores, Google Earth, Wikipedia, or the like. The person may
initiate a search, for example, for restaurants, or other providers
of goods or services, such as hotels, repairmen, and the like, or
information. When the desired information is found, locations are
displayed or a distance and direction to a desired location is
displayed. The display may take the form of a virtual label
co-located with the real world object in the user's view.
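A rough sketch of the mouse-like mapping described in this example is given below; the flick-speed threshold and the class interface are assumptions, and a real control facility would involve sensor fusion and filtering beyond this illustration.

```python
class HandCursor:
    """Translate relative hand motion from a worn motion sensor into cursor
    motion on the projected page, with a quick flick acting as a selection."""

    FLICK_SPEED = 2.5   # assumed threshold, in screen-widths per second

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x, self.y = width / 2.0, height / 2.0

    def update(self, dx, dy, dt):
        """Apply a relative motion (dx, dy) in pixels sampled over dt seconds;
        return "select" for a flick, otherwise "move"."""
        self.x = min(max(self.x + dx, 0), self.width)
        self.y = min(max(self.y + dy, 0), self.height)
        speed = abs(dx) / (self.width * dt) if dt else 0.0
        return "select" if speed > self.FLICK_SPEED else "move"
```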
[0322] Other applications from Layar (Amsterdam, the Netherlands)
include a variety of "layers" tailored for specific information
desired by a user. A layer may include restaurant information,
information about a specific company, real estate listings, gas
stations, and so forth. Using the information provided in a
software application, such as a mobile application and a user's
global positioning system (GPS), information may be presented on a
screen of the glasses with tags having the desired information.
Using the haptic controls or other control discussed elsewhere in
this disclosure, a user may pivot or otherwise rotate his or her
body and view buildings tagged with virtual tags containing
information. If the user seeks restaurants, the screen will display
restaurant information, such as name and location. If a user seeks
a particular address, virtual tags will appear on buildings in the
field of view of the wearer. The user may then make selections or
choices by voice, by trackpad, by virtual touch screen, and so
forth.
[0323] Applications of the present disclosure may provide a way for
advertisements to be delivered to the wearer. For example,
advertisements may be displayed to the viewer through the eyepiece
as the viewer is going about his or her day, while browsing the
Internet, conducting a web search, walking through a store, and the
like. For instance, the user may be performing a web search, and
through the web search the user is targeted with an advertisement.
In this example, the advertisement may be projected in the same
space as the projected web search, floating off to the side, above,
or below the view angle of the wearer. In another example,
advertisements may be triggered for delivery to the eyepiece when
some advertising providing facility, perhaps one in proximity to
the wearer, senses the presence of the eyepiece (e.g. through a
wireless connection, RFID, and the like), and directs the
advertisement to the eyepiece.
[0324] For example, the wearer may be window-shopping in Manhattan,
where stores are equipped with such advertising providing
facilities. As the wearer walks by the stores, the advertising
providing facilities may trigger the delivery of an advertisement
to the wearer based on a known location of the user determined by
an integrated location sensor of the eyepiece, such as a GPS. In an
embodiment, the location of the user may be further refined via
other integrated sensors, such as a magnetometer to enable
hyperlocal augmented reality advertising. For example, a user on a
ground floor of a mall may receive certain advertisements if the
magnetometer and GPS readings place the user in front of a
particular store. When the user goes up one flight in the mall, the
GPS location may remain the same, but the magnetometer reading may
indicate a change in elevation of the user and a new placement of
the user in front of a different store. In embodiments, one may
store personal profile information such that the advertising
providing facility is able to better match advertisements to the
needs of the wearer, the wearer may provide preferences for
advertisements, the wearer may block at least some of the
advertisements, and the like. The wearer may also be able to pass
advertisements, and associated discounts, on to friends. The wearer
may communicate them directly to friends that are in close
proximity and enabled with their own eyepiece; they may also
communicate them through a wireless Internet connection, such as to
a social network of friends, through email, SMS, and the like. The
wearer may be connected to facilities and/or infrastructure that
enables the communication of advertisements from a sponsor to the
wearer; feedback from the wearer to an advertisement facility, the
sponsor of the advertisement, and the like; to other users, such as
friends and family, or someone in proximity to the wearer; to a
store, such as locally on the eyepiece or in a remote site, such as
on the Internet or on a user's home computer; and the like. These
interconnectivity facilities may include integrated facilities to
the eyepiece to provide the user's location and gaze direction,
such as through the use of GPS, 3-axis sensors, magnetometer,
gyros, accelerometers, and the like, for determining direction,
speed, attitude (e.g. gaze direction) of the wearer.
Interconnectivity facilities may provide telecommunications
facilities, such as cellular link, a WiFi/MiFi bridge, and the
like. For instance, the wearer may be able to communicate through
an available WiFi link, through an integrated MiFi (or any other
personal or group cellular link) to the cellular system, and the
like. There may be facilities for the wearer to store
advertisements for a later use. There may be facilities integrated
with the wearer's eyepiece or located in local computer facilities
that enable caching of advertisements, such as within a local area,
where the cached advertisements may enable the delivery of the
advertisements as the wearer nears the location associated with the
advertisement. For example, local advertisements may be stored on a
server that contains geo-located local advertisements and specials,
and these advertisements may be delivered to the wearer
individually as the wearer approaches a particular location, or a
set of advertisements may be delivered to the wearer in bulk when
the wearer enters a geographic area that is associated with the
advertisements so that the advertisements are available when the
user nears a particular location. The geographic location may be a
city, a part of the city, a number of blocks, a single block, a
street, a portion of the street, sidewalk, and the like,
representing regional, local, hyper-local areas. Note that the
preceding discussion uses the term advertisement, but one skilled
in the art will appreciate that this can also mean an announcement,
a broadcast, a circular, a commercial, a sponsored communication,
an endorsement, a notice, a promotion, a bulletin, a message, and
the like.
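The caching behaviour sketched in this paragraph can be illustrated as below; the area and display radii, the ad record format, and the reuse of the haversine_km helper from the earlier point-of-interest sketch are all assumptions made for the example.

```python
def refresh_ad_cache(cache, server_ads, wearer_pos, area_km=5.0, show_km=0.1):
    """Cache every advertisement located inside the wearer's current geographic
    area, then return the cached ads close enough to display right now.

    `server_ads` is a list of dicts such as {"text": "...", "lat": ..., "lon": ...};
    distances use the haversine_km helper defined earlier.
    """
    cache[:] = [ad for ad in server_ads
                if haversine_km(wearer_pos, (ad["lat"], ad["lon"])) <= area_km]
    return [ad for ad in cache
            if haversine_km(wearer_pos, (ad["lat"], ad["lon"])) <= show_km]
```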
[0325] FIGS. 18-20A depict ways to deliver custom messages to
persons within a short distance of an establishment that wishes to
send a message, such as a retail store. Referring to FIG. 18 now,
embodiments may provide for a way to view custom billboards, such
as when the wearer of the eyepiece is walking or driving, by
applications as mentioned above for searching for providers of
goods and services. As depicted in FIG. 18, the billboard 1800
shows an exemplary augmented reality-based advertisement displayed
by a seller or a service provider. The exemplary advertisement, as
depicted, may relate to an offer on drinks by a bar. For example,
two drinks may be provided for the cost of just one drink. With
such augmented reality-based advertisements and offers, the
wearer's attention may be easily directed towards the billboards.
The billboards may also provide details about location of the bar
such as street address, floor number, phone number, and the like.
In accordance with other embodiments, several devices other than
eyepiece may be utilized to view the billboards. These devices may
include without limitations smartphones, IPHONEs, IPADs, car
windshields, user glasses, helmets, wristwatches, headphones,
vehicle mounts, and the like. In accordance with an embodiment, a
user (wearer in case the augmented reality technology is embedded
in the eyepiece) may automatically receive offers or view a scene
of the billboards as and when the user passes or drives by the
road. In accordance with another embodiment, the user may receive
offers or view the scene of the billboards based on his
request.
[0326] FIG. 19 illustrates two exemplary roadside billboards 1900
containing offers and advertisements from sellers or service
providers that may be viewed in the augmented reality manner. The
augmented advertisement may provide a live and near-to-reality
perception to the user or the wearer.
[0327] As illustrated in FIG. 20, the augmented reality enabled
device such as the camera lens provided in the eyepiece may be
utilized to receive and/or view graffiti 2000, slogans, drawings,
and the like, that may be displayed on the roadside or on top,
side, front of the buildings and shops. The roadside billboards and
the graffiti may have a visual (e.g. a code, a shape) or wireless
indicator that may link the advertisement, or advertisement
database, to the billboard. When the wearer nears and views the
billboard, a projection of the billboard advertisement may then be
provided to the wearer. In embodiments, one may also store personal
profile information such that the advertisements may better match
the needs of the wearer, the wearer may provide preferences for
advertisements, the wearer may block at least some of the
advertisements, and the like. In embodiments, the eyepiece may have
brightness and contrast control over the eyepiece projected area of
the billboard so as to improve readability for the advertisement,
such as in a bright outside environment.
[0328] In other embodiments, users may post information or messages
on a particular location, based on its GPS location or other
indicator of location, such as a magnetometer reading. The intended
viewer is able to see the message when the viewer is within a
certain distance of the location, as explained with FIG. 20A. In a
first step 2001 of the method FIG. 20A, a user decides the location
where the message is to be received by persons to whom the message
is sent. The message is then posted 2003, to be sent to the
appropriate person or persons when the recipient is close to the
intended "viewing area." Location of the wearers of the augmented
reality eyepiece is continuously updated 2005 by the GPS system
which forms a part of the eyepiece. When the GPS system determines
that the wearer is within a certain distance of the desired viewing
area, e.g., 10 meters, the message is then sent 2007 to the viewer.
In one embodiment, the message then appears as e-mail or a text
message to the recipient, or if the recipient is wearing an
eyepiece, the message may appear in the eyepiece. Because the
message is sent to the person based on the person's location, in
one sense, the message may be displayed as "graffiti" on a building
or feature at or near the specified location. Specific settings may
be used to determine whether all passersby to the "viewing area" can
see the message or whether it is visible only to a specific person,
a group of people, or devices with specific identifiers.
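The posting-and-delivery flow of FIG. 20A can be summarized in a short sketch; the in-memory message list, the 10 meter radius, and the reuse of the haversine_km helper are assumptions made only so the example runs on its own.

```python
posted_messages = []   # each entry: {"text", "lat", "lon", "recipients"}

def post_message(text, lat, lon, recipients=None):
    """Steps 2001 and 2003: record the message with its intended viewing area."""
    posted_messages.append({"text": text, "lat": lat, "lon": lon,
                            "recipients": set(recipients or [])})

def check_location(wearer_id, wearer_pos, radius_m=10):
    """Steps 2005 and 2007: called on each GPS update; return the messages
    that should now be delivered to this wearer."""
    due = []
    for msg in posted_messages:
        close = haversine_km(wearer_pos, (msg["lat"], msg["lon"])) * 1000 <= radius_m
        addressed = not msg["recipients"] or wearer_id in msg["recipients"]
        if close and addressed:
            due.append(msg["text"])
    return due
```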
[0329] Embodiments may provide for a way to view information
associated with products, such as in a store. Information may
include nutritional information for food products, care
instructions for clothing products, technical specifications for
consumer electronics products, e-coupons, promotions, price
comparisons with other like products, price comparisons with other
stores, and the like. This information may be projected in relative
position with the product, to the periphery of sight to the wearer,
in relation to the store layout, and the like. The product may be
identified visually through a SKU, a brand tag, and the like;
transmitted by the product packaging, such as through an RFID tag
on the product; transmitted by the store, such as based on the
wearer's position in the store, in relative position to the
products; and the like.
[0330] For example, a viewer may be walking through a clothing
store, and as they walk are provided with information on the
clothes on the rack, where the information is provided through the
product's RFID tag. In embodiments, the information may be
delivered as a list of information, as a graphic representation, as
audio and/or video presentation, and the like. In another example,
the wearer may be food shopping, and advertisement providing
facilities may be providing information to the wearer in
association with products in the wearer's proximity, the wearer may
be provided information when they pick up the product and view the
brand, product name, SKU, and the like. In this way, the wearer may
be provided a more informative environment in which to effectively
shop.
[0331] One embodiment may allow a user to receive or share
information about shopping or an urban area through the use of the
augmented reality enabled devices such as the camera lens fitted in
the eyepiece of exemplary sunglasses. These embodiments will use
augmented reality (AR) software applications such as those
mentioned above in conjunction with searching for providers of
goods and services. In one scenario, the wearer of the eyepiece may
walk down a street or a market for shopping purposes. Further, the
user may activate various modes that may assist in defining user
preferences for a particular scenario or environment. For example,
the user may enter a navigation mode through which the wearer may be
guided across the streets and the market to shop for preferred
accessories and products. The mode may be selected and
various directions may be given by the wearer through various
methods such as through text commands, voice commands, and the
like. In an embodiment, the wearer may give a voice command to
select the navigation mode which may result in the augmented
display in front of the wearer. The augmented information may
depict information pertinent to the location of various shops and
vendors in the market, offers in various shops and by various
vendors, current happy hours, current date and time and the like.
Various sorts of options may also be displayed to the wearer. The
wearer may scroll the options and walk down the street guided
through the navigation mode. Based on options provided, the wearer
may select a place that suits him best for shopping based on factors
such as offers and discounts and the like.
[0332] The wearer may give a voice command to navigate toward the
place and the wearer may then be guided toward it. The wearer may
also receive advertisements and offers automatically or based on
request regarding current deals, promotions and events in the
interested location such as a nearby shopping store. The
advertisements, deals and offers may appear in proximity of the
wearer and options may be displayed for purchasing desired products
based on the advertisements, deals and offers. The wearer may for
example select a product and purchase it through a Google checkout.
A message or an email may appear on the eyepiece, similar to the
one depicted in FIG. 7, with information that the transaction for
the purchase of the product has been completed. A product delivery
status/information may also be displayed. The wearer may further
convey or alert friends and relatives regarding the offers and
events through social networking platforms and may also ask them to
join.
[0333] In embodiments, the user may wear the head-mounted eyepiece
wherein the eyepiece includes an optical assembly through which the
user may view a surrounding environment and displayed content. The
displayed content may comprise one or more local advertisements.
The location of the eyepiece may be determined by an integrated
location sensor and the local advertisement may have a relevance to
the location of the eyepiece. By way of example, the user's
location may be determined via GPS, RFID, manual input, and the
like. Further, the user may be walking by a coffee shop, and based
on the user's proximity to the shop, an advertisement, similar to
those depicted in FIG. 19, showing the store's brand of coffee may
appear in the user's field of view. The user may experience similar
types of local advertisements as he or she moves about the
surrounding environment.
[0334] In other embodiments, the eyepiece may contain a capacitive
sensor capable of sensing whether the eyepiece is in contact with
human skin. Such sensor or group of sensors may be placed on the
eyepiece and/or eyepiece arm in such a manner that allows detection
of when the glasses are being worn by a user. In other embodiments,
sensors may be used to determine whether the eyepiece is in a
position such that it may be worn by a user, for example, when
the earpiece is in the unfolded position. Furthermore, local
advertisements may be sent only when the eyepiece is in contact
with human skin, in a wearable position, a combination of the two,
actually worn by the user and the like. In other embodiments, the
local advertisement may be sent in response to the eyepiece being
powered on or in response to the eyepiece being powered on and worn
by the user and the like. By way of example, an advertiser may
choose to only send local advertisements when a user is in
proximity to a particular establishment and when the user is
actually wearing the glasses and they are powered on allowing the
advertiser to target the advertisement to the user at the
appropriate time.
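The gating conditions just described reduce to a simple predicate, sketched below; the sensor-state field names are assumptions chosen for the illustration.

```python
def may_send_local_ad(eyepiece, require_worn=True, require_power=True):
    """Decide whether a local advertisement should be delivered to this eyepiece.

    `eyepiece` is a dict of sensor states, for example
    {"skin_contact": True, "wearable_position": True, "powered_on": True}.
    """
    if require_power and not eyepiece.get("powered_on"):
        return False
    if require_worn and not (eyepiece.get("skin_contact")
                             and eyepiece.get("wearable_position")):
        return False
    return True
```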
[0335] In accordance with other embodiments, the local
advertisement may be displayed to the user as a banner
advertisement, two-dimensional graphic, text and the like. Further,
the local advertisement may be associated with a physical aspect of
the user's view of the surrounding environment. The local
advertisement may also be displayed as an augmented reality
advertisement wherein the advertisement is associated with a
physical aspect of the surrounding environment. Such advertisement
may be two or three-dimensional. By way of example, a local
advertisement may be associated with a physical billboard as
described further in FIG. 18 wherein the user's attention may be
drawn to displayed content showing a beverage being poured from a
billboard 1800 onto an actual building in the surrounding
environment. The local advertisement may also contain sound that is
displayed to the user through an earpiece, audio device or other
means. Further, the local advertisement may be animated in
embodiments. For example, the user may view the beverage flow from
the billboard onto an adjacent building and, optionally, into the
surrounding environment. Similarly, an advertisement may display
any other type of motion as desired in the advertisement.
Additionally, the local advertisement may be displayed as a
three-dimensional object that may be associated with or interact
with the surrounding environment. In embodiments where the
advertisement is associated with an object in the user's view of
the surrounding environment, the advertisement may remain
associated with or in proximity to the object even as the user
turns his head. For example, if an advertisement, such as the
coffee cup as described in FIG. 19, is associated with a particular
building, the coffee cup advertisement may remain associated with
and in place over the building even as the user turns his head to
look at another object in his environment.
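Keeping an advertisement attached to a building as the head turns amounts to projecting a world-fixed bearing into display coordinates. The sketch below makes simplifying assumptions (a 30 degree horizontal field of view, yaw only, and an 800 pixel wide display) purely for illustration.

```python
def ad_screen_x(ad_bearing_deg, head_yaw_deg, fov_deg=30.0, screen_width=800):
    """Horizontal display position of a world-anchored advertisement.

    As the wearer's head yaw changes, the ad's bearing relative to the gaze
    direction changes and the ad slides across (or off) the display, so it
    appears to stay on its building.  Returns None when out of view.
    """
    offset = (ad_bearing_deg - head_yaw_deg + 180) % 360 - 180   # wrap to [-180, 180)
    if abs(offset) > fov_deg / 2:
        return None
    return int((offset / fov_deg + 0.5) * screen_width)
```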
[0336] In other embodiments, local advertisements may be displayed
to the user based on a web search conducted by the user where the
advertisement is displayed in the content of the web search
results. For example, the user may search for "happy hour" as he is
walking down the street, and in the content of the search results,
a local advertisement may be displayed advertising a local bar's
beer prices.
[0337] Further, the content of the local advertisement may be
determined based on the user's personal information. The user's
information may be made available to a web application, an
advertising facility and the like. Further, a web application,
advertising facility or the user's eyepiece may filter the
advertising based on the user's personal information. Generally,
for example, a user may store personal information about his likes
and dislikes and such information may be used to direct advertising
to the user's eyepiece. By way of specific example, the user may
store data about his affinity for a local sports team, and as
advertisements are made available, those advertisements with his
favorite sports team may be given preference and pushed to the
user. Similarly, a user's dislikes may be used to exclude certain
advertisements from view. In various embodiments, the
advertisements may be cached on a server where the advertisement
may be accessed by at least one of an advertising facility, web
application and eyepiece and displayed to the user.
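A minimal sketch of such preference-based filtering follows; the profile structure and topic matching are assumptions, and a deployed system would likely use richer targeting signals.

```python
def rank_ads(ads, profile):
    """Order available advertisements by the wearer's stored preferences.

    `profile` holds sets of liked and disliked topics, e.g.
    {"likes": {"local sports team"}, "dislikes": {"tobacco"}}.  Ads touching a
    disliked topic are excluded; ads matching liked topics are pushed forward.
    """
    kept = [ad for ad in ads if not (set(ad["topics"]) & profile["dislikes"])]
    return sorted(kept, key=lambda ad: -len(set(ad["topics"]) & profile["likes"]))
```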
[0338] In various embodiments, the user may interact with any type
of local advertisement in numerous ways. The user may request
additional information related to a local advertisement by making
at least one action of an eye movement, body movement and other
gesture. For example, if an advertisement is displayed to the user,
he may wave his hand over the advertisement in his field of view or
move his eyes over the advertisement in order to select the
particular advertisement to receive more information relating to
such advertisement. Moreover, the user may choose to ignore the
advertisement by any movement or control technology described
herein such as through an eye movement, body movement, other
gesture and the like. Further, the user may choose to ignore the
advertisement by allowing it to be ignored by default by not
selecting the advertisement for further interaction within a given
period of time. For example, if the user chooses not to gesture for
more information from the advertisement within five seconds of the
advertisement being displayed, the advertisement may be ignored by
default and disappear from the user's view. Furthermore, the user
may select to not allow local advertisements to be displayed
whereby said user selects such an option on a graphical user
interface or by turning such feature off via a control on said
eyepiece.
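The ignore-by-default behaviour is essentially a timeout, as in the following sketch; the class name and five-second default simply mirror the example above and are not part of the disclosed design.

```python
import time

class DisplayedAd:
    """Dismiss an advertisement automatically if the wearer does not interact
    with it within the timeout."""

    def __init__(self, ad, timeout_s=5.0):
        self.ad = ad
        self.deadline = time.monotonic() + timeout_s
        self.selected = False

    def on_gesture(self):
        """Called when an eye movement, hand wave or other gesture selects the ad."""
        self.selected = True

    def should_dismiss(self):
        """True once the timeout has elapsed without a selection gesture."""
        return not self.selected and time.monotonic() > self.deadline
```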
[0339] In other embodiments, the eyepiece may include an audio
device. Accordingly, the displayed content may comprise a local
advertisement and audio such that the user is also able to hear a
message or other sound effects as they relate to the local
advertisement. By way of example, and referring again to FIG. 18,
while the user sees the beer being poured, he will actually be able
to hear an audio transmission corresponding to the actions in the
advertisement. In this case, the user may hear the bottle open and
then the sound of the liquid pouring out of the bottle and onto the
rooftop. In yet other embodiments, a descriptive message may be
played, and or general information may be given as part of the
advertisement. In embodiments, any audio may be played as desired
for the advertisement.
[0340] In accordance with another embodiment, social networking may
be facilitated with the use of the augmented reality enabled
devices such as a camera lens fitted in the eyepiece. This may be
utilized to connect several users or other persons that may not
have the augmented reality enabled device together who may share
thoughts and ideas with each other. For instance, the wearer of the
eyepiece may be sitting on a school campus along with other
students. The wearer may connect with and send a message to a first
student who may be present in a coffee shop. The wearer may ask the
first student regarding persons interested in a particular subject
such as environmental economics for example. As other students pass
through the field of view of the wearer, the camera lens fitted
inside the eyepiece may track and match the students to a
networking database such as `Google me` that may contain public
profiles. Profiles of interested and relevant persons from the
public database may appear and pop-up in front of the wearer on the
eyepiece. Some of the profiles that may not be relevant may either
be blocked or appear blocked to the user. The relevant profiles may
be highlighted for quick reference of the wearer. The relevant
profiles selected by the wearer may be interested in the subject
environmental economics and the wearer may also connect with them.
Further, they may also be connected with the first student. In this
manner, a social network may be established by the wearer with the
use of the eyepiece enabled with the feature of the augmented
reality. The social networks managed by the wearer and the
conversations therein may be saved for future reference.
[0341] The present disclosure may be applied in a real estate
scenario with the use of the augmented reality enabled devices such
as a camera lens fitted in an eyepiece. The wearer, in accordance
with this embodiment, may want to get information about a place in
which the user may be present at a particular time such as during
driving, walking, jogging and the like. The wearer may, for
instance, want to understand the residential benefits and drawbacks of that
place. He may also want to get detailed information about the
facilities in that place. Therefore, the wearer may utilize a map
such as a Google online map and recognize the real estate that may
be available there for lease or purchase. As noted above, the user
may receive information about real estate for sale or rent using
mobile Internet applications such as Layar. In one such
application, information about buildings within the user's field of
view is projected onto the inside of the glasses for consideration
by the user. Options may be displayed to the wearer on the eyepiece
lens for scrolling, such as with a trackpad mounted on a frame of
the glasses. The wearer may select and receive information about
the selected option. The augmented reality enabled scenes of the
selected options may be displayed to the wearer and the wearer may
be able to view pictures and take a facility tour in the virtual
environment. The wearer may further receive information about real
estate agents and fix an appointment with one of those. An email
notification or a call notification may also be received on the
eyepiece for confirmation of the appointment. If the wearer finds
the selected real estate worthwhile, a deal may be made and the
property may be purchased by the wearer.
[0342] In accordance with another embodiment, customized and
sponsored tours and travels may be enhanced through the use of the
augmented reality-enabled devices, such as a camera lens fitted in
the eyepiece. For instance, the wearer (as a tourist) may arrive in
a city such as Paris and want to receive tourism and sightseeing
related information about the place to accordingly plan his visit
for the consecutive days during his stay. The wearer may put on his
eyepiece or operate any other augmented reality enabled device and
give a voice or text command regarding his request. The augmented
reality enabled eyepiece may locate wearer position through
geo-sensing techniques and decide tourism preferences of the
wearer. The eyepiece may receive and display customized information
based on the request of the wearer on a screen. The customized
tourism information may include information about art galleries and
museums, monuments and historical places, shopping complexes,
entertainment and nightlife spots, restaurants and bars, most
popular tourist destinations and centers/attractions of tourism,
most popular local/cultural/regional destinations and attractions,
and the like without limitations. Based on user selection of one or
more of these categories, the eyepiece may prompt the user with
other questions such as time of stay, investment in tourism and the
like. The wearer may respond through the voice command and in
return receive customized tour information in an order as selected
by the wearer. For example the wearer may give a priority to the
art galleries over monuments. Accordingly, the information may be
made available to the wearer. Further, a map may also appear in
front of the wearer with different sets of tour options and with
different priority ranks, such as:
[0343] Priority Rank 1: First tour option (Champs Elysees, Louvre, Rodin Museum, Famous Cafe)
[0344] Priority Rank 2: Second option
[0345] Priority Rank 3: Third option
[0346] The wearer, for instance, may select the first option since
it is ranked as highest in priority based on wearer indicated
preferences. Advertisements related to sponsors may pop up right
after selection. Subsequently, a virtual tour may begin in the
augmented reality manner that may be very close to the real
environment. The wearer may, for example, take a 30-second tour of a
vacation special at the Atlantis Resort in the Bahamas. The virtual
3D tour may include a quick look at the rooms, beach, public
spaces, parks, facilities, and the like. The wearer may also
experience shopping facilities in the area and receive offers and
discounts in those places and shops. At the end of the day, the
wearer might have experienced a whole day tour sitting in his
chamber or hotel. Finally, the wearer may decide and schedule his
plan accordingly.
[0347] Another embodiment may provide information concerning auto
repair and maintenance services through the use of the augmented
reality enabled devices such as a camera lens fitted in the
eyepiece. The wearer may receive advertisements related to auto
repair shops and dealers by sending a voice command for the
request. The request may, for example include a requirement of oil
change in the vehicle/car. The eyepiece may receive information
from the repair shop and display to the wearer. The eyepiece may
pull up a 3D model of the wearer's vehicle and show the amount of
oil left in the car through an augmented reality enabled
scene/view. The eyepiece may show other relevant information also
about the vehicle of the wearer such as maintenance requirements in
other parts like brake pads. The wearer may see a 3D view of the
worn brake pads and may be interested in getting those repaired
or changed. Accordingly, the wearer may schedule an appointment
with a vendor to fix the problem using the integrated wireless
communication capability of the eyepiece. The confirmation may be
received through an email or an incoming call alert on the eyepiece
camera lens.
[0348] In accordance with another embodiment, gift shopping may
benefit through the use of the augmented reality enabled devices
such as a camera lens fitted in the eyepiece. The wearer may post a
request for a gift for some occasion through a text or voice
command. The eyepiece may prompt the wearer to answer his
preferences such as type of gifts, age group of the person to
receive the gift, cost range of the gift and the like. Various
options may be presented to the user based on the received
preferences. For instance, the options presented to the wearer may
be: Cookie basket, Wine and cheese basket, Chocolate assortment,
Golfer's gift basket, and the like.
[0349] The available options may be scrolled by the wearer and the
best fit option may be selected via the voice command or text
command. For example, the wearer may select the Golfer's gift
basket. A 3D view of the Golfer's gift basket along with a golf
course may appear in front of the wearer. The virtual 3D view of
the Golfer's gift basket and the golf course enabled through the
augmented reality may be perceived very close to the real world
environment. The wearer may finally respond to the address,
location and other similar queries prompted through the eyepiece. A
confirmation may then be received through an email or an incoming
call alert on the eyepiece camera lens.
[0350] Another application that may appeal to users is mobile
on-line gaming using the augmented reality glasses. These games may
be computer video games, such as those furnished by Electronic Arts
Mobile, UbiSoft and Activision Blizzard, e.g., World of
Warcraft.RTM. (WoW). Just as games and recreational applications
are played on computers at home (rather than computers at work),
augmented reality glasses may also use gaming applications. The
screen may appear on an inside of the glasses so that a user may
observe the game and participate in the game. In addition, controls
for playing the game may be provided through a virtual game
controller, such as a joystick, control module or mouse, described
elsewhere herein. The game controller may include sensors or other
output type elements attached to the user's hand, such as for
feedback from the user through acceleration, vibration, force,
electrical impulse, temperature, electric field sensing, and the
like. Sensors and actuators may be attached to the user's hand by
way of a wrap, ring, pad, glove, bracelet, and the like. As such,
an eyepiece virtual mouse may allow the user to translate motions
of the hand, wrist, and/or fingers into motions of the cursor on
the eyepiece display, where "motions" may include slow movements,
rapid motions, jerky motions, position, change in position, and the
like, and may allow users to work in three dimensions, without the
need for a physical surface, and including some or all of the six
degrees of freedom.
[0351] As seen in FIG. 27, gaming applications may use both the
internet and a GPS. In one embodiment, a game is downloaded from a
customer database via a game provider, perhaps using their web
services and the internet as shown, to a user computer or augmented
reality glasses. At the same time, the glasses, which also have
telecommunication capabilities, receive and send telecommunications
and telemetry signals via a cellular tower and a satellite. Thus,
an on-line gaming system has access to information about the user's
location as well as the user's desired gaming activities.
[0352] Games may take advantage of this knowledge of the location
of each player. For example, the games may build in features that
use the player's location, via a GPS locator or magnetometer
locator, to award points for reaching the location. The game may
also send a message, e.g., display a clue, or a scene or images,
when a player reaches a particular location. A message, for
example, may be to go to a next destination, which is then provided
to the player. Scenes or images may be provided as part of a
struggle or an obstacle which must be overcome, or as an
opportunity to earn game points. Thus, in one embodiment, augmented
reality eyepieces or glasses may use the wearer's location to
quicken and enliven computer-based video games.
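As an illustrative sketch only, a location-based point award of this kind could be implemented with a simple great-circle distance check; the 25-meter capture radius, the waypoint format, and the point value below are assumptions, not features recited in this disclosure:

    # Illustrative sketch: award points when the player's GPS fix is within a
    # threshold of a game waypoint. Coordinates and the 25 m radius are assumptions.
    import math

    EARTH_RADIUS_M = 6371000.0

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points (degrees)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def check_waypoint(player_fix, waypoint, radius_m=25.0, points=100):
        """Return (points earned, clue) if the player has reached the waypoint."""
        dist = haversine_m(player_fix[0], player_fix[1], waypoint["lat"], waypoint["lon"])
        if dist <= radius_m:
            return points, waypoint["clue"]
        return 0, None

    waypoint = {"lat": 37.7793, "lon": -122.4193, "clue": "Proceed to the fountain."}
    earned, clue = check_waypoint((37.7794, -122.4192), waypoint)
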
[0353] One method of playing augmented reality games is depicted in
FIG. 28. In this method, a user logs into a website whereby access
to a game is permitted. The game is selected. In one example, the
user may join a game, if multiple player games are available and
desired; alternatively, the user may create a custom game, perhaps
using special roles the user desires. The game may be scheduled,
and in some instances, players may select a particular time and
place for the game, distribute directions to the site where the
game will be played, etc. Later, the players meet and check into
the game, with one or more players using the augmented reality
glasses. Participants then play the game and if applicable, the
game results and any statistics (scores of the players, game times,
etc.) may be stored. Once the game has begun, the location may
change for different players in the game, sending one player to one
location and another player or players to a different location. The
game may then have different scenarios for each player or group of
players, based on their GPS or magnetometer-provided locations.
Each player may also be sent different messages or images based on
his or her role, his or her location, or both. Of course, each
scenario may then lead to other situations, other interactions,
directions to other locations, and so forth. In one sense, such a
game mixes the reality of the player's location with the game in
which the player is participating.
[0354] Games can range from simple, single-player games of the type
that would be played in the palm of a player's hand to more
complicated, multi-player games. In the former category are games
such as SkySiege, AR Drone and Fire Fighter 360. In addition,
multiplayer games are also
easily envisioned. Since all players must log into the game, a
particular game may be played by friends who log in and specify the
other person or persons. The location of the players is also
available, via GPS or other method. Sensors in the augmented
reality glasses or in a game controller as described above, such as
accelerometers, gyroscopes or even a magnetic compass, may also be
used for orientation and game playing. An example is AR Invaders,
available for iPhone applications from the App Store. Other games
may be obtained from other vendors and for non-iPhone type systems,
such as Layar, of Amsterdam, and Parrot SA, Paris, France, supplier
of AR Drone, AR Flying Ace and AR Pursuit.
[0355] In embodiments, games may also be in 3D such that the user
can experience 3D gaming. For example, when playing a 3D game, the
user may view a virtual, augmented reality or other environment
where the user is able to control his view perspective. The user
may turn his head to view various aspects of the virtual
environment or other environment. As such, when the user turns his
head or makes other movements, he may view the game environment as
if he were actually in such environment. For example, the
perspective of the user may be such that the user is put `into` a
3D game environment with at least some control over the viewing
perspective where the user may be able to move his head and have
the view of the game environment change in correspondence to the
changed head position. Further, the user may be able to `walk into`
the game when he physically walks forward, and have the perspective
change as the user moves. Further, the perspective may also change
as the user shifts the gaze of his eyes, and the like.
Additional image information may be provided, such as at the sides
of the user's view that could be accessed by turning the head.
[0356] In embodiments, the 3D game environment may be projected
onto the lenses of the glasses or viewed by other means. Further,
the lenses may be opaque or transparent. In embodiments, the 3D
game image may be associated with and incorporate the external
environment of the user such that the user may be able to turn his
head and the 3D image and external environment stay together.
Further, such 3D gaming image and external environment associations
may change such that the 3D image associates with more than one
object or more than one part of an object in the external
environment at various instances such that it appears to the user
that the 3D image is interacting with various aspects or objects of
the actual environment. By way of example, the user may view a 3D
game monster climb up a building or onto an automobile where such
building or automobile is an actual object in the user's
environment. In such a game, the user may interact with the monster
as part of the 3D gaming experience. The actual environment around
the user may be part of the 3D gaming experience. In embodiments
where the lenses are transparent, the user may interact in a 3D
gaming environment while moving about his or her actual
environment. The 3D game may incorporate elements of the user's
environment into the game, it may be wholly fabricated by the game,
or it may be a mixture of both.
[0357] In embodiments, the 3D images may be associated with or
generated by an augmented reality program, 3D game software and the
like or by other means. In embodiments where augmented reality is
employed for the purpose of 3D gaming, a 3D image may appear or be
perceived by the user based on the user's location or other data.
Such an augmented reality application may provide for the user to
interact with such 3D image or images to provide a 3D gaming
environment when using the glasses. As the user changes his
location, for example, play in the game may advance and various 3D
elements of the game may become accessible or inaccessible to the
viewer. By way of example, various 3D enemies of the user's game
character may appear in the game based on the actual location of
the user. The user may interact with or cause reactions from other
users playing the game and or 3D elements associated with the other
users playing the game. Such elements associated with users may
include weapons, messages, currency, a 3D image of the user and the
like. Based on a user's location or other data, he or she may
encounter, view, or engage, by any means, other users and 3D
elements associated with other users. In embodiments, 3D gaming may
also be provided by software installed in or downloaded to the
glasses where the user's location is or is not used.
[0358] In embodiments, the lenses may be opaque to provide the user
with a virtual reality or other virtual 3D gaming experience where
the user is `put into` the game where the user's movements may
change the viewing perspective of the 3D gaming environment for the
user. The user may move through or explore the virtual environment
through various body, head, and or eye movements, use of game
controllers, one or more touch screens, or any of the control
techniques described herein which may allow the user to navigate,
manipulate, and interact with the 3D environment, and thereby play
the 3D game.
[0359] In various embodiments, the user may navigate, interact with
and manipulate the 3D game environment and experience 3D gaming via
body, hand, finger, eye, or other movements, through the use of one
or more wired or wireless controllers, one or more touch screens,
any of the control techniques described herein, and the like.
[0360] In embodiments, internal and external facilities available
to the eyepiece may provide for learning the behavior of a user of
the eyepiece, and storing that learned behavior in a behavioral
database to enable location-aware control, activity-aware control,
predictive control, and the like. For example, a user may have
events and/or tracking of actions recorded by the eyepiece, such as
commands from the user, images sensed through a camera, GPS
location of the user, sensor inputs over time, triggered actions by
the user, communications to and from the user, user requests, web
activity, music listened to, directions requested, recommendations
used or provided, and the like. This behavioral data may be stored
in a behavioral database, such as tagged with a user identifier or
autonomously. The eyepiece may collect this data in a learn mode,
collection mode, and the like. The eyepiece may utilize past data
taken by the user to inform or remind the user of what they did
before, or alternatively, the eyepiece may utilize the data to
predict what eyepiece functions and applications the user may need
based on past collected experiences. In this way, the eyepiece may
act as an automated assistant to the user, for example, launching
applications at the usual time the user launches them, turning off
augmented reality and the GPS when nearing a location or entering a
building, streaming in music when the user enters the gym, and the
like. Alternately, the learned behavior and/or actions of a
plurality of eyepiece users may be autonomously stored in a
collective behavior database, where learned behaviors amongst the
plurality of users are available to individual users based on
similar conditions. For example, a user may be visiting a city, and
waiting for a train on a platform, and the eyepiece of the user
accesses the collective behavior database to determine what other
users have done while waiting for the train, such as getting
directions, searching for points of interest, listening to certain
music, looking up the train schedule, contacting the city website
for travel information, connecting to social networking sites for
entertainment in the area, and the like. In this way, the eyepiece
may be able to provide the user with an automated assistant with
the benefit of many different user experiences. In embodiments, the
learned behavior may be used to develop preference profiles,
recommendations, advertisement targeting, social network contacts,
behavior profiles for the user or groups of users, and the like,
for/to the user.
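A minimal sketch of such a behavioral database is shown below; the context keys (user, hour of day, coarse location tag) and the simple frequency-based predictor are assumptions chosen for illustration rather than the specific learning method of the eyepiece:

    # Illustrative sketch: a behavioral log keyed by user and context, with a naive
    # most-frequent-action predictor. The schema and the bucketing are assumptions.
    from collections import Counter, defaultdict

    class BehaviorDatabase:
        def __init__(self):
            # (user_id, hour_of_day, location_tag) -> Counter of actions
            self._log = defaultdict(Counter)

        def record(self, user_id, hour, location_tag, action):
            self._log[(user_id, hour, location_tag)][action] += 1

        def predict(self, user_id, hour, location_tag):
            """Return the action most often taken in this context, if any."""
            counts = self._log.get((user_id, hour, location_tag))
            return counts.most_common(1)[0][0] if counts else None

    db = BehaviorDatabase()
    db.record("user42", 7, "gym", "start_music")
    db.record("user42", 7, "gym", "start_music")
    db.record("user42", 7, "gym", "check_email")
    suggestion = db.predict("user42", 7, "gym")   # -> "start_music"
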
[0361] In an embodiment, the augmented reality eyepiece or glasses
may include one or more acoustic sensors for detecting sound. An
example is depicted above in FIG. 29. In one sense, acoustic
sensors are similar to microphones, in that they detect sounds.
Acoustic sensors typically have one or more frequency bandwidths at
which they are more sensitive, and the sensors can thus be chosen
for the intended application. Acoustic sensors are available from a
variety of manufacturers and are available with appropriate
transducers and other required circuitry. Manufacturers include ITT
Electronic Systems, Salt Lake City, Utah, USA; Meggitt Sensing
Systems, San Juan Capistrano, Calif., USA; and National
Instruments, Austin, Tex., USA. Suitable microphones include those
which comprise a single microphone as well as those which comprise
an array of microphones, or a microphone array.
[0362] Acoustic sensors may include those using micro
electromechanical systems (MEMS) technology. Because of the very
fine structure in a MEMS sensor, the sensor is extremely sensitive
and typically has a wide range of sensitivity. MEMS sensors are
typically made using semiconductor manufacturing techniques. An
element of a typical MEMS accelerometer is a moving beam structure
composed of two sets of fingers. One set is fixed to a solid ground
plane on a substrate; the other set is attached to a known mass
mounted on springs that can move in response to an applied
acceleration. This applied acceleration changes the capacitance
between the fixed and moving beam fingers. The result is a very
sensitive sensor. Such sensors are made, for example, by
STMicroelectronics, Austin, Tex. and Honeywell International,
Morristown N.J., USA.
[0363] In addition to identification, sound capabilities of the
augmented reality devices may also be applied to locating an origin
of a sound. As is well known, at least two sound or acoustic
sensors are needed to locate a sound. The acoustic sensor will be
equipped with appropriate transducers and signal processing
circuits, such as a digital signal processor, for interpreting the
signal and accomplishing a desired goal. One application for sound
locating sensors may be to determine the origin of sounds from
within an emergency location, such as a burning building, an
automobile accident, and the like. Emergency workers equipped with
embodiments described herein may each have one or more acoustic
sensors or microphones embedded within the frame. Of
course, the sensors could also be worn on the person's clothing or
even attached to the person. In any event, the signals are
transmitted to the controller of the augmented reality eyepiece.
The eyepiece or glasses are equipped with GPS technology and may
also be equipped with direction-finding capabilities;
alternatively, with two sensors per person, the microcontroller can
determine a direction from which the noise originated.
[0364] If there are two or more firefighters, or other emergency
responders, their location is known from their GPS capabilities.
Either of the two, or a fire chief, or the control headquarters,
then knows the position of two responders and the direction from
each responder to the detected noise. The exact point of origin of
the noise can then be determined using known techniques and
algorithms. See e.g., Acoustic Vector-Sensor Beamforming and Capon
Direction Estimation, M. Hawkes and A. Nehorai, IEEE Transactions
on Signal Processing, vol. 46, no. 9, September 1998, at 2291-2304;
see also Cramer-Rao Bounds for Direction Finding by an Acoustic
Vector Sensor Under Nonideal Gain-Phase Responses, Noncollocation
or Nonorthogonal Orientation, P. K. Tam and K. T. Wong, IEEE
Sensors Journal, vol. 9, no. 8, August 2009, at 969-982. The
techniques used may include timing differences (differences in time
of arrival of the parameter sensed), acoustic velocity differences,
and sound pressure differences. Of course, acoustic sensors
typically measure levels of sound pressure (e.g., in decibels), and
these other parameters may be used in appropriate types of acoustic
sensors, including acoustic emission sensors and ultrasonic sensors
or transducers.
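As a worked illustration of one such technique, the following sketch computes a cross-bearing fix from two responders' known positions and the bearings each measures to the noise. The flat local east/north coordinate frame and the bearing convention (degrees clockwise from north) are assumptions, and the cited references describe more general vector-sensor methods:

    # Illustrative sketch: locate a sound source from two responders' positions and
    # the bearings each measures to the noise (a cross-bearing fix).
    import math

    def cross_fix(p1, brg1_deg, p2, brg2_deg):
        """Intersect two bearing rays; each p is (east_m, north_m). Returns (e, n) or None."""
        d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
        d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
        # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using the 2-D cross product.
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            return None                       # bearings are parallel; no unique fix
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        t1 = (dx * d2[1] - dy * d2[0]) / denom
        if t1 < 0:
            return None                       # intersection lies behind responder 1
        return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

    # Responder A at the origin hears the noise at 045 deg; responder B, 100 m to
    # the east, hears it at 315 deg. The fix is roughly 50 m east, 50 m north.
    fix = cross_fix((0.0, 0.0), 45.0, (100.0, 0.0), 315.0)
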
[0365] The appropriate algorithms and all other necessary
programming may be stored in the microcontroller of the eyepiece,
or in memory accessible to the eyepiece. Using more than one
responder, or several responders, a likely location may then be
determined, and the responders can attempt to locate the person to
be rescued. In other applications, responders may use these
acoustic capabilities to determine the location of a person of
interest to law enforcement. In still other applications, a number
of people on maneuvers may encounter hostile fire, including direct
fire (line of sight) or indirect fire (out of line of sight,
including high angle fire). The same techniques described here may
be used to estimate a location of the hostile fire. If there are
several persons in the area, the estimation may be more accurate,
especially if the persons are separated at least to some extent,
over a wider area. This may be an effective tool to direct
counter-battery or counter-mortar fire against hostiles. Direct
fire may also be used if the target is sufficiently close.
[0366] In addition to microphones, the augmented reality eyepiece
may be equipped with ear buds, which may be articulating ear buds,
as mentioned elsewhere herein, and may be removably attached 1403,
or may be equipped with an audio output jack 1401. The eyepiece and
ear buds may be equipped to deliver noise-cancelling interference,
allowing the user to better hear sounds delivered from the
audio-video communications capabilities of the augmented reality
eyepiece or glasses, and may feature automatic gain control. The
speakers or ear buds of the augmented reality eyepiece may also
connect with the full audio and visual capabilities of the device,
with the ability to deliver high quality and clear sound from the
included telecommunications device. As noted elsewhere herein, this
includes radio or cellular telephone (smart phone) audio
capabilities, and may also include complementary technologies, such
as Bluetooth.TM. capabilities or related technologies, such as IEEE
802.11, for wireless personal area networks (WPAN).
[0367] Another aspect of the augmented audio capabilities includes
speech recognition and identification capabilities. Speech
recognition concerns understanding what is said while speech
identification concerns understanding who the speaker is. Speech
identification may work hand in hand with the facial recognition
capabilities of these devices to more positively identify persons
of interest. As described elsewhere in this document, a camera
connected as part of the augmented reality eyepiece can
unobtrusively focus on desired personnel, such as a single person
in a crowd. Using the camera and appropriate facial recognition
software, an image of the person may be taken. The features of the
image are then broken down into any number of measurements and
statistics, and the results are compared to a database of known
persons. An identification may then be made. In the same manner, a voice
or voice sampling from the person of interest may be taken. The
sample may be marked or tagged, e.g., at a particular time
interval, and labeled, e.g., a description of the person's physical
characteristics or a number. The voice sample may be compared to a
database of known persons, and if the person's voice matches, then
an identification may be made.
[0368] In one embodiment, important characteristics of a particular
person's speech may be understood from a sample or from many
samples of the person's voice. The samples are typically broken
into segments, frames and subframes. Typically, important
characteristics include a fundamental frequency of the person's
voice, energy, formants, speaking rate, and the like. These
characteristics are analyzed by software which analyzes the voice
according to certain formulae or algorithms. This field is
constantly changing and improving. However, currently such
classifiers may include algorithms such as neural network
classifiers, k-classifiers, hidden Markov models, Gaussian mixture
models and pattern matching algorithms, among others.
[0369] A general template 3100 for speech recognition and speaker
identification is depicted in FIG. 31. A first step 3101 is to
provide a speech signal. Ideally, one has a known sample from prior
encounters with which to compare the signal. The signal is then
digitized in step 3102 and is partitioned in step 3103 into
fragments, such as segments, frames and subframes. Features and
statistics of the speech sample are then generated and extracted in
step 3104. The classifier, or more than one classifier, is then
applied in step 3105 to determine general classifications of the
sample. Post-processing of the sample may then be applied in step
3106, e.g., to compare the sample to known samples for possible
matching and identification. The results may then be output in step
3107. The output may be directed to the person requesting the
matching, and may also be recorded and sent to other persons and to
one or more databases.
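The following sketch illustrates, in simplified form, the framing, feature-extraction, and comparison steps of the template of FIG. 31. The frame sizes, the two crude features, and the nearest-template scoring rule are assumptions standing in for the richer classifiers described above:

    # Illustrative sketch of the FIG. 31 pipeline: frame a digitized signal,
    # extract simple per-frame features, and compare against enrolled templates.
    # Frame length, hop, features, and scoring are all assumptions.
    import numpy as np

    def frame_signal(signal, frame_len=400, hop=160):
        """Partition a 1-D sample array into overlapping frames (25 ms / 10 ms at 16 kHz)."""
        n = 1 + max(0, (len(signal) - frame_len) // hop)
        return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

    def extract_features(frames):
        """Per-frame log energy and zero-crossing rate; real systems use richer features."""
        energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
        return np.column_stack([energy, zcr]).mean(axis=0)   # utterance-level summary

    def identify(sample, enrolled):
        """Return the enrolled speaker whose template is nearest to the sample's features."""
        feats = extract_features(frame_signal(sample))
        return min(enrolled, key=lambda name: np.linalg.norm(feats - enrolled[name]))

    rng = np.random.default_rng(0)
    enrolled = {"subject_a": np.array([2.0, 0.10]), "subject_b": np.array([3.5, 0.25])}
    who = identify(rng.standard_normal(16000), enrolled)
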
[0370] In an embodiment, the audio capabilities of the eyepiece
include hearing protection with the associated earbuds. The audio
processor of the eyepiece may enable automatic noise suppression,
such as if a loud noise is detected near the wearer's head. Any of
the control technologies described herein may be used with
automatic noise suppression.
[0371] In an embodiment, the eyepiece may include a nitinol head
strap. The head strap may be a thin band of curved metal which may
either pull out from the arms of the eyepiece or rotate out and
extend out to behind the head to secure the eyepiece to the head.
In one embodiment, the tip of the nitinol strap may have a silicone
cover such that the silicone cover is grasped to pull out from the
ends of the arms. In embodiments, only one arm has a nitinol band,
and it gets secured to the other arm to form a strap. In other
embodiments, both arms have a nitinol band and both sides get
pulled out to either get joined to form a strap or independently
grasp a portion of the head to secure the eyepiece on the wearer's
head.
[0372] Referring to FIG. 21, the eyepiece may include one or more
adjustable wrap around extendable arms 2134. The adjustable wrap
around extendable arms 2134 may secure the position of the eyepiece
to the user's head. One or more of the extendable arms 2134 may be
made out of a shape memory material. In embodiments, one or both of
the arms may be made of nitinol and/or any shape-memory material.
In other instances, the end of at least one of the wrap around
extendable arms 2134 may be covered with silicone. Further, the
adjustable wrap around extendable arms 2134 may extend from the end
of an eyepiece arm 2116. They may extend telescopically and/or they
may slide out from an end of the eyepiece arms. They may slide out
from the interior of the eyepiece arms 2116 or they may slide along
an exterior surface of the eyepiece arms 2116. Further, the
extendable arms 2134 may meet and secure to each other. The
extendable arms may also attach to another portion of the head
mounted eyepiece to create a means for securing the eyepiece to the
user's head. The wrap around extendable arms 2134 may meet to
secure to each other, interlock, connect, magnetically couple, or
secure by other means so as to provide a secure attachment to the
user's head. In embodiments, the adjustable wrap around extendable
arms 2134 may also be independently adjusted to attach to or grasp
portions of the user's head. As such the independently adjustable
arms may allow the user increased customizability for a
personalized fit to secure the eyepiece to the user's head.
Further, in embodiments, at least one of the wrap around extendable
arms 2134 may be detachable from the head mounted eyepiece. In yet
other embodiments, the wrap around extendable arms 2134 may be an
add-on feature of the head mounted eyepiece. In such instances, the
user may choose to put extendable, non-extendable or other arms on
to the head mounted eyepiece. For example, the arms may be sold as
a kit or part of a kit that allows the user to customize the
eyepiece to his or her specific preferences. Accordingly, the user
may customize that type of material from which the adjustable wrap
around extendable arm 2134 is made by selecting a different kit
with specific extendable arms suited to his preferences.
Accordingly, the user may customize his eyepiece for his particular
needs and preferences.
[0373] In yet other embodiments, an adjustable strap, 2142, may be
attached to the eyepiece arms such that it extends around the back
of the user's head in order to secure the eyepiece in place. The
strap may be adjusted to a proper fit. It may be made out of any
suitable material, including but not limited to rubber, silicone,
plastic, cotton and the like.
[0374] In an embodiment, the eyepiece may include security
features, such as M-Shield Security, Secure content, DSM, Secure
Runtime, IPSec, and the like. Other software features may include:
User Interface, Apps, Framework, BSP, Codecs, Integration, Testing,
System Validation, and the like.
[0375] In an embodiment, the eyepiece materials may be chosen to
enable ruggedization.
[0376] In an embodiment, the eyepiece may be able to access a 3G
access point that includes a 3G radio, an 802.11b connection and a
Bluetooth connection to enable hopping data from a device to a
3G-enabled embodiment of the eyepiece.
[0377] A further embodiment of the eyepiece may be used to provide
biometric data collection and result reporting. Biometric data may
be visual biometric data, such as facial biometric data or iris
biometric data, or may be audio biometric data. FIG. 39 depicts an
embodiment providing biometric data capture. The assembly, 3900
incorporates the eyepiece 100, discussed above in connection with
FIG. 1. Eyepiece 100 provides an interactive head-mounted eyepiece
that includes an optical assembly. Other eyepieces providing
similar functionality may also be used. Eyepieces may also
incorporate global positioning system capability to permit location
information display and reporting.
[0378] The optical assembly allows a user to view the surrounding
environment, including individuals in the vicinity of the wearer.
An embodiment of the eyepiece allows a user to biometrically
identify nearby individuals using facial images and iris images or
both facial and iris images or audio samples. The eyepiece
incorporates a corrective element that corrects a user's view of
the surrounding environment and also displays content provided to
the user through an integrated processor and image source. The
integrated image source introduces the content to be displayed to
the user to the optical assembly.
[0379] The eyepiece also includes an optical sensor for capturing
biometric data. The integrated optical sensor, in an embodiment, may
incorporate a camera mounted on the eyepiece. This camera is used
to capture biometric images of an individual near the user of the
eyepiece. The user directs the optical sensor or the camera toward
a nearby individual by positioning the eyepiece in the appropriate
direction, which may be done just by looking at the individual. The
user may select whether to capture one or more of a facial image,
an iris image, or an audio sample.
[0380] The biometric data that may be captured by the eyepiece
illustrated in FIG. 39 includes facial images for facial
recognition, iris images for iris recognition, and audio samples
for voice identification. The eyepiece 100 incorporates multiple
microphones 3902 in an endfire array disposed along both the right
and left temples of the eyepiece 100. The microphone arrays 3902
are specifically tuned to enable capture of human voices in an
environment with a high level of ambient noise. Microphones 3902
provide selectable options for improved audio capture, including
omni-directional operation, or directional beam operation.
Directional beam operation allows a user to record audio samples
from a specific individual by steering the microphone array in the
direction of the subject individual.
[0381] Audio biometric capture is enhanced by incorporating phased
array audio and video tracking for audio and video capture. Audio
tracking allows for continuing to capture an audio sample when the
target individual is moving in an environment with other noise
sources.
[0382] To provide power for the display optics and biometric data
collection, the eyepiece 100 also incorporates a lithium-ion battery
3904 that is capable of operating for over twelve hours on a
single charge. In addition, the eyepiece 100 also incorporates a
processor and solid-state memory 3906 for processing the captured
biometric data. The processor and memory are configurable to
function with any software or algorithm used as part of a biometric
capture protocol or format, such as the .wav format.
[0383] A further embodiment of the eyepiece assembly 3900 provides
an integrated communications facility that transmits the captured
biometric data to a remote facility that stores the biometric data
in a biometric data database. The biometric data database
interprets the captured biometric data and prepares content for
display on the eyepiece.
[0384] In operation, a wearer of the eyepiece desiring to capture
biometric data from a nearby observed individual positions himself
or herself so that the individual appears in the field of view of
the eyepiece. Once in position the user initiates capture of
biometric information. Biometric information that may be captured
includes iris images, facial images, and audio data.
[0385] In operation, a wearer of the eyepiece desiring to capture
audio biometric data from a nearby observed individual positions
himself or herself so that the individual is near the
eyepiece, specifically, near the microphone arrays located in the
eyepiece temples. Once in position the user initiates capture of
audio biometric information. This audio biometric information
consists of a recorded sample of the target individual speaking.
Audio samples may be captured in conjunction with visual biometric
data, such as iris and facial images.
[0386] To capture an iris image, the wearer/user observes the
desired individual and positions the eyepiece such that the optical
sensor assembly or camera may collect an image of the biometric
parameters of the desired individual. Once captured the eyepiece
processor and solid-state memory prepare the captured image for
transmission to the remote computing facility for further
processing.
[0387] The remote computing facility receives the transmitted
biometric image and compares the transmitted image to previously
captured images of the same type. Iris or facial images are
compared with previously collected iris or facial images to
determine if the individual has been previously encountered and
identified.
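As one illustrative example of such a comparison, iris templates are often matched by the fractional Hamming distance between binary codes; the sketch below assumes that representation, along with a hypothetical match threshold and enrollment store, and is not necessarily the method employed by the remote computing facility:

    # Illustrative sketch: compare a captured iris template to enrolled templates by
    # fractional Hamming distance. The bit-string templates, the 0.32 threshold, and
    # the enrollment store are assumptions.
    import numpy as np

    MATCH_THRESHOLD = 0.32   # fraction of disagreeing bits below which codes match

    def hamming_fraction(code_a, code_b, mask_a, mask_b):
        """Fraction of disagreeing bits over the region where both codes are valid."""
        valid = mask_a & mask_b
        if valid.sum() == 0:
            return 1.0
        return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

    def match_iris(probe_code, probe_mask, enrolled):
        """Return (identity, score) for the best-scoring template, or (None, score)."""
        best_id, best_score = None, 1.0
        for identity, (code, mask) in enrolled.items():
            score = hamming_fraction(probe_code, code, probe_mask, mask)
            if score < best_score:
                best_id, best_score = identity, score
        return (best_id, best_score) if best_score <= MATCH_THRESHOLD else (None, best_score)

    rng = np.random.default_rng(1)
    code = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
    mask = np.ones(2048, dtype=bool)
    enrolled = {"record_17": (code.copy(), mask.copy())}
    identity, score = match_iris(code, mask, enrolled)   # exact copy -> score 0.0
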
[0388] Once the comparison has been made, the remote computing
facility transmits a report of the comparison to the wearer/user's
eyepiece, for display. The report may indicate that the captured
biometric image matches previously captured images. In such cases,
the user receives a report including the identity of the
individual, along with other identifying information or statistics.
Not all captured biometric data allows for an unambiguous
determination of identity. In such cases, the remote computing
facility provides a report of findings and may request the user to
collect additional biometric data, possibly of a different type, to
aid in the identification and comparison process. Visual biometric
data may be supplemented with audio biometric data as a further aid
to identification.
[0389] Facial images are captured in a similar manner as iris
images. The field of view is necessarily larger, due to the size of
the images collected. This also permits the user to stand further
off from the subject whose facial biometric data is being
captured.
[0390] In operation the user may have originally captured a facial
image of the individual. However, the facial image may be
incomplete or inconclusive because the individual may be wearing
clothing or other apparel, such as a hat, that obscures facial
features. In such a case, the remote computing facility may request
that a different type of biometric capture be used and additional
images or data be transmitted. In the case described above, the
user may be directed to obtain an iris image to supplement the
captured facial image. In other instances, the additional requested
data may be an audio sample of the individual's voice.
[0391] FIG. 40 illustrates capturing an iris image for iris
recognition. The figure illustrates the focus parameters used to
analyze the image and includes a geographical location of the
individual at the time of biometric data capture. FIG. 40 also
depicts a sample report that is displayed on the eyepiece.
[0392] FIG. 41 illustrates capture of multiple types of biometric
data, in this instance, facial and iris images. The capture may be
done at the same time, or by request of the remote computing
facility if a first type of biometric data leads to an inconclusive
result.
[0393] FIG. 42 shows the electrical configuration of the multiple
microphone arrays contained in the temples of the eyepiece of FIG.
39. The endfire microphone arrays allow for greater discrimination
of signals and better directionality at a greater distance. Signal
processing is improved by incorporating a delay into the
transmission line of the back microphone. The use of dual
omni-directional microphones enables switching from an
omni-directional microphone to a directional microphone. This
allows for better direction finding for audio capture of a desired
individual. FIG. 43 illustrates the directionality improvements
available with multiple microphones.
[0394] The multiple microphones may be arranged in a composite
microphone array. Instead of using one standard high quality
microphone to capture an audio sample, the eyepiece temple pieces
house multiple microphones of different character. One example of
multiple microphone use uses microphones from cut off cell phones
to reproduce the exact electrical and acoustic properties of the
individual's voice. This sample is stored for future comparison in
a database. If the individual's voice is later captured, the
earlier sample is available for comparison, and will be reported to
the eyepiece user, as the acoustic properties of the two samples
will match.
[0395] FIG. 44 shows the use of adaptive arrays to improve audio
data capture. By modifying pre-existing algorithms for audio
processing adaptive arrays can be created that allow the user to
steer the directionality of the antenna in three dimensions.
Adaptive array processing permits location of the source of the
speech, thus tying the captured audio data to a specific
individual. Array processing permits simple summing of the cardioid
elements of the signal to be done either digitally or using analog
techniques. In normal use, a user should switch the microphone
between the omni-directional pattern and the directional array. The
processor allows for beamforming, array steering and adaptive array
processing, to be performed on the eyepiece.
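A minimal sketch of delay-and-sum beamforming for a two-element endfire pair appears below; the element spacing, sample rate, and frequency-domain delay are assumptions, and the adaptive array processing described above is not limited to this form:

    # Illustrative sketch: delay-and-sum beamforming for a two-element endfire array.
    # Spacing, sample rate, and the fractional-delay method are assumptions.
    import numpy as np

    SAMPLE_RATE = 16000.0     # Hz
    SPACING_M = 0.02          # distance between front and back microphones
    SPEED_OF_SOUND = 343.0    # m/s

    def delay_and_sum(front, back, steer_deg=0.0):
        """Steer toward steer_deg (0 = endfire, along the array axis) and sum the channels."""
        # Extra path length to the back microphone for a source at steer_deg.
        delay_s = SPACING_M * np.cos(np.radians(steer_deg)) / SPEED_OF_SOUND
        # Apply the compensating delay to the front channel in the frequency domain.
        spectrum = np.fft.rfft(front)
        freqs = np.fft.rfftfreq(len(front), d=1.0 / SAMPLE_RATE)
        delayed_front = np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * delay_s),
                                     n=len(front))
        return 0.5 * (delayed_front + back)

    t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
    front = np.sin(2 * np.pi * 440 * t)
    back = np.sin(2 * np.pi * 440 * (t - SPACING_M / SPEED_OF_SOUND))   # on-axis source
    output = delay_and_sum(front, back, steer_deg=0.0)
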
[0396] In an embodiment, the integrated camera may continuously
record a video file, and the integrated microphone may continuously
record an audio file. The integrated processor of the eyepiece may
enable event tagging in long sections of the continuous audio or
video recording. For example, a full day of passive recording may
be tagged whenever an event, conversation, encounter, or other item
of interest takes place. Tagging may be accomplished through the
explicit press of a button, a noise or physical tap, a hand
gesture, or any other control technique described herein. A marker
may be placed in the audio or video file or stored in a metadata
header. In embodiments, the marker may include the GPS coordinate
of the event, conversation, encounter, or other item of interest.
In other embodiments, the marker may be time-synced with a GPS log
of the day. Other logic based triggers can also tag the audio or
video file such as proximity relationships to other users, devices,
locations, or the like.
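As an illustrative sketch, a marker of this kind could be appended to a sidecar metadata log alongside the continuous recording; the JSON lines format and the field names are assumptions, since the disclosure also contemplates markers placed directly in the audio or video file or its metadata header:

    # Illustrative sketch: append event markers (trigger, media offset, GPS fix) to a
    # sidecar metadata file alongside a continuous recording. Format and field names
    # are assumptions.
    import json, time

    class EventTagger:
        def __init__(self, sidecar_path, recording_started_at):
            self.sidecar_path = sidecar_path
            self.t0 = recording_started_at

        def tag(self, trigger, gps_fix=None, note=""):
            marker = {
                "trigger": trigger,                           # e.g. "button", "tap", "gesture"
                "offset_s": round(time.time() - self.t0, 3),  # position within the recording
                "gps": gps_fix,                               # (lat, lon) or None
                "note": note,
            }
            with open(self.sidecar_path, "a") as f:
                f.write(json.dumps(marker) + "\n")
            return marker

    tagger = EventTagger("day_recording.tags.jsonl", recording_started_at=time.time() - 3600)
    tagger.tag("tap", gps_fix=(37.7793, -122.4193), note="conversation of interest")
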
[0397] The methods and systems described herein may be deployed in
part or in whole through a machine that executes computer software,
program codes, and/or instructions on a processor. The processor
may be part of a server, client, network infrastructure, mobile
computing platform, stationary computing platform, or other
computing platform. A processor may be any kind of computational or
processing device capable of executing program instructions, codes,
binary instructions and the like. The processor may be or include a
signal processor, digital processor, embedded processor,
microprocessor or any variant such as a co-processor (math
co-processor, graphic co-processor, communication co-processor and
the like) and the like that may directly or indirectly facilitate
execution of program code or program instructions stored thereon.
In addition, the processor may enable execution of multiple
programs, threads, and codes. The threads may be executed
simultaneously to enhance the performance of the processor and to
facilitate simultaneous operations of the application. By way of
implementation, methods, program codes, program instructions and
the like described herein may be implemented in one or more thread.
The thread may spawn other threads that may have assigned
priorities associated with them; the processor may execute these
threads based on priority or any other order based on instructions
provided in the program code. The processor may include memory that
stores methods, codes, instructions and programs as described
herein and elsewhere. The processor may access a storage medium
through an interface that may store methods, codes, and
instructions as described herein and elsewhere. The storage medium
associated with the processor for storing methods, programs, codes,
program instructions or other type of instructions capable of being
executed by the computing or processing device may include but may
not be limited to one or more of a CD-ROM, DVD, memory, hard disk,
flash drive, RAM, ROM, cache and the like.
[0398] A processor may include one or more cores that may enhance
speed and performance of a multiprocessor. In embodiments, the
processor may be a dual core processor, a quad core processor, another
chip-level multiprocessor and the like that combines two or more
independent cores (called a die).
[0399] The methods and systems described herein may be deployed in
part or in whole through a machine that executes computer software
on a server, client, firewall, gateway, hub, router, or other such
computer and/or networking hardware. The software program may be
associated with a server that may include a file server, print
server, domain server, internet server, intranet server and other
variants such as secondary server, host server, distributed server
and the like. The server may include one or more of memories,
processors, computer readable media, storage media, ports (physical
and virtual), communication devices, and interfaces capable of
accessing other servers, clients, machines, and devices through a
wired or a wireless medium, and the like. The methods, programs or
codes as described herein and elsewhere may be executed by the
server. In addition, other devices required for execution of
methods as described in this application may be considered as a
part of the infrastructure associated with the server.
[0400] The server may provide an interface to other devices
including, without limitation, clients, other servers, printers,
database servers, print servers, file servers, communication
servers, distributed servers, social networks, and the like.
Additionally, this coupling and/or connection may facilitate remote
execution of program across the network. The networking of some or
all of these devices may facilitate parallel processing of a
program or method at one or more location. In addition, any of the
devices attached to the server through an interface may include at
least one storage medium capable of storing methods, programs, code
and/or instructions. A central repository may provide program
instructions to be executed on different devices. In this
implementation, the remote repository may act as a storage medium
for program code, instructions, and programs.
[0401] The software program may be associated with a client that
may include a file client, print client, domain client, internet
client, intranet client and other variants such as secondary
client, host client, distributed client and the like. The client
may include one or more of memories, processors, computer readable
media, storage media, ports (physical and virtual), communication
devices, and interfaces capable of accessing other clients,
servers, machines, and devices through a wired or a wireless
medium, and the like. The methods, programs or codes as described
herein and elsewhere may be executed by the client. In addition,
other devices required for execution of methods as described in
this application may be considered as a part of the infrastructure
associated with the client.
[0402] The client may provide an interface to other devices
including, without limitation, servers, other clients, printers,
database servers, print servers, file servers, communication
servers, distributed servers and the like. Additionally, this
coupling and/or connection may facilitate remote execution of
program across the network. The networking of some or all of these
devices may facilitate parallel processing of a program or method
at one or more location. In addition, any of the devices attached
to the client through an interface may include at least one storage
medium capable of storing methods, programs, applications, code
and/or instructions. A central repository may provide program
instructions to be executed on different devices. In this
implementation, the remote repository may act as a storage medium
for program code, instructions, and programs.
[0403] The methods and systems described herein may be deployed in
part or in whole through network infrastructures. The network
infrastructure may include elements such as computing devices,
servers, routers, hubs, firewalls, clients, personal computers,
communication devices, routing devices and other active and passive
devices, modules and/or components as known in the art. The
computing and/or non-computing device(s) associated with the
network infrastructure may include, apart from other components, a
storage medium such as flash memory, buffer, stack, RAM, ROM and
the like. The processes, methods, program codes, instructions
described herein and elsewhere may be executed by one or more of
the network infrastructural elements.
[0404] The methods, program codes, and instructions described
herein and elsewhere may be implemented on a cellular network
having multiple cells. The cellular network may be either a frequency
division multiple access (FDMA) network or a code division multiple
access (CDMA) network. The cellular network may include mobile
devices, cell sites, base stations, repeaters, antennas, towers,
and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh,
or other network types.
[0405] The methods, program codes, and instructions described
herein and elsewhere may be implemented on or through mobile
devices. The mobile devices may include navigation devices, cell
phones, mobile phones, mobile personal digital assistants, laptops,
palmtops, netbooks, pagers, electronic books readers, music players
and the like. These devices may include, apart from other
components, a storage medium such as a flash memory, buffer, RAM,
ROM and one or more computing devices. The computing devices
associated with mobile devices may be enabled to execute program
codes, methods, and instructions stored thereon. Alternatively, the
mobile devices may be configured to execute instructions in
collaboration with other devices. The mobile devices may
communicate with base stations interfaced with servers and
configured to execute program codes. The mobile devices may
communicate on a peer to peer network, mesh network, or other
communications network. The program code may be stored on the
storage medium associated with the server and executed by a
computing device embedded within the server. The base station may
include a computing device and a storage medium. The storage device
may store program codes and instructions executed by the computing
devices associated with the base station.
[0406] The computer software, program codes, and/or instructions
may be stored and/or accessed on machine readable media that may
include: computer components, devices, and recording media that
retain digital data used for computing for some interval of time;
semiconductor storage known as random access memory (RAM); mass
storage typically for more permanent storage, such as optical
discs, forms of magnetic storage like hard disks, tapes, drums,
cards and other types; processor registers, cache memory, volatile
memory, non-volatile memory; optical storage such as CD, DVD;
removable media such as flash memory (e.g. USB sticks or keys),
floppy disks, magnetic tape, paper tape, punch cards, standalone
RAM disks, Zip drives, removable mass storage, off-line, and the
like; other computer memory such as dynamic memory, static memory,
read/write storage, mutable storage, read only, random access,
sequential access, location addressable, file addressable, content
addressable, network attached storage, storage area network, bar
codes, magnetic ink, and the like.
[0407] The methods and systems described herein may transform
physical and/or intangible items from one state to another. The
methods and systems described herein may also transform data
representing physical and/or intangible items from one state to
another.
[0408] The elements described and depicted herein, including in
flow charts and block diagrams throughout the figures, imply
logical boundaries between the elements. However, according to
software or hardware engineering practices, the depicted elements
and the functions thereof may be implemented on machines through
computer executable media having a processor capable of executing
program instructions stored thereon as a monolithic software
structure, as standalone software modules, or as modules that
employ external routines, code, services, and so forth, or any
combination of these, and all such implementations may be within
the scope of the present disclosure. Examples of such machines may
include, but may not be limited to, personal digital assistants,
laptops, personal computers, mobile phones, other handheld
computing devices, medical equipment, wired or wireless
communication devices, transducers, chips, calculators, satellites,
tablet PCs, electronic books, gadgets, electronic devices, devices
having artificial intelligence, computing devices, networking
equipment, servers, routers, processor-embedded eyewear and the
like. Furthermore, the elements depicted in the flow chart and
block diagrams or any other logical component may be implemented on
a machine capable of executing program instructions. Thus, while
the foregoing drawings and descriptions set forth functional
aspects of the disclosed systems, no particular arrangement of
software for implementing these functional aspects should be
inferred from these descriptions unless explicitly stated or
otherwise clear from the context. Similarly, it will be appreciated
that the various steps identified and described above may be
varied, and that the order of steps may be adapted to particular
applications of the techniques disclosed herein. All such
variations and modifications are intended to fall within the scope
of this disclosure. As such, the depiction and/or description of an
order for various steps should not be understood to require a
particular order of execution for those steps, unless required by a
particular application, or explicitly stated or otherwise clear
from the context.
[0409] The methods and/or processes described above, and steps
thereof, may be realized in hardware, software or any combination
of hardware and software suitable for a particular application. The
hardware may include a general purpose computer and/or dedicated
computing device or specific computing device or particular aspect
or component of a specific computing device. The processes may be
realized in one or more microprocessors, microcontrollers, embedded
microcontrollers, programmable digital signal processors or other
programmable device, along with internal and/or external memory.
The processes may also, or instead, be embodied in an application
specific integrated circuit, a programmable gate array,
programmable array logic, or any other device or combination of
devices that may be configured to process electronic signals. It
will further be appreciated that one or more of the processes may
be realized as a computer executable code capable of being executed
on a machine readable medium.
[0410] The computer executable code may be created using a
structured programming language such as C, an object oriented
programming language such as C++, or any other high-level or
low-level programming language (including assembly languages,
hardware description languages, and database programming languages
and technologies) that may be stored, compiled or interpreted to
run on one of the above devices, as well as heterogeneous
combinations of processors, processor architectures, or
combinations of different hardware and software, or any other
machine capable of executing program instructions.
[0411] Thus, in one aspect, each method described above and
combinations thereof may be embodied in computer executable code
that, when executing on one or more computing devices, performs the
steps thereof. In another aspect, the methods may be embodied in
systems that perform the steps thereof, and may be distributed
across devices in a number of ways, or all of the functionality may
be integrated into a dedicated, standalone device or other
hardware. In another aspect, the means for performing the steps
associated with the processes described above may include any of
the hardware and/or software described above. All such permutations
and combinations are intended to fall within the scope of the
present disclosure.
[0412] While the present disclosure includes many embodiments shown
and described in detail, various modifications and improvements
thereon will become readily apparent to those skilled in the art.
Accordingly, the spirit and scope of the present invention is not
to be limited by the foregoing examples, but is to be understood in
the broadest sense allowable by law.
[0413] All documents referenced herein are hereby incorporated by
reference.
* * * * *