U.S. patent application number 15/587260 was filed with the patent office on 2017-05-04 and published on 2017-11-09 as publication number 20170323615 for methods and apparatus for active transparency modulation. The applicant listed for this patent is Ostendo Technologies, Inc. Invention is credited to Hussein S. El-Ghoroury and Siddharth S. Hazra.

United States Patent Application 20170323615
Kind Code: A1
Hazra; Siddharth S.; et al.
Publication Date: November 9, 2017
Methods and Apparatus for Active Transparency Modulation
Abstract
A viewing system is provided including an active transparency
modulation film in the form of addressable arrays of electrochromic
pixel structures. The viewing system may be used in, for instance,
a head-mounted display (HMD) or head-up display (HUD). The film is
located on one side of a viewing lens of the system and is
selectively variable from opaque to transparent at certain regions
on the lens to provide an opaque silhouetted image upon which a
virtual image is projected. The viewing system including the film
and pixel structure therefore provides improved viewing by
minimizing the undesirable effects of image ghosting in a viewed
scene.
Inventors: Hazra; Siddharth S. (Carlsbad, CA); El-Ghoroury; Hussein S. (Carlsbad, CA)
Applicant: Ostendo Technologies, Inc., Carlsbad, CA, US
Family ID: 60203402
Appl. No.: 15/587260
Filed: May 4, 2017
Related U.S. Patent Documents
Application Number 62/332,168, filed May 5, 2016
Current U.S. Class: 1/1
Current CPC Class: G02B 2027/0118 (20130101); G02B 2027/0178 (20130101); G09G 3/002 (20130101); G09G 2320/0257 (20130101); G09G 2340/12 (20130101); G06F 3/147 (20130101); G09G 2360/144 (20130101); G06T 1/20 (20130101); G09G 2360/08 (20130101); G09G 3/38 (20130101); G06T 11/60 (20130101); G02B 2027/014 (20130101); G02B 27/0172 (20130101); G09G 2320/064 (20130101)
International Class: G09G 3/38 (20060101); G06T 1/20 (20060101); G06T 11/60 (20060101); G02B 27/01 (20060101)
Claims
1. A see-through optical lens or waveguide element comprising: an
active transparency modulation film comprising an electrochromic
pixel layer applied to one side of the see-through optical lens or
waveguide element, each pixel of the electrochromic pixel layer
being electrically controllable by a processor to be transparent in
a first state and opaque in a second state, whereby patterns of
opaque areas may be electrically controlled for projection of
images onto the back side of the patterns without the images being
ghosted by light coming from the backside of the images.
2. The see-through optical lens or waveguide element of claim 1
wherein the active transparency modulation film is a composite of
one or more of transparent polymer substrates, transparent
conductive oxide electrodes, thin film transistor arrays, or
electrochromic pixel arrays.
3. The see-through optical lens or waveguide element of claim 1
wherein the active transparency modulation film is a tungsten
trioxide thin film or polymer dispersed liquid crystal based film
laminated on a surface of the see-through optical lens or waveguide
element.
4. The see-through optical lens or waveguide element of claim 1
wherein the see-through optical lens or waveguide element is
comprised of glass.
5. The see-through optical lens or waveguide element of claim 1
further comprising a graphics processing unit and memory storing a
rendered outline and depth buffer information and coupled to a
video coprocessor for activating on the electrochromic film only
the pixels that are contained inside the outline to create a region
of reduced transparency.
6. The see-through optical lens or waveguide element of claim 5
further comprising an imaging unit including a light engine coupled
to the graphics processing unit to project an image to the region
of reduced transparency to enhance the solidness of the image using
the active transparency modulation film.
7. The see-through optical lens or waveguide element of claim 1
wherein each pixel of the electrochromic pixel layer is modulated
by proportional voltage modulation between transparency and
opaqueness.
8. The see-through optical lens or waveguide element of claim 1
wherein each pixel of the electrochromic pixel layer is
pulse-width modulated for control between transparency and
opaqueness.
9. The see-through optical lens or waveguide element of claim 1 as
the lenses in a pair of glasses, the active transparency modulation
film being oriented to controllably block light from passing
through the lenses toward eyes of a wearer of the glasses.
10. The see-through optical lens or waveguide element of claim 9
further comprising one or more environment monitoring cameras.
11. The see-through optical lens or waveguide element of claim 9
further comprising at least one head tracking sensor and at least
two eye tracking sensors.
12. A method of minimizing the undesirable effects of image
ghosting in a viewed scene of a viewing system, the method
comprising: providing a see-through optical lens or waveguide
element with an active transparency modulation film comprising an
electrochromic pixel layer applied to one side of the see-through
optical lens or waveguide element, each pixel being electrically
controllable by a processor to be transparent in a first state and
to be opaque in a second state, and variable between the first
state and the second state by proportional voltage or pulse width
modulation; defining, by control of selected pixels of the
electrochromic pixel layer, an area of lesser transparency to a
viewer on the see-through optical lens or waveguide element; and
projecting an image onto the area of lesser transparency from the
viewing side of the see-through optical lens or waveguide element;
whereby the image is superimposed onto the area of lesser
transparency.
13. The method of claim 12 further comprising controlling which
pixels are selected to determine the position of the area of lesser
transparency of the see-through optical lens or waveguide element
and controlling the position of the projected image on the
see-through optical lens or waveguide element to keep the projected
image superimposed on the area of lesser transparency as the
projected image moves relative to the see-through optical lens or
waveguide element.
14. The method of claim 12 wherein the image is generated under
program control.
15. The method of claim 12 wherein a degree of transparency of the
area of lesser transparency is based on an ambient illumination of
the viewer's environment.
16. The method of claim 12 wherein defining the area of lesser
transparency comprises determining a boundary of the image using an
image segmentation process.
17. The method of claim 16 wherein the image segmentation process
is a convex hull algorithm.
18. The method of claim 12 further comprising scanning for a light
source and determining a location of the light source, wherein
defining the area of lesser transparency is based on the location
of the light source.
19. The method of claim 12 further comprising obtaining information
regarding the viewer's eye gaze direction, interpupillary distance
and head orientation to calculate a depth of the viewer's
focus.
20. A non-transitory machine-readable medium having instructions
stored thereon, which when executed by a processor cause the
processor to perform the following method of minimizing the
undesirable effects of image ghosting in a viewed scene of a
viewing system, the method comprising: providing a see-through
optical lens or waveguide element with an active transparency
modulation film comprising an electrochromic pixel layer applied
to one side of the see-through optical lens or waveguide element,
each pixel being electrically controllable by a processor to be
transparent in a first state and to be opaque in a second state,
and variable between the first state and the second state by
proportional voltage or pulse width modulation; defining, by
control of selected pixels of the electrochromic pixel layer, an
area of lesser transparency to a viewer on the see-through optical
lens or waveguide element; and projecting an image onto the area of
lesser transparency from the viewing side of the see-through
optical lens or waveguide element; whereby the image is
superimposed onto the area of lesser transparency.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 62/332,168, filed May 5, 2016, the entirety
of which is incorporated herein by reference.
FIELD
[0002] One aspect of the present disclosure generally relates to
active transparency modulation of lens elements for near-eye
displays, wearable displays, augmented reality displays and virtual
reality displays.
BACKGROUND
[0003] Numerous deficiencies exist in passive optics and waveguides
currently used in near-eye, wearable and projected displays in
augmented, mixed and virtual reality applications. Conventional
passive optics tend to create see-through or "ghosted" images or
objects instead of an impression of solidity and lead to a ghosted
effect of the displayed object as perceived by a viewer.
Stereoscopy with ghosted objects also creates complicated issues
for binocular vision applications.
[0004] Conventional transparent lens/display substrates also
typically suffer from display quality degradation in the presence
of ambient illumination or specular reflection sources in the
environment around the user from sources such as sunlight, lamps,
headlights or reflections from reflective surfaces.
[0005] Current attempts to overcome the above-described ghosting
problem have included, for instance, increasing display brightness
and/or contrast and reducing the light that is admitted through the
viewing lens using a visor element that partially gates the amount
of light admitted to the user's eyes. Unfortunately, such prior
approaches tend to reduce the "immersiveness" of the display
quality and may also increase power consumption. Additionally, such
prior approaches are typically application-dependent and
necessarily require the use of three distinct, application-specific
technologies to meet the different requirements for each of the
mixed, augmented or virtual reality systems respectively.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The embodiments herein are illustrated by way of example and
not by way of limitation in the figures of the accompanying
drawings in which like references indicate similar elements. It
should be noted that references to "an" or "one" embodiment in this
disclosure are not necessarily to the same embodiment, and they
mean at least one.
[0007] In the drawings:
[0008] FIG. 1 illustrates a first example for explaining a viewing
system for active transparency modulation according to an
embodiment.
[0009] FIG. 2 illustrates a second example for explaining a viewing
system for active transparency modulation according to an
embodiment.
[0010] FIG. 3 illustrates an example for explaining a wearable
near-eye display according to an embodiment.
[0011] FIG. 4 illustrates a view for explaining a "ghosted"
translucent image of a predetermined augmented reality image (e.g.,
a person) as viewed through a pair of conventional augmented
reality near-eye glasses.
[0012] FIG. 5 illustrates a view for explaining an opaque
silhouetted augmented reality image (e.g., a person) selectively
formed on an active transparency modulation film as viewed through
a pair of augmented reality near-eye glasses incorporating an
active transparency modulation film according to an embodiment.
[0013] FIG. 6 illustrates a view for explaining an "unghosted"
augmented reality image (e.g., a person) projected upon and
superimposed over an opaque silhouetted image such as the opaque
silhouetted augmented reality image of FIG. 5 according to an
embodiment.
[0014] FIG. 7 illustrates a view for explaining a virtual reality
image (e.g., a person) projected upon an opaque viewing area of an
active transparency modulation film as viewed in a pair of
virtual-reality near-eye glasses according to an embodiment.
[0015] FIG. 8 illustrates a view for explaining an automobile
heads-up display (HUD) incorporating an active transparency
modulation film in which a displayed image is projected upon and
superimposed over a predetermined opaque region of the HUD
according to an embodiment.
[0016] FIGS. 9A-9E illustrate examples for explaining selective
object masking and projected object textures superimposed on
physical objects according to an embodiment.
[0017] FIGS. 10A-10D illustrate examples for explaining convex
hulls according to an embodiment.
[0018] FIG. 11 illustrates a flow diagram for explaining an example
method for active transparency modulation according to an
embodiment herein.
DETAILED DESCRIPTION
[0019] The present disclosure and various of its embodiments are
set forth in the following description of the embodiments which are
presented as illustrated examples of the disclosure in the
subsequent claims. It is expressly noted that the disclosure as
defined by such claims may be broader than the illustrated
embodiments described below. The word "exemplary" is used herein to
mean serving as an example, instance, or illustration. Any aspect
or design described herein as "exemplary" is not necessarily to be
construed as preferred or advantageous over other aspects or
designs.
[0020] According to one aspect of the disclosure herein, a viewing
system is provided including an active transparency modulation film
comprised of addressable arrays of electrochromic pixel structures
and electronics that may be used in a head mounted display (HMD)
and a head-up display (HUD). In one embodiment, the active
transparency modulation film may be electrically controllable from
highly transparent to highly reflective. The active transparency
modulation film, pixel structures and supporting processing
electronics (e.g., circuitry) provide improved viewing, such as by
minimizing the undesirable effects of image ghosting in a viewed
scene.
[0021] By virtue of the embodiments disclosed herein, it is
possible to provide a low-power system solution that can be
configured to transition between mixed, augmented, and virtual
reality modalities, such that the deficiencies commonly found in
conventional augmented reality, mixed reality, and virtual reality
wearable devices are addressed. For example, display quality may be
improved and ghosting may be reduced by virtue of the embodiments
disclosed herein.
[0022] Turning to the embodiment depicted in FIG. 1, a viewing
system 100 may comprise an imaging unit 120 comprising a light
engine and an imager to project displayed content (e.g., still or
moving images). The viewing system 100 may also comprise a
see-through patterned optical lens 110 or one or more waveguide
elements operating in cooperation with one or more user-defined
refractive or diffractive optical elements and beam-splitting
elements that are disposed within the thickness of optical lens 110,
whereby displayed content from imaging unit 120 is transmitted
through the thickness of optical lens 110 and projected toward the
pupil of the viewer. Examples of such devices incorporating the
above refractive or diffractive optical and beam-splitting elements
are disclosed in, for instance, copending U.S. patent application
Ser. No. 15/381,459, filed Dec. 16, 2016, entitled "Systems and
Methods for Augmented Near-Eye Wearable Displays", and, U.S. patent
application Ser. No. 15/294,447, filed Oct. 14, 2016, entitled
"Dual-Mode Augmented/Virtual Reality (AR/VR) Near-Eye Wearable
Displays", the entirety of each of which is incorporated herein by
reference. Lens 110 may be comprised of glass or polymer.
[0023] Lens 110 includes an active transparency modulation film 115
comprising an electrochromic pixel layer that is constructed to
allow alteration of light transmission properties of film 115 by
applying an electrical current or potential. In the embodiment of
FIG. 1, film 115 is an electrochromic pixel layer applied to the
scene-facing side of lens 110. In other embodiments, film 115 may
be applied to either side of the lens 110. In one embodiment, film
115 itself may be a composite of transparent polymer substrates,
transparent conductive oxide electrodes, thin film transistor
arrays, and electrochromic pixel arrays. In other embodiments, film
115 may be a composite of any combination of: transparent polymer
substrates, transparent conductive oxide electrodes, thin film
transistor arrays, and electrochromic pixel arrays. In embodiments
involving transparent conductive oxide electrodes, film 115 may be
in electrical connection through the transparent conductive oxide
electrodes to the other components of viewing system 100.
[0024] In one embodiment, pixels of the electrochromic pixel layer
may be pulse-width modulated ("PWM") in order to actively control
transparency modulation of the film 115. In addition, proportional
voltage or current modulation can be used for ratiometric control
of the admitted ambient light through the electrochromic film layer
115. The translucency of the lens element 110 can thereby be
modulated between optically clear and opaque states. Accordingly,
viewing system 100 may be switched from augmented or mixed
reality modes to a virtual reality mode.
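
By way of a non-limiting illustrative sketch only, pulse-width modulation of a single electrochromic pixel might be expressed as follows, where the duty cycle sets the fraction of each period for which the pixel is driven opaque. The set_pixel_drive interface below is an assumed placeholder for a device-specific pixel driver and is not part of this disclosure.

# Illustrative sketch: PWM control of one electrochromic pixel's transparency.
# set_pixel_drive stands in for whatever device-specific driver applies a
# drive voltage to a pixel electrode; it is an assumed interface only.
import time

def pwm_pixel(set_pixel_drive, duty_cycle, period_s=0.02, cycles=50):
    # duty_cycle = 0.0 leaves the pixel fully transparent; 1.0 holds it opaque.
    duty_cycle = min(max(duty_cycle, 0.0), 1.0)
    for _ in range(cycles):
        set_pixel_drive(True)                    # excite: pixel turns opaque
        time.sleep(duty_cycle * period_s)
        set_pixel_drive(False)                   # release: pixel relaxes to clear
        time.sleep((1.0 - duty_cycle) * period_s)

A duty cycle between 0.0 and 1.0 thereby provides the ratiometric control of admitted ambient light described above.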
[0025] In one embodiment, the pixels of the electrochromic film 115
may have a different spatial resolution than the light engine of
the imaging unit 120 used for projected display. For example, the
spatial resolution of the pixels of the electrochromic film 115 may
be lower than that of the light engine of the imaging unit 120.
[0026] In one embodiment, the electrochromic pixel layer (also
referred to herein as the electrochromic film) may be comprised of
materials such as a tungsten trioxide ("WO₃") thin film or a
polymer dispersed liquid crystal ("PDLC")-based film laminated on
the surface of the optical lens 110. These films are bi-stable and
active power is not required to maintain the On or Off state of the
film. In other words, for an electrochromic film that is
electrochromically bi-stable, once a color change has occurred, the
state of film 115 remains even in the absence of excitation or pulse
modulation.
[0027] It is generally known that WO₃ does not typically switch well at high frequencies and is generally not well-suited for active displays due to its relatively slow switching rate of ~100 msec. On the other hand, PDLC-based films can typically be switched at acceptably high rates. While WO₃ exhibits relatively slow switching rates, the content rate (i.e., the rate at which content moves across a user's field of view in a near-eye display) is far slower than the display refresh (frame) rate. This distinction between content and display refresh rates allows the use of electrochromic materials with slower switching frequencies in the embodiments herein. In some embodiments, the display refresh rate may be controlled by imagers such as the Quantum Photonic Imager or "QPI®" imager (discussed below), or by DLP, LCoS, OLED or LBS light modulation engines.
[0028] Thus, the electrochromic content refresh rate is typically slower than the display refresh rate. In one embodiment, the display refresh rate may be approximately 60 Hz or greater and the content refresh rate approximately 10 Hz or greater, making it possible to switch the WO₃ state well within the content refresh rate.
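
As a rough, non-limiting illustration of this timing relationship (the 60 Hz and 10 Hz figures are the example rates given above; render_frame and update_mask are hypothetical callbacks, not part of this disclosure), the mask update may simply run on a slower tick than the display refresh:

# Illustrative sketch: a 60 Hz display loop with a 10 Hz electrochromic mask
# update, per the example rates above. render_frame and update_mask are
# hypothetical callbacks.
DISPLAY_HZ = 60
MASK_HZ = 10

def run(render_frame, update_mask, n_frames=600):
    frames_per_mask_update = DISPLAY_HZ // MASK_HZ   # 6 display frames per step
    for frame in range(n_frames):
        if frame % frames_per_mask_update == 0:
            update_mask()   # slow path: on the order of the ~100 msec WO₃ switch time
        render_frame()      # fast path: runs every display frame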
[0029] In one embodiment, the imager of the imaging unit 120 is
capable of filling the field of view of the range of the possible
positions of a projected image, and only uses a portion of the
projection capability (e.g., a subset of pixels) for smaller images
within that field of view. In one embodiment, the imager moves or
actuates to cover the range of the possible positions of a
projected image with respect to the see-through optical lens or
waveguide element.
[0030] With respect to imagers, a new class of emissive micro-scale
pixel array imager devices has been introduced as disclosed in U.S.
Pat. No. 7,623,560, U.S. Pat. No. 7,767,479, U.S. Pat. No.
7,829,902, U.S. Pat. No. 8,049,231, U.S. Pat. No. 8,243,770, and
U.S. Pat. No. 8,567,960, the contents of each of which is fully
incorporated herein by reference. The disclosed light emitting
structures and devices referred to herein may be based on the
Quantum Photonic Imager or "QPI®" imager. QPI® is a
registered trademark of Ostendo Technologies, Inc. These disclosed
devices desirably feature high brightness, very fast multi-color
light intensity and spatial modulation capabilities, all in a very
small single device size that includes all necessary image
processing drive circuitry. The solid-state light (SSL) emitting
pixels of the disclosed devices may be either a light emitting
diode (LED) or laser diode (LD), or both, whose on-off state is
controlled by drive circuitry contained within a CMOS chip (or
device) upon which the emissive micro-scale pixel array of the
imager is bonded and electronically coupled. The size of the pixels
comprising the disclosed emissive arrays of such imager devices is
typically in the range of approximately 5-20 microns with a typical
emissive surface area being in the range of approximately 15-150
square millimeters. The pixels within the above emissive
micro-scale pixel array devices are individually addressable
spatially, chromatically and temporally, typically through the
drive circuitry of its CMOS chip. The brightness of the light
generated by such imager devices can reach multiples of 100,000 cd/m²
at reasonably low power consumption.
[0031] The QPI imager is well-suited for use in the imagers
described herein. See U.S. Pat. No. 7,623,560, U.S. Pat. No.
7,767,479, U.S. Pat. No. 7,829,902, U.S. Pat. No. 8,049,231, U.S.
Pat. No. 8,243,770, and U.S. Pat. No. 8,567,960. However, it is to
be understood that the QPI imagers are merely examples of the types
of devices that may be used in the present disclosure, which
devices may, by way of a non-limiting set of examples, include
OLED, LED, and micro-LED imaging devices. Thus, in the disclosure
herein, references to the QPI imager, display, display device or
imager are to be understood to be for purposes of specificity in
the embodiments disclosed, and not for any limitation of the
present disclosure.
[0032] Returning to the embodiment of FIG. 1, video coprocessor 132
extracts at least one rendered content outline (e.g., boundary of
the content to be displayed) from a video random access memory
(VRAM) of graphics processing unit (GPU) 130 which contains "k
buffer" information. Generally, a k buffer algorithm is a GPU-based
fragment-level sorting algorithm for rendering transparent
surfaces. The extracted outline is provided to the electrochromic
layer control coprocessor 135, which then activates the pixels of the
electrochromic film 115 to block light at only the pixels 155 that
are contained inside the extracted outline. Imaging unit 120
projects the image content onto film 115 in accordance with the
pixels inside the extracted outline.
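
By way of a non-limiting sketch of this step, and assuming the rendered content (with a transparent background) has been read back from GPU VRAM into an RGBA array (the k buffer access itself is device-specific and is not shown), the blocking mask may be derived from the rendered alpha channel and resampled to the film's resolution:

# Illustrative sketch: derive the electrochromic blocking mask from a
# rendered RGBA frame held in a NumPy array. The k buffer/VRAM readback is
# device-specific and assumed to have happened already.
import numpy as np

def blocking_mask(rgba, film_shape, alpha_threshold=0):
    # Boolean mask, at the film's (possibly lower) resolution, of pixels that
    # lie inside the rendered content outline and should be driven opaque.
    covered = rgba[..., 3] > alpha_threshold     # pixels the content covers
    fh, fw = film_shape
    ih, iw = covered.shape
    # Nearest-neighbor resample from imager resolution to film resolution,
    # reflecting that the film may have fewer pixels than the light engine.
    rows = np.arange(fh) * ih // fh
    cols = np.arange(fw) * iw // fw
    return covered[np.ix_(rows, cols)]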
[0033] In this context, blocking refers to reflecting or absorbing
light incident on pixels of the electrochromic film 115. The pixels
are controlled by the electrochromic layer control coprocessor 135 to
reflect or absorb only light incident at the pixels 155 that are
contained inside the extracted outline, resulting in at least some of
the incident light being blocked by the pattern, such that a
portion of the film 115 is opaque to some degree (e.g., the portion
containing pixels 155). In one embodiment, the blocking may be
controlled to range from substantially no blocking to substantially
full blocking. Such control may be achieved by proportional
excitation of at least a portion of electrochromic film 115, or by
pulse modulation of at least a portion of electrochromic film 115,
or by pulse modulation of at least the portion of the
electrochromic film that is at least partially electrochromically
bistable, as discussed above.
[0034] In one embodiment, the intensity of the modulation of the
electrochromic pixel layer 115 (e.g., degree or level of
transparency) may further be controlled based upon the ambient
illumination of the environment of the user.
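
One plausible mapping, offered purely as an illustrative sketch (the lux breakpoints below are assumed values, not taken from this disclosure), scales the mask opacity with the measured ambient illuminance:

# Illustrative sketch: choose a mask opacity level from ambient illuminance.
# The breakpoints (in lux) and opacity bounds are illustrative assumptions.
def mask_opacity(ambient_lux, lo_lux=50.0, hi_lux=10000.0,
                 min_opacity=0.3, max_opacity=1.0):
    # Dim environments need little blocking; direct sunlight calls for the
    # mask to be driven nearly fully opaque.
    if ambient_lux <= lo_lux:
        return min_opacity
    if ambient_lux >= hi_lux:
        return max_opacity
    t = (ambient_lux - lo_lux) / (hi_lux - lo_lux)
    return min_opacity + t * (max_opacity - min_opacity)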
[0035] In one embodiment, one or more of the processors of viewing
system 100 (e.g., GPU 130, video coprocessor 132, electrochromic
layer control coprocessor 135) may also be connected to a memory
block that can be implemented via one or more memory devices
including volatile storage (or memory) devices such as random
access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM),
static RAM (SRAM), or other types of storage devices. The one or
more of the processors may be implemented in software, hardware, or
a combination thereof. For example, the one or more of the
processors can be implemented as software installed and stored in a
persistent storage device, which can be loaded and executed in a
memory by the processor to carry out the processes or operations
described throughout this application. The one or more of the
processors may each represent a single processor or multiple
processors with a single processor core or multiple processor cores
included therein. The one or more of the processors may each
represent a microprocessor, a central processing unit (CPU),
graphic processing unit (GPU), or the like. The one or more of the
processors may each be a complex instruction set computing (CISC)
microprocessor, reduced instruction set computing (RISC)
microprocessor, matched instruction set microprocessor (MISP), very
long instruction word (VLIW) microprocessor, or processor
implementing other instruction sets, or processors implementing a
combination of instruction sets. The one or more of the processors
can also be implemented as executable code programmed or embedded
into dedicated hardware such as an integrated circuit (e.g., an
application specific IC or ASIC), a digital signal processor (DSP),
or a field programmable gate array (FPGA), which can be accessed
via a corresponding driver and/or operating system from an
application. The one or more of the processors may each be a
cellular or baseband processor, a network processor, a graphics
processor, a communications processor, a cryptographic processor,
an embedded processor, or any other type of logic capable of
processing instructions. Furthermore, the one or more of the
processors can be implemented as specific hardware logic in a
processor or processor core as part of an instruction set
accessible by a software component via one or more specific
instructions.
[0036] Turning to the embodiment of FIG. 2, similar to the
embodiment of FIG. 1, viewing system 200 includes lens 210 (e.g.,
glass or polymer see-through patterned optics similar to lens 110),
film 215 (e.g., similar to film 115), electrochromic layer control
coprocessor 235 (e.g., similar to electrochromic layer control
coprocessor 135), GPU 230 (e.g., similar to GPU 130), video
coprocessor 232 (e.g., similar to video coprocessor 132), and an
imaging unit 220 (e.g., similar to imaging unit 120). Also similar
to FIG. 1, video coprocessor 232 determines an outline of content
to be displayed and the outline is provided to electrochromic layer
control coprocessor 235 which then activates the pixels of the
electrochromic film 215 to block light incident at pixels 255 that
are inside the image outline, resulting in at least some of the
incident light being reflected or absorbed by the pattern. Similar
to FIG. 1, pixels of the film layer 215 may be actively controlled
by proportional excitation of at least a portion of electrochromic
film 215, or by pulse modulation of at least a portion of
electrochromic film 215, or by pulse modulation of at least the
portion of the electrochromic film that is at least partially
electrochromically bistable.
[0037] In the embodiment of FIG. 2, viewing system 200 includes one
or more environment (or ambient scene) monitoring cameras 240 that
may obtain data used to generate image outlines by scanning for
high-intensity point or diffuse light sources (e.g., 252). In this
embodiment, video coprocessor 232 segments the high-intensity point
or diffuse light sources 252 from the scene, calculates their
relative spatial distributions, and localizes the locations of the
light sources 252. Video coprocessor 232 then calculates the
location of the light sources 252 relative to the user's eyes with
respect to the displayed image projected by imaging unit 220. This
process may be automated and configured to run alongside display of
the image, or may be turned on manually.
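
A minimal sketch of this segmentation and localization step, assuming an 8-bit grayscale camera frame and standard OpenCV calls (the intensity threshold and minimum region area are assumed values), might read:

# Illustrative sketch: locate high-intensity light sources in a camera frame.
# Assumes an 8-bit grayscale frame; the threshold and minimum area are
# assumptions for illustration.
import cv2

def locate_bright_sources(gray, threshold=240, min_area=20):
    # Return (x, y) centroids of bright regions large enough to matter.
    _, bright = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    sources = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            sources.append((float(centroids[i][0]), float(centroids[i][1])))
    return sources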
[0038] In one embodiment, the intensity of the modulation of the
electrochromic pixel layer 215 may be controlled based upon the
ambient illumination of the environment of the user, and/or any
other data obtained from the one or more environment monitoring
cameras 240 (e.g., the location of the light sources 252 relative
to the user's eyes). For example, as shown in the embodiment of
FIG. 2, the transparency of pixels 250, which comprise a portion of
electrochromic film 215, has been controlled so as to reduce
transmittance of light from light source 252 based at least in part
on the determined location of light source 252. The degree of
modulation may be given a predetermined transmissivity, such as
40%. The degree of modulation may also vary based on an intensity
and location of a light source (e.g., 252).
[0039] In the embodiment of FIG. 2, the displayed image may be
content for augmented reality (AR) or mixed reality (MR). The
content is generated programmatically (under program control, not
simple video), and therefore in one embodiment, the image outline
may be calculated predictively to reduce the number of calculations
required.
[0040] FIG. 3 illustrates an exemplary near-eye wearable display 300
according to an alternate embodiment. In one embodiment, wearable
display 300 comprises see-through optical glasses. As shown in FIG.
3, a wearable display 300 may include ambient light sensors 305a
and 305b, eye tracking sensors 310a and 310b, and head position
sensor 320. Although the embodiment of FIG. 3 shows two ambient
light sensors, two eye tracking sensors, and one head position
sensor 320, any suitable number of these sensors may be used in
other embodiments.
[0041] With respect to ambient light sensors 305a and 305b, these
sensors may be similar to environment monitoring cameras 240 and
may provide information for wearable display 300 to determine a
high-intensity point or diffuse light sources (e.g., 252). In one
embodiment, ambient light sensors 305a and 305b are configured to
sense only ambient light having a predetermined intensity. The
predetermined intensity may be set such that the ambient light
sensors 305a and 305b sense sunlight. The predetermined intensity
may also be set to a user-defined brightness.
[0042] With respect to eye tracking sensors 310a and 310b and head
position sensor 320, these sensors provide both eye and head
tracking capabilities and may be able to obtain information
regarding a viewer's eye gaze direction, interpupillary distance
(IPD) and head orientation. For example, the optical see-through
glasses 300 may comprise at least one eye-tracking sensor per eye
(e.g., 310a, 310b) to detect multiple parameters of the viewer's
eyes including but not limited to the angular position (or look
angle) of each eye, the iris diameter, and the distance between the
two pupils (IPD). As one example, the eye-tracking sensors 310a,
310b may be a pair of miniature cameras each positioned to image
one eye. In one embodiment, the eye-tracking sensors 310a, 310b may
be placed in a non-obstructive position relative to the eyes' field of
view (FOV) such as shown in FIG. 3. In addition, the eye-tracking
sensors 310a, 310b may be placed on the bridge section of the frame
of the glasses. The eye tracking components 310a, 310b and the head
tracking component 320 may be configured to detect, track and
predict where the viewer's head is positioned and where the viewer
is focused in depth and direction.
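
For illustration only, under a simplified symmetric-vergence model (an assumption made here for clarity, not the disclosure's prescribed method), the depth of the viewer's focus can be estimated from the IPD and the eyes' inward gaze angles:

# Illustrative sketch: estimate the depth of the viewer's focus from vergence
# geometry, assuming a simplified symmetric model in which both eyes rotate
# inward by the same angle toward a fixation point straight ahead.
import math

def focus_depth(ipd_m, convergence_angle_rad):
    # Each eye sits ipd_m / 2 from the midline, so
    # tan(angle) = (ipd_m / 2) / depth.
    if convergence_angle_rad <= 0.0:
        return math.inf                 # parallel gaze: focus at infinity
    return (ipd_m / 2.0) / math.tan(convergence_angle_rad)

# Example: a 64 mm IPD with each eye rotated about 1.8 degrees inward places
# the fixation point at roughly 1 meter.
print(focus_depth(0.064, math.radians(1.8)))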
[0043] Wearable display 300 may also include or be in communication
with components similar to those illustrated in FIGS. 1 and 2, such
as a lens (e.g., glass or polymer see-through patterned optics
similar to lens 110 or 210), a film (e.g., similar to film 115 or
215), an electrochromic layer control coprocessor (e.g., similar to
electrochromic layer control coprocessor 135 or 235), a GPU (e.g.,
similar to GPU 130 or 230), a video coprocessor (e.g., similar to
video coprocessor 132 or 232), and an imaging unit (e.g., similar
to imaging unit 120 or 220).
[0044] The components of wearable display 300 may be interconnected
together via a wireless local area network (W-LAN) or wireless
personal area network (W-PAN) and may also be connected to the
internet to enable streaming of image content to be displayed.
[0045] In one embodiment, the active transparency modulation film
may be controlled by depth information received and calculated from
head and eye tracking sensors (320, 310a, 310b), where the wearable
display 300 is part of an augmented or mixed reality (AR or MR)
system. In this embodiment, the interpupillary distance (IPD) of
the viewer's eyes that is detected by the eye tracking sensors
(310a, 310b) is used to calculate the depth of the object of
interest that the viewer is focused on. The boundaries of the
object of interest are then calculated and used to control the
transparency of the film.
[0046] Accordingly, components of wearable display 300 (e.g.,
computation processing elements) may be configured such that the
outline of the content is used to create predetermined regions on
the lens of the see-through optics wearable display 300 that appear
less transparent to the viewer. In this manner, fewer lumens are
required to convey solidity and image brightness, and an amount of
required power is reduced. Such capabilities are particularly
advantageous for application in automotive head-up-display (HUD)
technologies and provide greatly improved contrast in high-ambient
brightness environments. In addition, the components of wearable
display 300 may be configured such that the light engine and the
display (imager) project light only to the appropriate regions
with the afore-described reduced transparency to enhance the
`solidness` of the AR object using the active transparency
modulation film of the lens. It is therefore possible to alleviate
the problem of "ghosted" images as is often found in conventional
AR devices.
[0047] With respect to wearable displays, U.S. patent application
Ser. No. 15/294,447 filed Oct. 14, 2016, U.S. patent application
Ser. No. 15/381,459 filed Dec. 16, 2016, U.S. patent application
Ser. No. 15/391,583 filed Dec. 27, 2016, U.S. patent application
Ser. No. 15/477,712 filed Apr. 3, 2017, and U.S. patent application
Ser. No. 15/499,603 filed Apr. 27, 2017, the contents of each of
which are incorporated herein by reference, discuss various
wearable displays suitable for use in the embodiments disclosed
herein.
[0048] FIG. 4 illustrates one example for explaining a ghosted
image 450. As shown in FIG. 4, ghosted image 450 is perceived by a
viewer to be a see-through image in which elements of the
background scene may be viewed through the ghosted image 450.
[0049] Turning to FIG. 5, a silhouetted image 550 (illustrated in
FIG. 5 as a silhouette of a person) may be an augmented reality image
for which the outline (silhouette) of the image is defined but
other details of the image are not defined. In particular,
augmented reality near-eye glasses 500 may include components
similar to those of viewing system 100, 200 or wearable display 300
including lenses having an active transparency modulation film
(e.g., film 115, 215). The silhouetted image 550 may be selectively
defined on the active transparency modulation film as viewed
through the pair of augmented reality near-eye glasses 500.
[0050] The image projected in the embodiment of FIG. 5 may be a
still image or a moving image. Even a still image, however, will move
relative to the see-through optical lens or waveguide element with
the viewer's head movement (which may be sensed by one or more head
tracking sensors as shown in FIG. 3).
[0051] It should also be noted that in any of the embodiments
described herein, an image may be a black and white image or a
color image.
[0052] FIG. 6 illustrates an example for explaining an "unghosted"
image 650 (illustrated in FIG. 6 as a person) comprising an augmented
reality image projected upon and superimposed over an opaque
silhouetted image, such as silhouetted image 550 of FIG. 5, as viewed
through a pair of augmented reality near-eye glasses 600. Augmented
reality near-eye glasses 600 may include components similar to those
of viewing system 100, 200 or wearable display 300, including lenses
having an active transparency modulation film (e.g., film 115, 215).
Augmented reality wearable
display 600 may allow a viewer to view unghosted image 650 as well
as the real world. As shown in FIG. 6, unghosted image 650 includes
details in addition to the outline of the image, and the impression
of solidity for unghosted image 650 is also increased, especially
as compared to ghosted image 450.
[0053] FIG. 7 illustrates an example for explaining an unghosted
image 750 (illustrated in FIG. 7 as a person) comprising a virtual
reality image projected upon the entire viewing area 720 of an
active transparency/reflectance film of a lens, the viewing area
720 being opaque such that a viewer may not view the real world, as
viewed through a pair of virtual reality near-eye glasses 700. The
near-eye glasses 700 may include components other than the lens and
film, similar to those of viewing system 100, 200 or wearable
display 300.
[0054] FIG. 8 illustrates an example for explaining a viewing
system including an active transparency modulation film used to
control the viewability of an automotive head-up display (HUD)
according to one embodiment. In this embodiment, the active
transparency modulation film is applied to the inside of the
automobile windshield 820 to display an augmented reality image
850. In addition, the HUD may display content 855 on a display
screen, or may alternatively present content 855 as augmented
reality content.
[0055] By virtue of incorporating the active transparency
modulation film, it is possible to avoid ghosting of the
information presented by the HUD, and to reduce the illumination
perceived by the driver from particularly bright spots, both of
which make the information presented by the HUD more readable
without obstructing, and indeed while improving, the driver's vision
of the road ahead.
[0056] Turning to FIGS. 9A-9E, an active transparency modulation
film may be incorporated into transparent display cases (e.g.,
900a) holding objects (e.g., 910a), thereby creating enclosures
that allow a change of appearance of the objects contained inside
the display cases. In FIGS. 9A-9E, each of the display cases
900a-900e includes an active transparency modulation film that may be
comprised of PDLC.
[0057] Thus, as shown in FIG. 9A, a transparent display 900a
displays object 910a. In one embodiment, display 900a may be
cylindrical. In FIG. 9B, the active transparency modulation film on
the display 900b is controlled to create mask 910b for the object,
whereby the mask defines pixels for which light is blocked and the
object appears opaque. In FIGS. 9C-9E, for displays 900c, 900d,
900e, different light textures are projected onto the masked object
of FIG. 9B. The light textures are projected by imaging devices or
projectors (not shown) to create different skins 910c, 910d, 910e
for the objects.
[0058] FIG. 11 illustrates a flow diagram for explaining an example
method for active transparency modulation according to an
embodiment herein, and particularly how bright areas may be masked
in a lens incorporating an active transparency modulation film. It
should be understood that in embodiments involving a mask, masked
areas may be given a predetermined transmissivity, such as 40%.
[0059] In this regard, the following embodiments may be described
as a process 1100, which is usually depicted as a flowchart, a flow
diagram, a structure diagram, or a block diagram. Although a
flowchart may describe the operations as a sequential process, many
of the operations can be performed in parallel or concurrently. In
addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed. A process
corresponds to a method, procedure, etc.
[0060] In some embodiments discussed herein, GPU depth buffers are
used for direct control of an active transparency modulation film.
However, in situations where GPU depth buffer data is unavailable,
process 1100 may be used to calculate an outline of content to be
displayed.
[0061] At block 1101, the image content to be displayed is loaded
in a memory, such as SRAM, accessible by one or more processors
(e.g., GPU, video coprocessor, electrochromic layer control
coprocessor). In addition, the mask image containing the states of
the pixels of the active transparency modulation layer is first
initialized at zero (e.g., the `off` position). At block 1102,
color channels are discarded. For example, the image may be
transformed into a binary, black and white image based on entropy,
cluster and statistical analysis. At block 1103, the application
processor then separates the bright field (e.g., foreground)
component of the image from the dark field component (e.g.,
background, which is black in one embodiment). In other
embodiments, the order of execution of blocks 1102 and 1103 is
switched, such that the bright field component of the image is
first separated from the dark field component and then the bright
field component is transformed into a binary, black and white image
based on entropy, cluster and statistical analysis. The morphology
of the black and white images is then analyzed to detect separated
or disconnected objects in the image. Pixels of the active
transparency modulation film associated with each separated object
are then grouped and labeled. At block 1104, each individual pixel
group may then be used to calculate the alpha value associated with
that group and the pixels that represent a convex hull of the
group. After these parameters are calculated, the corresponding
pixels on the active transparency modulation film are modulated
accordingly by the electrochromic layer control coprocessor at
block 1105.
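
The flow of blocks 1101-1105 may be sketched as follows, purely as one illustrative reading of the process. Otsu's threshold is used here as a stand-in for the entropy, cluster and statistical binarization, a fixed alpha is assumed, and modulate_film_pixels is a hypothetical driver interface.

# Illustrative sketch of process 1100 (blocks 1101-1105). Otsu's threshold
# stands in for the "entropy, cluster and statistical" binarization;
# modulate_film_pixels is a hypothetical driver interface.
import cv2
import numpy as np

def process_1100(image_bgr, modulate_film_pixels):
    # Block 1101: the mask starts all zero (every film pixel 'off').
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)

    # Block 1102: discard color channels and binarize to black and white.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Block 1103: separate bright-field (foreground) objects from the dark
    # background and label each disconnected object.
    n_labels, labels = cv2.connectedComponents(bw)

    # Block 1104: for each labeled pixel group, compute the pixels that
    # represent its convex hull and fill that hull in the mask.
    for lbl in range(1, n_labels):               # label 0 is the background
        pts = np.column_stack(np.nonzero(labels == lbl))[:, ::-1]   # (x, y)
        hull = cv2.convexHull(pts.astype(np.int32))
        cv2.fillConvexPoly(mask, hull, 255)

    # Block 1105: modulate the corresponding film pixels.
    modulate_film_pixels(mask)
    return mask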
[0062] With respect to convex hulls, generally, a convex hull or
convex envelope may be considered a set of all convex combinations
of points in a set of points. The convex hull may be generated from
a set of edges in an image. For example, only the outlying pixels
of desired opaque areas may be determined, and the opaque
silhouetted image is taken as a convex hull of the image. When the
set of points is a bounded subset of a Euclidean plane, the convex
hull may be visualized as the shape enclosed by a rubber band
stretched around the set of points. Examples of convex hulls are
illustrated in FIGS. 10A-D. In particular, object 1010 has convex
hull 1020 in FIG. 10A. In FIG. 10B, the outlying points of the
object and the convex hull have the same silhouette (outline) 1030.
In FIG. 10C, one corner of the object from FIG. 10B has been
folded along the dashed line, and therefore object 1045 has convex
hull 1040. In FIG. 10D, object 1055 has convex hull 1050.
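
For concreteness, a small self-contained convex hull routine is shown below (Andrew's monotone chain, a standard algorithm; its use here is illustrative only and is not mandated by the disclosure):

# Illustrative sketch: Andrew's monotone chain convex hull. Given 2D points,
# returns the hull vertices in counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the two chains, dropping each chain's duplicated endpoint.
    return lower[:-1] + upper[:-1]

# Example: the hull of a square plus an interior point omits the interior
# point: [(0, 0), (2, 0), (2, 2), (0, 2)].
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))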
[0063] Although the embodiments of FIGS. 10A-10D and 11 specifically
rely on convex hulls to segment an image and calculate an image
outline, in other embodiments, any image segmentation technique may
be used. In this regard, convex hull based segmentation may be
considered one example of an image segmentation technique. Other
examples may include thresholding, clustering, and using
statistical properties of display bitmap pixels. Edge detection
algorithms may also be used to calculate the image outline or
boundary. As discussed herein, the outline generated by the image
segmentation technique allows the system to "blank out" the
silhouette of the displayed image (e.g., as illustrated in FIG.
5).
[0064] Thus, the present disclosure has a number of aspects, which
aspects may be practiced alone or in various combinations or
sub-combinations, as desired. While certain preferred embodiments
have been disclosed and described herein for purposes of
illustration and not for purposes of limitation, it will be
understood by those skilled in the art that various changes in form
and detail may be made therein without departing from the spirit
and scope of the disclosure. Therefore, it must be understood that
the illustrated embodiments have been set forth only for the
purposes of example and should not be taken as limiting the
disclosure as defined by any claims in any subsequent application
claiming priority to this application.
[0065] For example, notwithstanding the fact that the elements of
such a claim may be set forth in a certain combination, it must be
expressly understood that the disclosure includes other
combinations of fewer, more or different elements. Therefore,
although elements may be described above as acting in certain
combinations and even subsequently claimed as such, it is to be
expressly understood that one or more elements from a claimed
combination can in some cases be excised from the combination and
that such claimed combination may be directed to a subcombination
or variation of a subcombination.
[0066] The words used in this specification to describe the
disclosure and its various embodiments are to be understood not
only in the sense of their commonly defined meanings, but to
include by special definition in this specification structure,
material or acts beyond the scope of the commonly defined meanings.
Thus, if an element can be understood in the context of this
specification as including more than one meaning, then its use in a
subsequent claim must be understood as being generic to all
possible meanings supported by the specification and by the word
itself.
[0067] The definitions of the words or elements of any claims in
any subsequent application claiming priority to this application
should be, therefore, defined to include not only the combination
of elements which are literally set forth, but all equivalent
structure, material or acts for performing substantially the same
function in substantially the same way to obtain substantially the
same result. In this sense, it is therefore contemplated that an
equivalent substitution of two or more elements may be made for any
one of the elements in such claims below or that a single element
may be substituted for two or more elements in such a claim.
* * * * *