U.S. patent application number 14/040,749 was filed with the patent office on 2013-09-30 for man machine interface for a 3D display system, and was published on 2014-01-30. This patent application is currently assigned to Real View Imaging Ltd. Invention is credited to Aviad KAUFMAN and Carmel ROTSCHILD.
United States Patent Application 20140033052
Kind Code: A1
KAUFMAN, Aviad; et al.
Publication Date: January 30, 2014
Family ID: 41120112
MAN MACHINE INTERFACE FOR A 3D DISPLAY SYSTEM
Abstract
Disclosed are methods and systems for displaying images, and for
implementing volumetric user interfaces. One exemplary embodiment
provides a system comprising: a light source; an image producing
unit, which produces an image upon interaction with light
approaching the image producing unit from the light source; an
eyepiece; and a mirror, directing light from the image to a surface
of the eyepiece, wherein the surface has a shape of a solid of
revolution formed by revolving a planar curve at least 180°
around an axis of revolution.
Inventors: KAUFMAN, Aviad (Zikhron-Yaakov, IL); ROTSCHILD, Carmel (Ganei-Tikva, IL)
Assignee: Real View Imaging Ltd. (Yokneam, IL)
Family ID: 41120112
Appl. No.: 14/040,749
Filed: September 30, 2013
Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
13/936,262            Jul 8, 2013     --          (parent of 14/040,749)
13/003,309            Jan 10, 2011    8,500,284   (parent of 13/936,262; national phase of PCT/IL2009/000686, filed Jul 9, 2009)
61/129,665            Jul 10, 2008    --          (provisional)
Current U.S. Class: 715/733; 715/852
Current CPC Class: G03H 2001/2231 20130101; G03H 2001/266 20130101; H04N 13/388 20180501; G06F 3/0346 20130101; G03H 2001/2242 20130101; G03H 2225/60 20130101; G06F 3/011 20130101; G03H 2001/2236 20130101; G03H 2001/2271 20130101; G03H 2001/0038 20130101; G06F 3/0304 20130101; G02B 30/56 20200101; G03H 2001/0061 20130101; G03H 2001/0426 20130101; G03H 2222/20 20130101; G03H 2226/02 20130101; G03H 1/2294 20130101; G03H 2001/2239 20130101; G03H 1/0248 20130101; G03H 2001/0055 20130101; G03H 2210/30 20130101; G03H 2001/2207 20130101; G06F 3/017 20130101; H04N 13/368 20180501; G02B 27/0093 20130101; G03H 2001/303 20130101; G03H 2225/61 20130101; G03H 2222/52 20130101; G03H 2223/24 20130101; G03H 1/0866 20130101; G03H 2001/2297 20130101; G03H 2001/262 20130101; G03H 2226/05 20130101; G03H 1/2205 20130101; G03H 2222/33 20130101; G03H 2001/2284 20130101; G03H 1/08 20130101; G03H 2227/03 20130101; G03H 1/0005 20130101; G03H 2001/2655 20130101; G03H 2210/36 20130101; G03H 2222/22 20130101; G03H 2001/221 20130101; G03H 2210/20 20130101; H04N 13/383 20180501; G03H 1/265 20130101; G03H 2001/2292 20130101; G03H 2240/61 20130101; G06F 3/04815 20130101; H04N 13/366 20180501; G03H 2210/32 20130101; G02B 30/50 20200101; G03H 2227/02 20130101; G03H 2001/2223 20130101; G06F 3/013 20130101; G03H 1/2286 20130101; G03H 2226/04 20130101
Class at Publication: 715/733; 715/852
International Class: G06F 3/0481 20060101 G06F003/0481
Claims
1. A method for implementing a floating-in-the-air user interface,
comprising: displaying a first image in a display space of a first
floating-in-the-air display; inserting a real object into the
display space of the first floating-in-the-air display; locating a
location of the real object within the display space of the first
floating-in-the-air display; and providing the location as input to
the floating-in-the-air user interface.
2. The method of claim 1 in which said first image is produced by
computer generated holography.
3. The method of claim 1 wherein the real object is a finger.
4. The method of claim 1 and further comprising: displaying an
actuator in the first image; moving the location of the real object
to the actuator; and interpreting the location input as the real
object actuating the actuator.
5. The method of claim 1 and further comprising: moving the
location of the real object; tracking the location of the real
object over time; and interpreting the location input as the real
object manipulating at least a part of the first image.
6. The method of claim 1, wherein the real object further comprises
a plurality of real objects, and using the locations of each of the
real objects as location inputs for the floating-in-the-air user
interface.
7. The method of claim 1, wherein the location comprises two
locations within the display space, and the two locations determine
an axis of rotation in display space.
8. The method of claim 1, wherein the location further comprises a
plurality of locations based, at least in part, on a plurality of
different locations of the point on the real object at different
times.
9. The method of claim 8, wherein a path connecting the plurality
of locations is displayed by the first floating-in-the-air
display.
10. The method of claim 8, and further comprising implementing,
based, at least in part, on the plurality of locations, at least
one function of the group of functions: zooming in on the first
image; zooming out of the first image; cropping the first image;
rotating the first image; slicing the first image; measuring a
length within the first image; measuring an area within the first
image; and measuring a volume within the first image.
11. The method of claim 1 wherein the floating-in-the-air user
interface is used to implement a Computer Aided Design (CAD) user
interface.
12. The method of claim 1 in which said input further comprises at
least one additional input selected from a group consisting of: a
voice command; a mouse click; a keyboard input; and a button
press.
13. The method of claim 1, and further comprising marking the point
so as to substantially contrast with a rest of the real object.
14. The method of claim 1, wherein the location comprises a line
defined by a long axis of the real object.
15. The method of claim 1 and further comprising: a second
floating-in-the-air display displaying a second image at
substantially the same time as the first floating-in-the-air
display displays the first image.
16. The method of claim 15 wherein the first floating-in-the-air
display displays an identical display as the second
floating-in-the-air display.
17. The method of claim 15 wherein the first floating-in-the-air
display and the second floating-in-the-air display are used to
implement a remote medicine interaction between a first user of the
first floating-in-the-air display and a second user of the second
floating-in-the-air display.
18. The method of claim 15 wherein the first floating-in-the-air
display and the second floating-in-the-air display are used to
implement a whiteboard-like collaborative sharing between the first
display and the second display.
19. The method of claim 15 wherein the first floating-in-the-air
display and the second floating-in-the-air display are used to
implement a game in which a first user of the first
floating-in-the-air display and a second user of the second
floating-in-the-air display participate.
20. A user interface comprising: a first floating-in-the-air
display; and a first input unit adapted to accept input from a
first location within a first display space, the first display
space being a volume within which objects are displayed by the
first floating-in-the-air display.
21. The user interface of claim 20, wherein the floating-in-the-air
display is a volumetric display.
22. The user interface of claim 20, wherein the floating-in-the-air
display is a two-dimensional floating-in-the-air display.
23. The user interface of claim 20, wherein the first
floating-in-the-air display is adapted to display the first
location.
24. The user interface of claim 20 wherein the first
floating-in-the-air display is adapted to display an image within
touching reach of a viewer.
25. The user interface of claim 23, further comprising a second
floating-in-the-air display, wherein the second floating-in-the-air
display displays a scene at least partially identical to a scene
displayed by the first floating-in-the-air display, including
displaying the first location.
26. The user interface of claim 25 wherein the first
floating-in-the-air display and the second floating-in-the-air
display, are connected by a communication channel between the first
floating-in-the-air display and the second floating-in-the-air
display.
27. The user interface of claim 23, further comprising a second
floating-in-the-air display, wherein the second floating-in-the-air
display displays a scene different from a scene displayed by the
first floating-in-the-air display.
28. The user interface of claim 25, and further comprising a second
input unit adapted to accept input from a second location within a
second display space, the second display space being a volume
within which objects displayed by the second floating-in-the-air
display appear.
29. The user interface of claim 20, wherein the first
floating-in-the-air display is adapted to provide sensory feedback
based, at least in part, on the location and on what is being
displayed in the location.
30. A method for implementing a three dimensional (3D) user
interface, comprising: locating a first direction for viewing a
first 3D scene; producing a first 3D image of the first 3D scene;
displaying the first 3D image of the first 3D scene to the first
direction; locating a second direction, different from the first
direction, for viewing the 3D scene; producing a second 3D image of
the first 3D scene; and displaying the second 3D image of the first
3D scene to the second direction; wherein the second 3D image is
produced so the first 3D scene appears from the second direction at
same coordinates in space and same direction in space as the first
3D scene appears from the first direction.
31. The method of claim 30 in which: a viewer views with two eyes;
the first direction comprises a direction of a first eye of the
viewer; and the second direction comprises a direction of a second
eye of the viewer, the displaying the first 3D image comprises
displaying the first 3D image to the first eye; and the displaying
the second 3D image comprises displaying the second 3D image to the
second eye.
32. The method of claim 30 in which: the 3D scene is viewed by two
viewers; the first direction comprises a direction of a first
viewer; and the second direction comprises a direction of a second
viewer, the displaying the first 3D image comprises displaying the
first 3D image to the first viewer; and the displaying the second
3D image comprises displaying the second 3D image to the second
viewer wherein the second 3D image is produced so the first 3D
scene appears to the second viewer viewing from the second
direction at same coordinates in space and same direction in space
as the first 3D scene appears to the first viewer viewing from the
first direction.
Description
RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 13/936,262 filed on Jul. 8, 2013, which is a
continuation of U.S. patent application Ser. No. 13/003,309 filed
on Jan. 10, 2011, now U.S. Pat. No. 8,500,284, which is a National
Phase of PCT Patent Application No. PCT/IL2009/000686 filed on Jul.
9, 2009, which claims the benefit of priority under 35 U.S.C.
119(e) of U.S. Provisional Patent Application No. 61/129,665 filed
on Jul. 10, 2008. The contents of all of the above applications are
incorporated herein by reference.
FIELD AND BACKGROUND OF THE INVENTION
[0002] The present invention, in some embodiments thereof, relates
to methods and devices for displaying images and, more
particularly, but not exclusively, to such methods and devices that
allow seeing images from a broad viewing angle, for example, from
360° around the image.
[0003] The present invention, in some embodiments thereof, relates
to a computerized user interface system, and, more particularly,
but not exclusively, to a user interface system including a
floating-in-the-air display.
[0004] US Patent Application Publication No. 2006-0171008 describes
a three-dimensional (3D) display system. The 3D display system
includes a projector device for projecting an image upon a display
medium to form a 3D image. The 3D image is formed such that a
viewer can view the image from multiple angles up to 360 degrees.
Multiple display media are described, namely a spinning diffusive
screen, a circular diffuser screen, and an aerogel. The spinning
diffusive screen utilizes spatial light modulators to control the
image such that the 3D image is displayed on the rotating screen in
a time-multiplexing manner. The circular diffuser screen includes
multiple, simultaneously-operated projectors to project the image
onto the circular diffuser screen from a plurality of locations,
thereby forming the 3D image. The aerogel can use the projection
device described as applicable to either the spinning diffusive
screen or the circular diffuser screen. While this publication
sometimes calls the 3-D image a hologram, in fact, the display
media taught thereby generate non-holographic 3-D images.
[0005] Some computer generated three dimensional displays are
known. Some use micro-lenses on flat screens. Some include computer
generated holograms which can be viewed from a relatively narrow
angle.
[0006] A class of three-dimensional (3D) displays, called
volumetric displays, is currently undergoing rapid advancement. The
types of displays in this class include swept volume displays and
static volume displays. Volumetric displays allow for
three-dimensional (3D) graphical scenes to be displayed within a
true 3D volume. That is, a volumetric display is not a projection
of volumetric data onto a 2D display but a true physical 3D
volume.
[0007] Some user interfaces display the location of a user input
indicator in a first space, the display space, translating the
location of the user input from a second space, the input space. One
example of such a user interface is a mouse, where an indicator on
a computer screen moves corresponding to motions of the mouse, in
which the mouse moves on a desk, in one coordinate system, and the
indicator moves on the screen, in a second coordinate system.
[0008] U.S. Pat. No. 7,528,823 to Balakrishnan et al. describes a
system that creates a volumetric display and a user controllable
volumetric pointer within the volumetric display. The user can
point by aiming a beam which is vector, planar or tangent based,
positioning a device in three-dimensions in association with the
display, touching a digitizing surface of the display enclosure or
otherwise inputting position coordinates. The cursor can take a
number of different forms including a ray, a point, a volume and a
plane. The ray can include a ring, a bead, a segmented wand, a cone
and a cylinder. The user designates an input position and the
system maps the input position to a 3D cursor position within the
volumetric display. The system also determines whether any object
has been designated by the cursor by determining whether the object
is within a region of influence of the cursor. The system also
performs any function activated in association with the
designation.
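As a rough illustration of the designation step described above (a sketch only; the function names and the spherical region of influence are hypothetical choices, not taken from the cited patent):

    import math

    def within_region_of_influence(cursor_pos, obj_pos, radius):
        # Region of influence modeled, for illustration, as a sphere around the cursor.
        dx, dy, dz = (c - o for c, o in zip(cursor_pos, obj_pos))
        return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

    def designate(cursor_pos, objects, radius=0.05):
        # Map an input-derived 3D cursor position to the set of designated objects.
        return [name for name, pos in objects.items()
                if within_region_of_influence(cursor_pos, pos, radius)]

    # Example: a cursor near the first object designates it but not the second.
    print(designate((0.01, 0.0, 0.02), {"a": (0.0, 0.0, 0.0), "b": (0.3, 0.1, 0.0)}))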
[0009] White-boarding is a term used to describe the placement of
shared files on an on-screen "shared notebook" or "whiteboard".
Videoconferencing and data conferencing software often includes
tools that let the user mark up the electronic whiteboard much as
one would with a traditional wall-mounted board. A general property
of this type of software is to allow more than one person to work
on the image at any one time, with the two versions being kept in
sync with one another in near real time.
[0010] Haptic feedback, often referred to as simply "haptics", is
the use of the sense of touch in a user interface design to provide
information to an end user. When referring to mobile phones and
similar devices, this generally means the use of vibrations from
the device's vibration alarm to denote that a touch-screen button
has been pressed. In this particular example, the phone would
vibrate slightly in response to the user's activation of an
on-screen control, making up for the lack of a normal tactile
response that the user would experience when pressing a physical
button. The resistive force that some "force feedback" joysticks
and video game steering wheels provide is another form of haptic
feedback.
[0011] Background art includes: [0012] U.S. Pat. No. 6,377,238 to
McPheters; [0013] U.S. Pat. No. 7,054,045 to McPheters; [0014] U.S.
Pat. No. 7,528,823 to Balakrishnan et al.; [0015] US Published
Patent Application No. 2006/0171008 of Mintz et al.; and
[0016] An article titled "Overview of three-dimensional shape
measurement using optical methods", by Chen F., Brown G. M., Song
M., published in Opt. Eng. 39(1) 10-22 (January 2000).
SUMMARY OF THE INVENTION
[0017] The present invention, in some embodiments thereof, relates
to displaying holograms to many viewers, such that each viewer sees
the hologram at exactly the same place, and if one viewer touches a
certain portion of the hologram, all the other viewers see, each
from his own viewing angle, the image touched at the same place.
[0018] The present invention, in some embodiments thereof, relates
to projecting paraxial images 360° around.
[0019] Thus, in accordance with an exemplary embodiment of the
invention, there is provided a method of displaying content to
multiple viewers, the method comprising:
[0020] forming multiple volumetric images, each of at least a
portion of the content and each viewable from its own viewability
space; and
[0021] overlapping a portion of one or more of said viewability
spaces with a pupil of each of the viewers.
[0022] In some embodiments, the content is a single scene; and each
of said volumetric images is of a face, whether solid or partly
transparent, of said single scene as viewable from a different
viewing point.
[0023] Optionally, a plurality of the volumetric images overlap or
abut in space.
[0024] Optionally, all the volumetric images overlap in space.
[0025] Optionally, the overlap between volumetric images is
full.
[0026] Optionally, the volumetric images fully overlap in
space.
Optionally, volumetric images are considered to overlap if image
points of one image overlap or are spaced between image points of
the other. Similarly, full overlap may be defined as the state when all
the image points of one image overlap or are between image points
of the other image. Optionally, each point in space identified by a
viewer as being a part of the image is an image point.
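A minimal sketch of this overlap test, under the assumption that "between image points of the other" can be approximated by the other image's axis-aligned bounding box (an illustrative simplification, not a definition from the text):

    def overlaps(points_a, points_b, tol=1e-6):
        # Two point sets "overlap" here if some point of one coincides with, or
        # lies between, points of the other; "between" is approximated by the
        # other set's axis-aligned bounding box.
        def in_bbox(p, pts):
            return all(min(q[i] for q in pts) - tol <= p[i] <= max(q[i] for q in pts) + tol
                       for i in range(3))
        return (any(in_bbox(p, points_b) for p in points_a)
                or any(in_bbox(p, points_a) for p in points_b))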
[0027] In some embodiments of the invention, the viewers are at
different azimuthal angles around the space occupied by one of the
volumetric images.
[0028] Optionally, the different azimuthal angles span an entire
circle, half a circle or quarter of a circle.
[0029] In some embodiments, two of the viewers are distanced from
each other at least 1 meter.
[0030] In some embodiments, the viewers see the images at the same
time.
[0031] Optionally, a viewability space overlaps with an eye of a
viewer only for a sequence of short periods, and said short periods
are timely spaced such that said viewer sees a continuous view.
[0032] The present invention, in some embodiments thereof, relates
to a user interface including a floating-in-the-air display.
[0033] The term floating-in-the-air display is used herein for a
substrate-less display. The floating-in-the-air display is
optically produced and does not require a substrate; it may
therefore appear to be floating in the air, floating in water, or
floating in a solid.
[0034] In some embodiments, the user interface allows a user to
reach into the display space up to and/or into a displayed object
and/or scene. The "reaching into" provides natural hand-eye
coordination for the user interacting with the user interface. For
example, the user is enabled to "touch" a displayed object, and the
user, and optionally other viewers, see the "touching".
[0035] In some prior art, the user manipulates an input device,
such as a mouse, in one space, and views a result of the
manipulation in another space, the space of the display. The user
interface of the invention, by contrast, enables manipulating an
input device and viewing the result of the manipulation in the same space.
[0036] In some embodiments, the user provides input to the user
interface, and the user interface effects some change in the
display, be it marking a part of the display, or causing a greater
change, such as cutting apart, uncovering a layer, and so on. The
user appears to have effected the change on the object itself,
since the user reached into the object and appeared to have
manipulated the object.
[0037] In some embodiments, sensory feedback is provided when a
user appears to touch an object being displayed. Since the
displayed object is floating-in-the-air and provides no resistance
to touching, the user may optionally use a device for pointing,
which the user interface optionally causes to provide sensory
feedback when the user "touches" the displayed object.
[0038] One example method for optionally providing sensory feedback
to the user when "touching" an object in the display includes
evoking an artificial touch sensation, as known in the art of
artificial reality, for example, by the user wearing a vibrating
ring or glove. Another example method is by projecting a beam, such
as infrared heat, at the user's hand and/or finger such that the
hand and/or finger is heated. Yet another example method includes
projecting a beam of sound, for example, ultrasound, modulated to
cause sensation.
[0039] Yet another example method for providing sensory feedback to
the user includes visually marking the point of touch, for example,
by highlighting the point of touch. It is noted that the user
interface digitally defines the image displayed, therefore the user
interface may optionally cause a location in the displayed image to
be highlighted, to blink, to change hue, and so on.
[0040] Yet another example method for providing sensory feedback to
the user is by audible feedback, such as sounding a "knock" when a
pointer "touches" an object, and/or selecting from a variety of
sounds for feedback, optionally dependent on which object is
"touched".
[0041] The sensory feedback is to be considered as an option in
suitable locations herein where "touch", "grab", and other such
manipulative terms are used in describing a user interaction.
[0042] Where the term "sensory feedback" is used herein, the term
is intended to mean any of the methods listed above, as well as
other methods of providing feedback to a user.
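A minimal sketch of how such feedback might be fanned out to several of the example channels above; every class, method, and object name here is hypothetical:

    class HighlightFeedback:
        # Visual feedback: mark the touched point in the displayed image.
        def __init__(self, display):
            self.display = display
        def notify(self, touched_object, touch_point):
            self.display.highlight(touched_object, touch_point)

    class AudioFeedback:
        # Audible feedback: play a per-object sound, e.g. a "knock".
        def __init__(self, sounds):
            self.sounds = sounds
        def notify(self, touched_object, touch_point):
            self.sounds.get(touched_object, self.sounds["default"]).play()

    def on_touch(event, channels):
        # Fan one touch event out to every configured feedback channel.
        for channel in channels:
            channel.notify(event.object_id, event.location)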
[0043] Some non-limiting examples of forms of command to the user
interface system described herein include: actuating actuators on
tools which are also used for interface within the display space,
such as pressing a button on a pointer which is also used to reach
into the display space; and voice commands.
[0044] In some embodiments, two or more user interfaces, in
different locations, display the same objects and/or scene. A user
in one location interacts with the user interface at that location,
and all users see the interaction. Optionally, a user in another
location interacts with the user interface, optionally at the same
time, and all users see both interactions. This enables the
above-mentioned natural hand-eye coordinated interaction between
remote locations, with many example uses. Some non-limiting
examples include remote medical practice; remote teaching; remote
robotic manipulation; arcade gaming; and interactive gaming. The
distance by which one location may be remote from another includes
being in another room; in another building; across town; across a
country; across an ocean; being two meters away; a hundred meters
or more away; a kilometer or more away; and hundreds or thousands
of kilometers away.
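A minimal sketch of the communication step implied above, assuming (hypothetically) a stream socket and a JSON wire format for interaction events; neither is prescribed by the text:

    import json

    def _recv_exact(sock, n):
        # Read exactly n bytes, or fail if the remote display disconnects.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("remote display disconnected")
            buf += chunk
        return buf

    def broadcast_interaction(sock, location, action):
        # Send one user's in-display interaction (a location plus an action
        # such as "touch" or "mark") so the remote display renders the same event.
        msg = json.dumps({"location": location, "action": action}).encode()
        sock.sendall(len(msg).to_bytes(4, "big") + msg)

    def receive_interaction(sock):
        # Blocking receive of one interaction event from the remote display.
        size = int.from_bytes(_recv_exact(sock, 4), "big")
        return json.loads(_recv_exact(sock, size).decode())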
[0045] In some applications of the user interface, the
floating-in-the-air display utilizes embodiments of the volumetric
display described herein. In other applications of the user
interface, other volumetric displays are optionally used, provided
their properties support the specific application.
[0046] There is also provided by an exemplary embodiment of the
invention a system for displaying content to multiple viewers, the
system comprising:
[0047] a unit for generating volumetric images, each of at least a
portion of the content and each viewable from its own viewability
space; and
[0048] an optical system, controlling a portion of one or more of
said viewability spaces to overlap with a pupil of each of the
viewers.
[0049] In some embodiments, a plurality of the volumetric images
generated by the unit overlap in space.
[0050] Optionally, all the volumetric images generated by said unit
overlap in space.
[0051] Optionally, there is a full overlap between two or more of
the volumetric images.
[0052] In some exemplary embodiments, the optical system comprises
an azimuth determining element that determines the azimuth of at
least one of said viewability spaces in respect of the volumetric
image viewable from said viewability space.
[0053] Optionally, the azimuth determining element comprises a
revolving mirror.
[0054] Optionally, the azimuth determining element is configured to
determine the azimuths of different viewability spaces to differ by
up to 90°, up to 180°, or up to 360°.
[0055] In some embodiments, the system comprises a time sharing
control, controlling each of the viewability spaces to overlap with
said pupil for only a sequence of short periods, and said short
periods are timely spaced such that said viewer sees a continuous
view.
[0056] Optionally, the time sharing control controls the revolution
of said revolving mirror.
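As a back-of-the-envelope reading of this time sharing, assuming (hypothetically) that one revolution of the mirror visits each eye's viewability space exactly once:

    def dwell_time_per_eye(n_eyes, revolutions_per_second=24.0):
        # With one visit to each eye's viewability space per mirror revolution,
        # each eye's "short period" can last at most this many seconds; the 24
        # per-second default echoes the figure used later in the text.
        return 1.0 / (revolutions_per_second * n_eyes)

    # Example: two viewers (four eyes) leave about 10.4 ms per eye per revolution.
    print(dwell_time_per_eye(4))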
[0057] In accordance with an exemplary embodiment of the invention,
there is provided a system comprising:
[0058] an image producing unit, producing a paraxial image; and
[0059] an optical system defining a stage and imaging said paraxial
image to said stage, such that the image on the stage is viewable
from a viewability space,
wherein said optical system comprises an eyepiece and a mirror, the
mirror being configured to direct light to the eyepiece in a
plurality of different azimuthal angles, and wherein
[0060] each of said azimuthal angles determines a different
location for said viewability space; and
[0061] for each of said azimuthal angles the location of the stage
is the same.
[0062] In some embodiments, if one viewer touches a given point and
another viewer sees the same point as being touched, the location
of the stage for the two viewers is considered the same.
Optionally, this allows a tolerance, in accordance with the ability
of a viewer to feel differences in locations.
[0063] Optionally, the location of the stage is considered the same
for all azimuthal angles if touching a point in the figure causes
viewers looking from all of the azimuthal angles to see the same
point touched. In this context, the point is "the same" if the
viewers cannot tell a difference.
[0064] In some exemplary embodiments, the eyepiece has a light
receiving surface receiving light from the paraxial image, and said
light receiving surface has the shape of a revolved curve, which
resides on a plane and is revolved around an axis that is out of said
plane. In an exemplary embodiment of the invention, the light
receiving surface is a cylinder, optionally having walls with
curvature to provide image magnification. Optionally, the curvature
is not symmetric up-down. Optionally or alternatively, the axis
crosses the image, for example at its center.
[0065] Optionally, the shape of the light receiving surface is of
the curve revolved at least 90° around the axis. For
instance, if the curve is of half a circle, the surface is of
quarter of a spherical shell.
[0066] Optionally, the shape of the light receiving surface is of
the curve revolved 360° around the axis, such that said
eyepiece defines an inner cavity. For instance, if the curve is of
half a circle, the defined inner cavity is a sphere.
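In standard notation, this geometry (a restatement of the above, not additional disclosure) is: a planar curve (r(t), z(t)) revolved by an angle \Theta about the z axis sweeps the surface

    S(t, \theta) = \bigl(r(t)\cos\theta,\; r(t)\sin\theta,\; z(t)\bigr),
    \qquad 0 \le \theta \le \Theta,

with \Theta = 2\pi giving the closed eyepiece with an inner cavity; for the half-circle example, r(t) = R\sin t, z(t) = -R\cos t with t \in [0, \pi] yields a sphere of radius R.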
[0067] Optionally, the curve is an arc, making a portion of a
circle.
[0068] Optionally, the curve is a parabola.
[0069] In some embodiments where the curve is an arc, the axis of
revolution does not go through the arc's center. Optionally or
alternatively, the axis crosses the image. Optionally, the axis
crosses the image but not as a perfect vertical. Optionally or
alternatively, the axis wobbles.
[0070] In some embodiments, the axis of revolution is in the
curve's plane, in the middle and perpendicular to the stage.
[0071] Optionally, the curve is concave in relation to the axis of
revolution.
[0072] Optionally, the curve is convex in relation to the axis of
revolution.
[0073] In some exemplary embodiments, the mirror revolves around an
axis. Optionally, the axis around which the mirror revolves is the
axis around which said curve is revolved to obtain the shape of the
inner surface of the eyepiece.
[0074] Optionally, the axis around which the mirror revolves and/or
the axis around which the curve is revolved to form the shape of
the inner surface, is a symmetry axis of the stage.
[0075] In some exemplary embodiments, the system comprises an
optical element, and light going from the paraxial image to the
mirror goes through the optical element.
[0076] Optionally, the optical element comprises a lens.
[0077] Optionally, the optical element comprises a curved
mirror.
[0078] In some embodiments, the light source is inside the cavity
defined by the eyepiece.
[0079] Optionally, the mirror, the image forming unit, and/or the
optical element is inside the cavity.
[0080] In some embodiments, at least a portion of the optical path
between the light source and the stage is inside the cavity.
[0081] In some embodiments of the invention, the image forming unit
comprises a transmissive LCD.
[0082] In some embodiments of the invention, the image forming unit
comprises a reflective LCD.
[0083] In some embodiments of the invention, the image forming unit
comprises a spatial light modulator (SLM). Optionally, the paraxial
image is a computer generated hologram (CGH).
[0084] Optionally, the paraxial image is a paraxial parallax
barrier image.
[0085] Optionally, the paraxial image is a two-dimensional
image.
[0086] Optionally, the paraxial image is a three-dimensional
image.
[0087] Optionally, the image on the stage is volumetric. In this
context, a volumetric image is an image composed of image points,
which are not confined to a single plane, but rather fill a
three-dimensional space. Thus, a volumetric image is an image that
occupies volume, but there is nothing in the volume, except for air
or the like, and light emerging from image points within the
volume. Optionally, the three physical dimensions of a volumetric
image are of the same order of magnitude, for instance, each of the
height, breadth, and depth of the image has a measurement of
between 1 cm and 20 cm, for example, 10 cm. Optionally, larger
measurements are provided for one or more of the dimensions, for
example, 30 cm, 50 cm, 80 cm, 100 cm or more. Optionally, this is
provided using a viewer location which is inside the imaging
system. In an exemplary embodiment of the invention, the diameter
of the light receiving surface and its height are selected to match
desired viewing angles and image sizes. In an exemplary embodiment
of the invention, the stage is not curved or is piecewise curved,
and the image forming unit and/or optics are used to compensate
therefor. Optionally, the image forming unit is not at a center of
the curve, so different magnifications and/or angular sizes may be
generated for different viewing angles at a same distance from the
system.
[0088] In exemplary embodiments, the image producing unit is
configured to produce the same image to be viewed from all said
different azimuthal angles. Optionally, the image is seen as being
different in size from different distances. Optionally or
alternatively, the image moves up or down for different viewing
height angles. Alternatively, the image may be the same even if a
viewer raises his head or lowers it. In an exemplary embodiment of
the invention, however, the system modifies and generates the
display so that any movement, distance, azimuth or height change,
generates a visual effect in the beholder which is the same as if a
real image was floating in space and being viewed. As noted, in
some implementations, such perfect fidelity is not provided and may
be, for example, degraded by not supporting one or more eye position
change types. For example, a same image (rotated to fit) may be
provided from any viewing angle, optionally with different views
for each eye. In another example, vertical repositioning of the
eyes does not provide a change in the part of image observed.
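One way to read the fixed-coordinates requirement above is that only the camera moves while the scene's world coordinates stay put; the following Python sketch illustrates this (the render callable and its parameters are hypothetical placeholders):

    import math

    def view_for_azimuth(scene, azimuth_rad, render, distance=1.0):
        # The scene's world coordinates stay fixed; only the camera moves around
        # the stage, so a change of azimuth produces the same visual effect as
        # walking around a real object floating in space.
        eye = (distance * math.cos(azimuth_rad),
               distance * math.sin(azimuth_rad),
               0.0)
        return render(scene, camera_position=eye, look_at=(0.0, 0.0, 0.0))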
[0089] In some embodiments, the image producing unit is configured
to produce different images to be viewed from different azimuthal
angles. For example, the image producing unit may be configured to
produce partial images of a scene, each viewable from a different
angle, and the system is configured to image the partial images of
the scene to be each viewable from said different angle.
[0090] In some embodiments, the mirror is inclined in respect of
the axis around which the mirror revolves.
An aspect of some embodiments of the invention concerns a method of
imaging a paraxial image to be seen by a viewer having a pupil in a
first location and looking at a second location, the method
comprising:
[0091] generating the paraxial image;
[0092] imaging the paraxial image to the location at which the
viewer looks, such that an image of the paraxial image is viewable
from a viewability space having a broadest portion and narrower
portions;
[0093] selecting, responsive to the location of the viewer's pupil,
a third location; and
[0094] imaging the broadest portion of the viewability space to the
selected third location. Optionally, imaging comprises imaging the
paraxial image to the image viewability space, and simultaneously
imaging the projector's plane to the plane of the viewer's pupil.
For example, the projector in a holographic configuration is the
SLM.
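Read in thin-lens terms (a textbook relation offered purely as an illustration, not the patent's optical design), the simultaneous imaging amounts to satisfying the Gaussian imaging condition for two conjugate pairs at once:

    \frac{1}{s'_{\mathrm{holo}}} + \frac{1}{s_{\mathrm{holo}}} = \frac{1}{f}
    \qquad\text{and}\qquad
    \frac{1}{s'_{\mathrm{pupil}}} + \frac{1}{s_{\mathrm{SLM}}} = \frac{1}{f},

so that the hologram is imaged to the stage while the projector (SLM) plane is imaged to the plane of the viewer's pupil, where the viewability space is broadest.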
[0095] In an exemplary embodiment of the invention, the paraxial
image is a computer generated hologram (CGH) generated with a
spatial light modulator (SLM); and the image of the SLM is at the
broadest portion of the viewability space.
[0096] In an exemplary embodiment, the third location is selected
to overlap with the pupil of the viewer.
[0097] Optionally, the image of the CGH is viewable from a
viewability space, and the third location is selected such that
said viewability space overlaps with the viewer's pupil.
Optionally, imaging comprises imaging the paraxial image to the
image viewability space, and simultaneously imaging the projector
(e.g., SLM in holographic configuration) plane to the plane of the
viewer's pupil.
[0098] In some embodiments, the method comprises:
[0099] receiving an indication of the position of the viewer's
pupil; and
[0100] defining, responsive to said indication, a viewing porthole,
inside which said pupil resides,
wherein the third location is selected such that the viewability
space at least partially overlaps said viewing porthole.
Optionally, the third location is selected such that the
viewability space overlaps the entirety of said viewing
porthole.
[0101] In some embodiments, receiving an indication of the position
of the viewer's pupil comprises:
[0102] receiving an indication of the location of the face of the
viewer; and
[0103] analyzing said indication to obtain an indication of the
position of the viewer's pupil.
[0104] Optionally, imaging the SLM comprises generating an image
that is larger than the SLM.
[0105] Optionally or additionally, the image of the CGH is larger
than the SLM.
[0106] In some exemplary embodiments, the method comprises:
[0107] (a) imaging the SLM responsive to the location of one of the
viewer's eyes; and thereafter
[0108] (b) imaging the SLM responsive to the location of the other
of the viewer's eyes; and
repeating (a) and (b) such that the viewer sees a continuous
image.
[0109] Optionally, a first CGH is projected to a first of the
viewer's eyes, and a second CGH is projected to a second of the
viewer's eyes.
[0110] In some embodiments, the first and second CGH are holograms
of the same scene, as would be seen by the first and second eye of
the viewer, had the scene was at said second location, on which the
CGH is imaged.
[0111] In an exemplary embodiment, the viewer is one of a plurality
of viewers having together a plurality of eyes, and the SLM is
sequentially imaged, each time responsive to the location of
another of said plurality of eyes, such that each viewer sees a
continuous view.
[0112] Optionally, one or more of the images imaged to the second
location is the same each time the SLM is imaged to overlap with an
eye of the same viewer, thus showing the viewer a still or a
similar (e.g., dynamic) image.
[0113] In some embodiments of the invention, the method of imaging
a paraxial image to be seen by a viewer having a pupil in a first
location and looking at a second location comprises:
[0114] projecting light from the SLM to a mirror; and
[0115] moving said mirror to follow movements of the viewer's
eye.
[0116] In some embodiments, the method comprises:
[0117] imaging the SLM to a revolving mirror; and
[0118] revolving said revolving mirror such that the viewer sees a
continuous image.
[0119] Optionally, the SLM is one of a plurality of SLMs, and
images of said plurality of SLMs are projected to the same
revolving mirror.
[0120] Optionally, the mirror is at a focus of a focusing element
of said optical system.
[0121] Optionally, imaging is for at least 24 periods per second,
each of said periods being 1-20 microseconds long.
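Taking these figures at face value (a worked check, not additional disclosure), the per-eye illumination duty cycle is

    24\,\mathrm{s^{-1}} \times (1\ \text{to}\ 20)\,\mu\mathrm{s}
    \approx 2.4\times 10^{-5}\ \text{to}\ 4.8\times 10^{-4},

i.e., each eye is illuminated for well under 0.05% of the time, while the 24 per-second repetition rate is at the threshold commonly taken as sufficient for a continuous visual impression.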
[0122] In some embodiments, the method comprises performing eye
tracking to receive an indication of the position of the viewer's
pupil.
[0123] In some embodiments, receiving an indication of the position
of the viewer's pupil comprises:
[0124] receiving light reflection from said viewer's eye; and
[0125] analyzing said reflection to estimate the location of said
viewer's eye.
[0126] Optionally, receiving an indication of the position of the
viewer's pupil comprises:
[0127] receiving an indication of the position where the viewer's
face is identified; and
[0128] processing said indication to obtain an indication of the
position of the viewer's pupil.
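A minimal sketch of this face-to-pupil processing step, using OpenCV's stock Haar cascades as one possible (assumed, not prescribed) implementation:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def estimate_pupil_positions(gray_frame):
        # Locate the viewer's face, then search the face region for eyes and
        # return approximate pupil positions (eye-box centers) in image coordinates.
        pupils = []
        for (x, y, w, h) in face_cascade.detectMultiScale(gray_frame, 1.3, 5):
            roi = gray_frame[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                pupils.append((x + ex + ew // 2, y + ey + eh // 2))
        return pupils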
[0129] In some embodiments, the paraxial image is a paraxial
parallax barrier image.
[0130] In some embodiments, the paraxial image is a two-dimensional
image.
[0131] In some embodiments, the paraxial image is a
three-dimensional image.
[0132] Optionally, the image of the paraxial image is
volumetric.
There is also provided by an exemplary embodiment of the invention
a method of displaying a scene to a viewer looking towards a given
location, the method comprising:
[0133] estimating the position of the viewer's eye;
[0134] estimating which scene-portions the viewer would have seen
if the scene was at the given location in a given orientation;
and
[0135] imaging to the given location a computer generated hologram
of only a portion of the scene, said portion comprising said
estimated scene-portions, wherein imaging is such that the viewer
can see the hologram.
[0136] In preferred embodiments, the imaging is performed by a
method described above.
[0137] Optionally, the method comprises tracking the position of
the viewer's eye, and imaging the computer generated hologram such
that when the viewer moves he keeps seeing the hologram at said
given location. In some embodiments, this is so even when the
viewer moves, for example, one or two meters.
[0138] An aspect of some embodiments of the invention concerns a
system for imaging a computer-generated hologram (CGH), the system
comprising:
[0139] a hologram generating unit comprising a spatial light
modulator (SLM);
[0140] an optical system configured to image a hologram generated
by said unit to a first location and an image of the SLM to a
second location; and
[0141] a controller configured to control the imaging of said image
of the SLM to the second location, such that the CGH is imaged to a
location between the optical system and the image of the SLM.
Optionally, the controller controls at least one of the optical
system and the hologram generating unit.
[0142] Optionally, said controller is configured to control said
optical system to generate an image of the SLM at said desired
location at a desired orientation.
[0143] In some embodiments, the controller is configured to change
the desired location and/or orientation online.
[0144] In some embodiments, the system includes an input for online
receipt of indications of said desired location and/or
orientation.
[0145] Optionally, the input comprises a receiver for receiving,
from an eye-tracking unit, a signal indicative of the location of a
viewer's eye, and the controller controls said optical system to
project said image of the SLM to said desired location such that
the viewer's eye is within a viewability space comprising said
desired location.
[0146] Optionally, both eyes of the viewer are in the viewability
space at the same time.
[0147] In some embodiments, the optical system comprises: an
objective; an eyepiece; and an optical-length adjusting unit,
controllable to adjust the optical length between the objective and
the eyepiece responsive to a distance between said desired location
and one or more of the eyepiece and the objective.
[0148] Optionally, the optical-length adjusting unit is configured
to adjust said optical length online.
[0149] In some embodiments, the system comprises a mirror that
reflects light reaching the mirror from the objective to a portion
of the eyepiece, wherein the mirror is controllable to reflect said
light to different portions of the eyepiece.
[0150] Optionally, the eyepiece comprises a hollow body or a
transparent solid or fluid.
[0151] In some embodiments, the hollow body is shaped as a solid of
revolution obtainable by revolving a parabola around an axis that
passes through a center of the image.
[0152] Optionally, the hollow cylindrical body is a portion of a
sphere.
[0153] Optionally, the hologram generating unit is inside said
eyepiece.
[0154] In some embodiments, the system comprises a revolving
mirror, revolving around a central axis of said eyepiece, at the
center of the eyepiece.
[0155] Optionally, light coming to said revolving mirror from the
objective is reflected towards the eyepiece.
[0156] Optionally, the light reaches the revolving mirror from the
objective through one or more optical elements.
[0157] Optionally, the one or more optical elements comprises an
optical-length adjusting element, controllable to adjust the
optical length between the objective and the eyepiece.
[0158] Optionally, the one or more optical elements comprises an
optical-light adjusting element, controllable to adjust the optical
light direction with regard to elevation per specific azimuth
toward the eyepiece.
[0159] An aspect of some embodiments of the invention concerns a
system for imaging holograms, the system comprising:
[0160] a hollow eyepiece having an inner wall, the inner wall
defining a central cavity;
[0161] a hologram generating unit, residing inside said cavity;
and
[0162] an objective on an optical path going from a hologram
generated by said hologram generating unit to said inner wall.
[0163] Optionally, the system comprises a plurality of hologram
generating units, each associated with an objective.
[0164] Optionally or alternatively, the system comprises a
plurality of unit cells which optionally rotate with the viewpoint.
Exemplary such cells include hologram generating unit cells and
tracking cells, for example, for tracking an eye or finger or input
element.
[0165] Optionally, the system is configured to create, using said
eyepiece, images of holograms produced by different ones of the
hologram generating units at a single location.
[0166] Optionally, the single location is inside said eyepiece,
optionally in the center of rotation of the eyepiece.
[0167] Optionally, the hollow eyepiece has an inner reflective
surface.
[0168] In some embodiments, the shape of the inner surface is
obtainable by revolving a curve residing on a first plane around an
axis of revolution residing in the same plane.
[0169] Optionally, the axis of revolution is perpendicular to the
middle of the stage. Alternatively, the axis is at an angle to the
stage or wobbles with the rotation.
[0170] Optionally, the inner surface is shaped as a portion of a
spherical shell.
[0171] In some embodiments, each of the plurality of hologram
generating units comprises a spatial light modulator (SLM), and
each of the plurality of unit cells comprises a light-converging
objective placed with a focal point between the SLM and the
hologram produced by said SLM.
[0172] Optionally, each of the plurality of unit cells has an
objective, and an optical-length determining element configured to
determine the optical length between said objective and the
eyepiece.
[0173] Optionally, each of the optical length determining elements
is controllable independently of the other optical length
determining elements.
[0174] Optionally, the system comprises a revolving mirror at the
center of said eyepiece, configured to receive light from the
plurality of unit cells and reflect said light onto the
eyepiece.
[0175] Optionally, one or more optical elements comprises an
optical-light adjusting element, controllable to adjust the optical
light direction with regard to elevation per specific azimuth
toward the eyepiece.
[0176] According to an aspect of some embodiments of the present
invention there is provided a method of floating-in-the-air image
display, including providing a floating-in-the-air display device
at a location, and projecting from the device one or more
floating-in-the-air computer generated images, viewable over a
range of angles including at least 200 arc degrees surrounding the
location.
[0177] According to some embodiments of the invention, projecting
includes selectively projecting over a small angle where a viewer
is expected. According to some embodiments of the invention,
projecting includes selectively projecting using a plurality of
image generation modules. According to some embodiments of the
invention, projecting includes projecting different images with a
same coordinate system, in different directions. According to some
embodiments of the invention, projecting includes projecting a 2D
image.
[0178] According to some embodiments of the invention, projecting
includes projecting an image of an object such that the object's
presentation angle changes with a viewing angle thereof to match an
effect of moving around the object. According to some embodiments
of the invention, projecting includes projecting a 3D image.
[0179] According to some embodiments of the invention, projecting
includes projecting a hologram. According to some embodiments of
the invention, projecting includes adjusting a projection distance
of the image. According to some embodiments of the invention,
projecting includes adjusting a focal distance of the image.
[0180] According to some embodiments of the invention, projecting
includes projecting different images for different eyes of a same
viewer. According to some embodiments of the invention, projecting
includes projecting from a single point in the device. According to
some embodiments of the invention, projecting includes projecting
images with a shared coordinate system. According to some
embodiments of the invention, projecting includes imaging the
images at a location not occupied by a display substrate.
[0181] According to an aspect of some embodiments of the present
invention there is provided a hologram display device projecting a
floating-in-the-air computer generated hologram simultaneously
viewable over a range of viewing angles of at least 180
degrees.
[0182] According to some embodiments of the invention, the hologram
shares a same coordinate set from viewing points separated by at
least 20 degrees. According to some embodiments of the invention,
including a hologram generating unit and at least one lens for
projecting the hologram. According to some embodiments of the
invention, including at least one distance control unit. According
to some embodiments of the invention, including at least one
hologram aiming mechanism.
[0183] According to an aspect of some embodiments of the present
invention there is provided a method of displaying content to
multiple viewers, the method including forming multiple volumetric
images, each of at least a portion of the content and each viewable
from its own viewability space, and overlapping a portion of one or
more of the viewability spaces with a pupil of each of the
viewers.
[0184] According to some embodiments of the invention, the
viewability spaces can cover over 90 degrees.
[0185] According to some embodiments of the invention, the content
is a single scene, and each of the volumetric images is of a face
of the single scene as viewable from a different viewing point.
[0186] According to some embodiments of the invention, the viewers
are at different azimuthal angles around the space occupied by one
of the volumetric images. According to some embodiments of the
invention, the different azimuthal angles span an entire circle.
According to some embodiments of the invention, the different
azimuthal angles span at least half a circle.
[0187] According to some embodiments of the invention, two of the
viewers are distanced from each other at least 1 meter.
[0188] According to some embodiments of the invention, the viewers
see the images at a same time.
[0189] According to some embodiments of the invention, a
viewability space overlaps with an eye of a viewer only for a
sequence of short periods, and the short periods are timely spaced
such that the viewer sees a continuous view.
[0190] According to an aspect of some embodiments of the present
invention there is provided a system for displaying content to
multiple viewers, the system including a unit for generating
volumetric images each of at least a portion of the content and
each viewable from its own viewability space, and an optical
system, controlling a portion of one or more of the viewability
spaces to overlap with a pupil of each of the viewers.
[0191] According to an aspect of some embodiments of the present
invention there is provided a system according to claim 29, wherein
the optical system includes an azimuth determining element that
determines the azimuth of at least one of the viewability spaces in
respect of the volumetric image viewable from the viewability
space.
[0192] According to an aspect of some embodiments of the present
invention there is provided a system including an image producing
unit, producing a paraxial image, and an optical system defining a
stage and imaging the paraxial image to the stage, such that the
image on the stage is viewable from a viewability space, wherein
the optical system includes an eyepiece and a mirror, the mirror
being configured to direct light to the eyepiece in a plurality of
different azimuthal angles, and wherein each of the azimuthal
angles determines a different location for the viewability space,
and for each of the azimuthal angles the location of the stage is
the same.
[0193] According to some embodiments of the invention, for at least
two different azimuthal angles, two different elevations are
provided.
[0194] According to an aspect of some embodiments of the present
invention there is provided a method of imaging a paraxial image to
be seen by a viewer having a pupil in a first location and looking
at a second location, the method including generating the paraxial
image, imaging the paraxial image to the location at which the
viewer looks, such that an image of the paraxial image is viewable
from a viewability space having a broadest portion and narrower
portions, selecting, responsive to the location of the viewer's
pupil, a third location, and imaging the broadest portion of the
viewability space to the selected third location.
[0195] According to an aspect of some embodiments of the present
invention there is provided a method of displaying a scene to a
viewer looking towards a given location, the method including
estimating the position of the viewer's eye, estimating which
scene-portions the viewer would have seen if the scene was at the
given location in a given orientation, and imaging to the given
location a computer generated hologram of only a portion of the
scene, the portion including the estimated scene-portions, wherein
imaging is such that the viewer can see the hologram.
[0196] According to some embodiments of the invention, including
tracking the position of the viewer's eye, and imaging the computer
generated hologram such that when the viewer moves he keeps seeing
the hologram at the given location.
[0197] According to an aspect of some embodiments of the present
invention there is provided a system for imaging a
computer-generated hologram (CGH), the system including a hologram
generating unit including a spatial light modulator (SLM), an
optical system configured to image a hologram generated by the unit
to a first location and an image of the SLM to a second location,
and a controller configured to control the imaging of the image of
the SLM to the second location, such that the CGH is imaged to a
location between the optical system and the image of the SLM.
[0198] According to some embodiments of the invention, the
controller is configured to control the optical system to generate
the image of the SLM at the desired location at a desired
orientation.
[0199] According to some embodiments of the invention, the
controller is configured to change the desired location and/or
orientation online.
[0200] According to some embodiments of the invention, including an
input for online receipt of indications of the desired location
and/or orientation.
[0201] According to some embodiments of the invention, the input
includes a receiver for receiving, from an eye-tracking unit, a
signal indicative of the location of a viewer's eye, and the
controller controls the optical system to project the image of the
SLM to the desired location such that the viewer's eye is within a
viewability space including the desired location.
[0202] According to some embodiments of the invention, the optical
system includes an objective, an eyepiece, and an optical-length
adjusting unit, controllable to adjust the optical length between
the objective and the eyepiece responsive to a distance between the
desired location and one or more of the eyepiece and the
objective.
[0203] According to some embodiments of the invention, the
optical-length adjusting unit is configured to adjust the optical
length online.
[0204] According to some embodiments of the invention, including a
mirror that reflects light reaching the mirror from the objective
to a portion of the eyepiece, wherein the mirror is controllable to
reflect the light to different portions of the eyepiece.
[0205] According to some embodiments of the invention, the eyepiece
includes a hollow body. According to some embodiments of the
invention, the hollow body is shaped as a solid of revolution
obtainable by revolving a parabola around an axis that is not on
the same plane as the parabola. According to some embodiments of
the invention, the hollow cylindrical body is a portion of a
sphere.
[0206] According to some embodiments of the invention, the hologram
generating unit is inside the eyepiece.
[0207] According to some embodiments of the invention, including a
revolving mirror, revolving around a central axis of the
eyepiece.
[0208] According to some embodiments of the invention, light coming
to the revolving mirror from the objective is reflected towards the
eyepiece.
[0209] According to some embodiments of the invention, the light
reaches the revolving mirror from the objective through one or more
optical elements.
[0210] According to some embodiments of the invention, the one or
more optical elements includes an optical-length adjusting element,
controllable to adjust the optical length between the objective and
the eyepiece.
[0211] According to an aspect of some embodiments of the present
invention there is provided a system for imaging, the system
including an eyepiece having an inner reflecting wall covering at
least 90 degrees arc angle, and an image generating unit, residing
inside the cavity, wherein the eyepiece is configured to project
an image from the image generation unit to one or more viewers.
[0212] According to some embodiments of the invention, the image
generation unit includes at least one element which moves so as to
project the image over a range of angles.
[0213] According to some embodiments of the invention, the system
is configured so that the viewer is surrounded by the wall.
[0214] According to some embodiments of the invention, the eyepiece
is hollow, defining a cavity, and the image generating unit includes
a hologram generating unit, residing inside the cavity, and
including an objective on an optical path going from a hologram
generated by the hologram generating unit to the inner
wall.
[0215] According to some embodiments of the invention, including a
plurality of hologram generating units, each associated with an
objective.
[0216] According to some embodiments of the invention, configured
to create, using the eyepiece, images of holograms produced by
different ones of the hologram generating units at a single
location.
[0217] According to some embodiments of the invention, the single
location is inside the eyepiece. According to some embodiments of
the invention, the hollow eyepiece has an inner reflective
surface.
[0218] According to some embodiments of the invention, the shape of
the inner surface is obtainable by revolving a curve residing on a
first plane around an axis of revolution residing on a second
plane, other than the first plane.
[0219] According to some embodiments of the invention, the axis of
revolution is perpendicular to the first plane.
[0220] According to some embodiments of the invention, the inner
surface is shaped as a portion of a spherical shell.
[0221] According to some embodiments of the invention, each of the
plurality of hologram generating units includes a spatial light
modulator (SLM), and each of the plurality of unit cells includes a
light-converging objective placed with a focal point between the
SLM and the hologram produced by the SLM.
[0222] According to some embodiments of the invention, each of the
plurality of unit cells has an objective, and an optical-length
determining element configured to determine the optical length
between the objective and the eyepiece.
[0223] According to some embodiments of the invention, each of the
optical length determining elements is controllable independently
of the other optical length determining elements.
[0224] According to some embodiments of the invention, including a
revolving mirror at the center of the eyepiece, configured to
receive light from the plurality of unit cells and reflect the
light onto the eyepiece.
[0225] According to some embodiments of the invention, the
controller controls the optical system. According to some
embodiments of the invention, the controller controls the hologram
generating unit.
[0226] According to an aspect of some embodiments of the present
invention there is provided a method for implementing a
floating-in-the-air user interface, including displaying a first
image in a display space of a first floating-in-the-air display,
inserting a real object into the display space of the first
floating-in-the-air display, locating a location of the real object
within the display space of the first floating-in-the-air display,
and providing the location as input to the floating-in-the-air user
interface.
[0227] According to some embodiments of the invention, further
including displaying a second image in the display space of the
first floating-in-the-air display, based, at least in part, on the
location.
[0228] According to some embodiments of the invention, the
floating-in-the-air display is a volumetric display.
[0229] According to some embodiments of the invention, the second
image is displayed in near real-time after the inserting of the real
object into the display space. According to some embodiments of the
invention, the delay is less than 1/24 of a second.
[0230] According to some embodiments of the invention, the first
image is a blank image, and the second image includes a display of
the location.
[0231] According to some embodiments of the invention, the real
object is a finger.
[0232] According to some embodiments of the invention, further
including displaying an actuator in the first image, moving the
real object substantially close to the actuator, and interpreting
the location input as the real object actuating the actuator.
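By way of a non-limiting editorial illustration of such actuation, the proximity test may be sketched in code; the function name, the coordinates, and the 1 cm activation radius below are assumptions, not details of the application:

    import math

    def is_actuated(finger_xyz, actuator_xyz, activation_radius=0.01):
        """Interpret the tracked location as actuating the actuator when
        the real object comes substantially close to the actuator's
        displayed position (radius in meters; an assumed value)."""
        return math.dist(finger_xyz, actuator_xyz) <= activation_radius

    # A fingertip 5 mm from a displayed button is interpreted as a press.
    print(is_actuated((0.100, 0.200, 0.300), (0.103, 0.204, 0.300)))  # True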
[0233] According to some embodiments of the invention, further
including moving the real object, tracking the location of the real
object over time, and interpreting the location input as the real
object manipulating at least a part of the first
image.
[0234] According to some embodiments of the invention, further
including sending control commands to a robotic device based, at
least in part, on the interpreting.
[0235] According to some embodiments of the invention, the real
object includes a plurality of real objects, and the method further
includes using the locations of each of the real objects as location
inputs for the volumetric user interface.
[0236] According to some embodiments of the invention, the second
image is different from the first image.
[0237] According to some embodiments of the invention, the second
image is substantially equal to the first image plus an added
indicator of the location input.
[0238] According to some embodiments of the invention, the location
includes a location of what is substantially a point on the real
object.
[0239] According to some embodiments of the invention, further
including capturing a sub-image based, at least in part, on the
location. According to some embodiments of the invention, the
sub-image includes a voxel.
[0240] According to some embodiments of the invention, the location
further includes a plurality of locations based, at least in part,
on a plurality of locations of the point on the real object.
[0241] According to some embodiments of the invention, a path
connecting the plurality of locations is displayed by the first
floating-in-the-air display.
[0242] According to some embodiments of the invention, the
plurality of locations includes two locations, and further
including defining a line in three dimensions based, at least in
part, on the two locations.
[0243] According to some embodiments of the invention, the
plurality of locations includes three locations not in a straight
line, and further including defining a plane in three dimensions
based, at least in part, on the three locations.
[0244] According to some embodiments of the invention, the
plurality of locations includes four locations not in one plane,
and further including defining a volume in three dimensions based,
at least in part, on the four locations.
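As a non-limiting editorial illustration of paragraphs [0242]-[0244], the line, plane, and volume constructions from tracked locations may be sketched as follows; the Python/NumPy formulation and all names are assumptions for illustration only:

    import numpy as np

    def line_from_two(p0, p1):
        """A 3D line as (point, unit direction) from two tracked locations."""
        d = np.asarray(p1, float) - np.asarray(p0, float)
        return np.asarray(p0, float), d / np.linalg.norm(d)

    def plane_from_three(p0, p1, p2):
        """A plane as (point, unit normal) from three non-collinear locations."""
        n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
        return np.asarray(p0, float), n / np.linalg.norm(n)

    def volume_from_four(p0, p1, p2, p3):
        """Volume of the tetrahedron spanned by four non-coplanar locations."""
        m = np.stack([np.subtract(p1, p0), np.subtract(p2, p0), np.subtract(p3, p0)])
        return abs(np.linalg.det(m)) / 6.0

    # A unit right-corner tetrahedron has volume 1/6.
    print(volume_from_four((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))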
[0245] According to some embodiments of the invention, further
including implementing, based, at least in part, on the plurality
of locations, one function of the group of functions: zooming in on
the first image, zooming out of the first image, cropping the first
image, rotating the first image, slicing the first image, measuring
a length within the first image, measuring an area within the first
image, and measuring a volume within the first image.
[0246] According to some embodiments of the invention, further
including implementing a sub-image capture based, at least in part,
on the plurality of locations.
[0247] According to some embodiments of the invention, further
including marking the point so as to substantially contrast with a
rest of the real object.
[0248] According to some embodiments of the invention, further
including performing the marking by a substantially compact
light source.
[0249] According to some embodiments of the invention, the location
includes a line defined by a long axis of the real object.
[0250] According to some embodiments of the invention, the location
includes a frame corresponding to a shape of the real object.
[0251] According to some embodiments of the invention, further
including the first floating-in-the-air display displaying a second
image at substantially the same time as the first
floating-in-the-air display displays the first image, and wherein
the first image is displayed to a first user, and the second image
is displayed to a second user.
[0252] According to some embodiments of the invention, the first
image and the second image appear to be in a same location in
space.
[0253] According to some embodiments of the invention, further
including a second floating-in-the-air display displaying a second
image at substantially the same time as the first
floating-in-the-air display displays the first image, and wherein
the first image is displayed to a first user, and the second image
is displayed to a second user.
[0254] According to some embodiments of the invention, the first
floating-in-the-air display is substantially remote from the second
floating-in-the-air display, and further including a communication
channel between the first floating-in-the-air display and the
second floating-in-the-air display.
[0255] According to some embodiments of the invention, the first
display and the second display are used to implement a remote
medicine interaction between the first user and the second
user.
[0256] According to some embodiments of the invention, the first
display and the second display are used to implement a
whiteboard-like collaborative sharing between the first display and
the second display.
[0257] According to some embodiments of the invention, the first
display and the second display are used to implement remote
teaching by a user at the first floating-in-the-air display.
[0258] According to some embodiments of the invention, the first
display and the second display are used to implement a game in
which the first user and the second user participate.
[0259] According to some embodiments of the invention, the first
display differs from the second display. According to some
embodiments of the invention, the first display displays more
content than the second display.
[0260] According to an aspect of some embodiments of the present
invention there is provided a method for enabling a viewing of
a dynamically generated floating-in-the-air displayed object and a
real object in a same display space, including displaying the
volumetrically displayed object on a first floating-in-the-air
display, and inserting the real object into the display space of
the first floating-in-the-air display.
[0261] According to some embodiments of the invention, the
floating-in-the-air display is a volumetric display.
[0262] According to some embodiments of the invention, dynamically
generated includes computer generated.
[0263] According to some embodiments of the invention, further
including comparing the real object to at least a portion of the
floating-in-the-air displayed object.
[0264] According to some embodiments of the invention, the real
object includes a standard against which objects are measured, and
the comparing enables a judgment of conformance to standard.
[0265] According to some embodiments of the invention, the real
object is a medical device for inserting into a body, and the at
least a portion of the floating-in-the-air displayed object is at
least a portion of the body, generated from a three-dimensional
dataset.
[0266] According to some embodiments of the invention, the
comparing further includes measuring a difference in size between
the real object and the at least a portion of the
floating-in-the-air displayed object.
[0267] According to some embodiments of the invention, the
measuring the difference in size includes at least one of the group
consisting of: a difference in length, a difference in planar area,
a difference in surface area, and a difference in volume.
[0268] According to an aspect of some embodiments of the present
invention there is provided a method for enabling a viewing of a
floating-in-the-air displayed body part from a three dimensional
data set of the body and a volumetrically displayed virtual object
from a three dimensional data set of one or more virtual objects,
including displaying the floating-in-the-air displayed body part on
a first floating-in-the-air display, and overlaying the virtual
object into the display space of the first floating-in-the-air
display.
[0269] According to some embodiments of the invention, the virtual
object and the floating-in-the-air displayed body part are moved
relative to each other in the display space of the first
floating-in-the-air display.
[0270] According to some embodiments of the invention, further
including comparing the virtual object to at least a portion of the
body part.
[0271] According to an aspect of some embodiments of the present
invention there is provided a user interface including a first
floating-in-the-air display, and a first input unit adapted to
accept input from a first location within a first display space,
the first display space being a volume within which objects are
displayed by the first floating-in-the-air display.
[0272] According to some embodiments of the invention, the
floating-in-the-air display is a volumetric display. According to
some embodiments of the invention, the floating-in-the-air display
is a two-dimensional floating-in-the-air display.
[0273] According to some embodiments of the invention, the first
floating-in-the-air display is adapted to display the first
location.
[0274] According to some embodiments of the invention, further
including a second floating-in-the-air display, wherein the second
floating-in-the-air display displays an identical display as the
first floating-in-the-air display, including displaying the first
location.
[0275] According to some embodiments of the invention, further
including a second input unit adapted to accept input from a second
location within a second display space, the second display space
being a volume within which objects displayed by the second
floating-in-the-air display appear, and wherein the first
floating-in-the-air display is adapted to display an identical
display as the second floating-in-the-air display, including
displaying the second location.
[0276] According to some embodiments of the invention, the input
from the first location and the input from the second location are
both displayed.
[0277] According to some embodiments of the invention, the first
floating-in-the-air display is located in a different room than the
second floating-in-the-air display. According to some embodiments
of the invention, the first floating-in-the-air display is at least
100 meters away from the second floating-in-the-air display.
[0278] According to some embodiments of the invention, the first
floating-in-the-air volumetric display is adapted to provide
sensory feedback based, at least in part, on the location and on
what is being displayed in the location.
[0279] According to some embodiments of the invention, the first
floating-in-the-air volumetric display is adapted to display a
hologram.
[0280] According to an aspect of some embodiments of the present
invention there is provided a method for implementing a
floating-in-the-air user interface, including displaying a first
image in a display space of a first floating-in-the-air display,
inserting a real object into the display space, detecting a location
of the real object within the display space, using the location as
an input for the user interface, and highlighting the location in
the display space.
[0281] According to an aspect of some embodiments of the present
invention there is provided a user interface including means for
displaying a floating-in-the-air display, means for accepting an
input from a location within the display space, the display space
being a volume within which objects displayed by the
floating-in-the-air display appear.
[0282] Unless otherwise defined, all technical and/or scientific
terms used herein have the same meaning as commonly understood by
one of ordinary skill in the art to which the invention pertains.
Although methods and materials similar or equivalent to those
described herein can be used in the practice or testing of
embodiments of the invention, exemplary methods and/or materials
are described below. In case of conflict, the patent specification,
including definitions, will control. In addition, the materials,
methods, and examples are illustrative only and are not intended to
be necessarily limiting.
[0284] Implementation of the method and/or system of embodiments of
the invention can involve performing or completing selected tasks
manually, automatically, or a combination thereof. Moreover,
according to actual instrumentation and equipment of embodiments of
the method and/or system of the invention, several selected tasks
could be implemented by hardware, by software or by firmware or by
a combination thereof using an operating system.
[0285] For example, hardware for performing selected tasks
according to embodiments of the invention could be implemented as a
chip or a circuit. As software, selected tasks according to
embodiments of the invention could be implemented as a plurality of
software instructions being executed by a computer using any
suitable operating system. In an exemplary embodiment of the
invention, one or more tasks according to exemplary embodiments of
method and/or system as described herein are performed by a data
processor, such as a computing platform for executing a plurality
of instructions. Optionally, the data processor includes a volatile
memory for storing instructions and/or data and/or a non-volatile
storage, for example, a magnetic hard-disk and/or removable media,
for storing instructions and/or data. Optionally, a network
connection is provided as well. A display and/or a user input
device such as a keyboard or mouse are optionally provided as
well.
BRIEF DESCRIPTION OF THE DRAWINGS
[0286] Some embodiments of the invention are herein described, by
way of example only, with reference to the accompanying drawings.
With specific reference now to the drawings in detail, it is
stressed that the particulars shown are by way of example and for
purposes of illustrative discussion of embodiments of the
invention. In this regard, the description taken with the drawings
makes apparent to those skilled in the art how embodiments of the
invention may be practiced.
[0287] In the drawings:
[0288] FIG. 1A is a schematic illustration of a unit for generating
holograms;
[0289] FIG. 1B is a schematic illustration of a viewability space
of some points in a hologram;
[0290] FIG. 2A is a schematic illustration of a system for imaging
a hologram according to an embodiment of the invention;
[0291] FIG. 2B is a schematic illustration of a viewability space
of some points in an image hologram produced by the system of FIG.
2A;
[0292] FIG. 3A is a ray tracing diagram explaining production of an
image hologram by a projection system according to an embodiment of
the invention;
[0293] FIG. 3B is a ray tracing diagram explaining production of an
image SLM by the same projection system referred to in FIG. 3A;
[0294] FIG. 4 is a schematic illustration of an optical system
designed to allow tuning of the projections of the SLM and the
hologram to desired locations according to an embodiment of the
invention;
[0295] FIG. 5 is a schematic illustration of a system useful for
projecting a hologram such that the hologram has a broad
viewability space, in accordance with an embodiment of the
invention;
[0296] FIG. 6A is a pictorial presentation of a 360°
walk-around image projection system according to an embodiment of
the invention;
[0297] FIG. 6B is a schematic presentation of the system
pictorially presented in FIG. 6A;
[0298] FIG. 6C is a pictorial presentation of a 360°
walk-around image projection system with a tilted mirror according
to an embodiment of the invention;
[0299] FIG. 7 is a pictorial presentation of a projection system
with two optical systems having a common eyepiece 320 and a common
revolving mirror according to an embodiment of the invention;
[0300] FIG. 8 is a schematic illustration of a mirror operable as a
low-pass filter;
[0301] FIG. 9 is a schematic illustration of a tracking unit
according to an embodiment of the invention;
[0302] FIG. 10 is a simplified block diagram of a system according
to an embodiment of the invention;
[0303] FIG. 11 is a flowchart of actions taken in performing a
method according to an embodiment of the invention;
[0304] FIG. 12 is a flowchart of actions taken in a method of
producing a hologram to be viewed by a viewer looking at the
hologram from a given angle according to an embodiment of the
invention;
[0305] FIG. 13A is a simplified illustration of a user appearing to
touch a finger to an object being displayed by a user interface
constructed and operational according to an embodiment of the
invention;
[0306] FIG. 13B is a simplified illustration of a user appearing to
touch a pointer to an object being displayed by a user interface
constructed and operational according to an embodiment of the
invention;
[0307] FIG. 13C is a simplified illustration of a user inserting a
frame into a display space of a user interface constructed and
operational according to an embodiment of the invention;
[0308] FIG. 14 is a simplified illustration of two users
interacting with the same object being displayed by a user
interface constructed and operational according to an embodiment of
the invention;
[0309] FIG. 15 is a simplified illustration of two users
interacting with the same object being displayed by a user
interface constructed and operational according to an embodiment of
the invention; and
[0310] FIG. 16 is a simplified illustration of a user comparing a
real object to an object being displayed by a user interface
constructed and operational according to an embodiment of the
invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
[0311] Overview
[0312] The present invention, in some embodiments thereof, relates
to methods and devices for displaying images and, more
particularly, but not exclusively, to such methods and devices that
allow viewing three-dimensional images from a broad viewing angle.
Some embodiments of the invention also allow viewing
two-dimensional images from a broad viewing angle.
[0313] The present invention, in some embodiments thereof, relates
to a computer generated user interface and, more particularly, but
not exclusively, to a floating-in-the-air user interface.
[0314] In some embodiments, a viewer can walk around a stage, and
watch different faces of a scene projected on the stage, each face
viewable from another viewing angle, just as when looking at the
real, original, scene. For instance, a viewer walking around a
hologram of the globe can see Europe, when watching the hologram
from one point, America when watching the hologram from another,
and so on. In some embodiments, different viewers see different
data, possibly aligned to same coordinates.
[0315] Additionally or alternatively, the viewer can go back and
forth, coming closer to the stage or more distant from the stage,
adjusting the eye to the distance from the image similarly to focus
adjustment required when looking at a real object and changing the
distance thereto. In an exemplary embodiment of the invention, the
display device can adjust a distance of projection according to a
viewer distance, for example, over a range factor of 1.2, 2, 3, 4,
or intermediate or greater factors, for example, moving a
projection point by a distance of 5 cm, 10 cm, 20 cm, 30 cm, or
intermediate or greater amounts, as needed.
[0316] Optionally, the viewer can freely walk and see the hologram
from wherever he is, as long as he is looking at the stage from
within the system's viewing zone.
[0317] Optionally, the stage is a physical construct.
Alternatively, the stage is an imaginary volume in space, on which
the hologram is projected and towards which the viewer is looking.
A hologram on an imaginary stage looks like it is floating in the
air.
[0318] In some embodiments, a viewer can touch the hologram. Such a
viewer will not necessarily feel that his finger is touching
anything, but will see his finger touching the hologram.
Optionally, an artificial touch sensation is evoked in the viewer,
as known in the art of artificial reality, for example, by the user
wearing a vibrating ring or glove, or by projecting a beam of light
from the system or a different location at the finger so that the
finger is heated and feels the heating. Optionally or
alternatively, a beam of sound, for example, ultrasound, is
projected and/or modulated to cause a sensation.
[0319] Optionally, only viewers that are in a viewability space of
the hologram can actually view the hologram. Others, who are not
within the viewability space of the hologram but are watching the
stage, will see the viewer's finger, but not the hologram, which
the viewer is touching. Optionally, a hologram of the same scene,
as viewable from the viewing angle of each of a plurality of
viewers, is displayed to each of the viewers, and when one of the
viewers touches the hologram all the other viewers see the finger
of the first viewer touching the hologram. Optionally, all viewers
see a same hologram (or other image type). Alternatively, different
viewers see different images, for example, a same structure with
different data thereon.
[0320] Optionally, all the viewers that see the finger touching the
hologram see the finger touching the hologram at the same place
(for instance, the hologram is of a doll, and all viewers see a
finger touching the left eyelid of the doll).
[0321] In some embodiments, a viewer can go around a hologram and
see the hologram from all sides, as if the viewer walked around a
physical object. In some such embodiments, as long as the viewer's
eye is within a certain space, referred to as a first viewing
porthole, a first hologram is imaged to the stage, showing the
scene as would have been seen from points in the first viewing
porthole. Thanks to the holographic nature of the image of the
first hologram, a viewer moving his eyes within a viewing porthole
can detect different features of the scene from different points
within the viewing porthole. Optionally, when the viewer's eye is
out of the first porthole, a second porthole covering the viewer's
eye is defined, and a second hologram of the scene is projected to
the stage, showing the scene as would have been seen from points in
the second porthole. In some embodiments, a hologram of the scene
as seen from every possible porthole is continuously projected;
however, these embodiments require much harder computational effort
and do not necessarily improve the viewer's watching experience. In
some more computation-efficient embodiments, the position of the
viewer's eye (or both eyes) is estimated, a porthole is defined
around it, and only a hologram of a portion of the scene that would
be viewable from within the estimated porthole is projected to the
stage.
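A non-limiting editorial sketch of this porthole-switching logic follows; the spherical porthole model, the 5 cm radius, and all names are assumptions rather than details of the application:

    import math

    porthole_radius = 0.05   # meters; assumed porthole half-width
    porthole_center = None   # eye position the current hologram was computed for

    def update_porthole(tracked_eye_xyz, render_hologram_for):
        """Recompute the hologram only when the tracked eye leaves the
        porthole defined around the previously used eye position."""
        global porthole_center
        if (porthole_center is None or
                math.dist(tracked_eye_xyz, porthole_center) > porthole_radius):
            porthole_center = tracked_eye_xyz
            # Project only the portion of the scene viewable from the new porthole.
            render_hologram_for(tracked_eye_xyz)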
[0322] In some embodiments, the viewer can manipulate the image.
For example, the viewer can move, rotate, scale, or otherwise
manipulate the image. In some embodiments, the viewer can move the
stage, instead of or in addition to moving around the stage.
Additionally or alternatively, the viewer can change the portion of
the scene shown on the stage. For instance, a viewer looking at a
hologram of the globe can rotate the globe around an axis going
through the two poles or around any other axis. Additionally or
alternatively, the viewer can translate the hologram and/or the
stage, zoom into the image, zoom out, and so on. In some
embodiments, zooming in is not accompanied by resolution loss,
because a larger hologram of the watched smaller portion is imaged
at the same resolution at which the larger portion was imaged before
the viewer zoomed in.
[0323] In some embodiments of the invention, holograms of the scene
as viewable from different viewing portholes are projected
simultaneously, such that different viewers, looking at the stage
from different viewing portholes, can simultaneously watch
holograms of the scene, each from his own viewing point.
Optionally, each of the viewers can walk around the stage
independently of the other. Optionally, each viewer is identified,
for example, based on an image of his face, based on a marker
(e.g., infra-red readable marker on the face) and/or based on other
identification technology, such as RFID. Optionally, each user
is shown data and/or viewing parameters personalized for the
viewer; for example, distance or size may be set for a user's
eyesight acuity and accommodation ability, and data content may be
set per user preference (e.g., an external view of an object or an
inner view of the object).
[0324] In some embodiments, different holograms, each viewable from
a different viewing porthole, are sequentially projected to a
single stage in a high enough frequency to allow viewers, each
looking through one of the viewing portholes, to see each a
different and continuous image hologram. This way, different
viewers can see simultaneously and continuously different
holograms, or, in some embodiments, different holographic movies,
or, in some embodiments, different viewers can see simultaneously
and continuously different 2D content, for instance, video or TV,
on a single display, or different 3D content (non-hologramic,
focal-length controlled or hologramic) optionally with a shared
coordinate system.
[0325] An aspect of some embodiments of the invention relates to a
method of displaying paraxial images, for example, holograms or
paraxial parallax barrier images.
[0326] A paraxial image or object is one wherein each point emits
light rays that span a cone with a small solid angle, typically of
about 3°, and the heights of these cones are approximately
parallel to each other (e.g., or otherwise matched to a rotation
from vertical of a viewer's eyes). In general, an entire paraxial
object or image can be seen only by a viewer that has an eye pupil
that overlaps with all of these cones. If the pupil overlaps with
only some of the cones, only some of the points on the paraxial
image or object are viewed, those at which the overlapping cones
originate. Therefore, paraxial images or objects are viewable from
a relatively narrow space, referred to herein as a viewability
space.
[0327] In the following description, there is sometimes a reference
to a semi-paraxial image. This term refers to an image wherein each
point emits light rays that span a cone with a small solid angle,
but the axes of these cones are not parallel to each other. In some
embodiments of the invention, the cones converge at the viewability
space; thus, at the viewability space the entire image can be
seen.
[0328] In some embodiments of the invention, a hologram is a
reproduction of a light field produced by a scene. In these
embodiments, a hologram of a scene seems to a human spectator
identical to the scene itself. Optionally, the reproduced light
field is identical to that produced by the scene. Optionally, the
resemblance between the original light field and the one reproduced
by the hologram is in the field's phase and intensity, forming a
monochromatic hologram. Alternatively, the wavelength of the
emanated light is also reproduced, forming a colored hologram.
[0329] In some embodiments, a hologram is a reproduction of a
Fourier transform of the scene. When such a hologram is viewed
through a lens, the scene appears in the Fourier plane of the
lens.
[0330] In some embodiments, a hologram is formed by a light beam,
which interacts with a spatial light modulator (SLM). A spatial
light modulator is a medium that has different optical behavior at
different points. The term SLM is used herein to denote both a
static medium with different optical behavior at different points,
such as a slotted film, and a dynamic medium, having different
points with controllable optical behavior. SLMs of the latter kind
are routinely used in the field of computerized generation of
holograms. The spatial light modulator (SLM) is designed or
controlled such that a light beam interacting with the SLM (for
instance, reflected from the SLM or transmitted through the SLM)
creates a holographic reconstruction of the scene. Many ways of producing
SLMs for scenes are known in the field of holography, and each of
these may be used for creating a hologram to be projected or imaged
according to various embodiments of the present invention. It is
noted that when a non-holographic image is shown, an SLM which does
not modify phase may be used (e.g., a DMD or LCD). Optionally or
alternatively, a non-coherent light source may be used.
[0331] In the following, reference is made mainly to computer
controlled SLMs; nevertheless, in some embodiments other SLMs, for
instance, plates or films slotted as to form a static hologram, may
also be utilized.
[0332] A computer controlled SLM is made of a plurality of pixels
(for instance, 500×500 pixels) and the optical behavior of
each pixel of the SLM may be controlled by a computer,
independently of the other pixels. Such SLMs are currently
commercially available from a variety of sources, for instance,
from Fourth Dimension Displays, London. Some commercially available
SLMs are transmission based, that is, light should be transmitted
through them to create the object hologram, and some are
reflective, that is, the light should be reflected from them to
form the object hologram. One kind of reflective SLM is known in
the field as LCoS.
[0333] Some embodiments are limited to dealing with still scenes.
In some embodiments, the scene changes over time, like in a video
motion picture. In these embodiments, the hologram optionally
changes at a rate suitable for providing a sense of continuous
motion. As is well known from the art of cinema, this rate is about
16 or 24 views per second or higher.
[0334] In some embodiments of the invention, holograms are paraxial.
That is, each point in the hologram emits light rays spanning a
cone having a small solid angle, typically of about 3°, and
all heights of these cones are approximately parallel to each other
and to the optical axis of the system that created the hologram.
Paraxial holograms can be seen only by a viewer that faces them
directly, looking in the direction of the optical axis. Therefore,
paraxial holograms, as well as paraxial images in general, are
viewable from a relatively narrow viewability space, as illustrated
in FIG. 1B.
[0335] As stated above, an aspect of some embodiments of the
invention relates to displaying holograms. In some embodiments of
the invention, displaying a hologram comprises generating a
hologram (hereinafter referred to as an object hologram) and
optically creating an image of the created hologram (hereinafter
referred to as an image hologram). At least some embodiments of the
invention relate to displaying paraxial objects, including, but not
necessarily limited to holograms. For convenience, mainly holograms
are referred to below, but unless otherwise stated explicitly,
other paraxial objects may be treated similarly. Images formed in
embodiments of the invention from paraxial images or objects and
viewed by the viewer, such as the image hologram, are optionally
semi-paraxial.
[0336] An aspect of some embodiments of the invention relates to
displaying images to be shown from a wide angle around the display.
In some embodiments, the angle is larger than 180°, for
instance, 270°, or even 360°, or an intermediate angle.
Optionally, the image viewed from the broad viewing angle is a
hologram. Examples of images that are displayed to viewers
positioned around the display include: holograms, auto-stereoscopic
images, stereoscopic images, focus controlled 3D or other images
(e.g., using optics to set a perceived focal distance to the image)
and 2D images.
[0337] In some embodiments, displaying an object hologram comprises
creating an image hologram that is different from the object
hologram. For example, the image hologram may be larger than the
object hologram and/or may be seen from a broader viewing angle
than the object hologram.
[0338] In an exemplary embodiment, creating an image hologram
viewable from a broad viewing angle involves imaging, with a single
optical system, the hologram and the SLM, such that the image SLM
is broader than the object SLM. Projecting the hologram and the SLM
with a single optical system ensures that the image hologram is
viewable from any point in the image SLM.
[0339] The image SLM does not necessarily cover the entire space
from which the entire image hologram is viewable, referred to
herein as a viewability space.
[0340] It may be noted that a paraxial hologram is a specific kind
of a paraxial object, and that other paraxial objects may also be
displayed similarly. Accordingly, in some exemplary embodiments, an
image of a paraxial object is created in a process comprising
imaging, with a single optical system, the paraxial object and at
least a portion of a space from which the paraxial object is
viewable. Optionally, the image of the viewability space is broader
than the viewability space itself. Imaging the paraxial object and
its viewability space with a single optical system ensures that the
image hologram is viewable from any point in the image of the
viewability space.
[0341] In some embodiments, to ensure that the paraxial image is
viewable by a particular viewer, it is sufficient that the viewer's
pupil will overlap with a portion of the image viewability
space.
[0342] In some embodiments, the viewers see and touch
non-holographic three-dimensional images of a scene, for instance,
3-D parallax barrier images. However, at least in some
non-holographic embodiments, each viewer has to choose between
focusing on the finger and focusing on the touched point, because
the finger and the touched point are not necessarily at the same
focus.
[0343] In some embodiments, the viewer's eyes are tracked, and only
holograms that represent the original scene as viewable from the
viewing point of the viewer are projected to the stage, while
images of the SLM are constantly projected to the viewer's
eyes.
[0344] In some embodiments, the viewer's eyes are tracked to
facilitate projecting the SLM onto the viewer's eye. At least in
the embodiments where a partial overlap between the image SLM and
the viewer's eye is sufficient to allow the viewer to see a full
hologram, projecting larger images of the SLM allows tracking the
viewer's eye less accurately. Thus, projecting a large SLM image
may help relax the demands on the tracking system. It
should be noted that while the system as a whole optionally ensures
that the eye of the viewer overlaps with the image SLM, tracking is
not necessarily of the eye itself. Optionally, tracking is of the
face center of the viewer, and the position of the eye is deduced
from the position of the face-center. Optionally, the viewer wears
headphones, and the headphones transmit a signal (or include a
marker) indicative of their position, and the eye position is
determined responsive to the headphone position. Optionally, the
viewer's face is identified in an image of the space around the
display, and the eye position is determined responsive to face
identification. Thus, the term eye tracking as used herein means
tracking any signal that is indicative of the eye's position, not
necessarily tracking the eye itself. It should be noted that in
some embodiments tracking signals indicative of the eye's position
is much easier than tracking the eye itself, and may simplify the
tracking system considerably.
[0345] In some embodiments, the image viewing space is large enough
to cover both eyes of a viewer. In some embodiments, two portholes
are defined, one around each eye, and a different SLM is imaged to
each of the eyes. Optionally, the two different SLMs are two
portions of a single SLM. Optionally, the two SLMs overlapping with
the viewer's eyes create an identical image hologram, optionally,
an image hologram that would be viewable from between the two eyes.
A color hologram can be projected by the same SLM with sequential
illumination by red, green, and blue light. Alternatively, red,
green, and blue light can be projected in parallel onto three
different SLMs, all synchronized mechanically to the same
porthole.
[0346] Optionally, portholes overlap or abut, such that switching
from one porthole to another is as smooth as going from one point
to another within a single viewing porthole.
[0347] In some embodiments, the different holograms are produced
sequentially at a rate that is rapid enough to give each viewer
the impression that he views a continuous image. For example, each
of 3 viewers sees 30 images each second, each displayed for a
period of 1/180 seconds or shorter: producing for two eyes of 3
viewers means serving 3 × 2 = 6 eye channels, and 6 channels × 30
images per second requires 180 sub-frames per second.
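This scheduling arithmetic may be made explicit with a short, non-limiting calculation (the variable names are editorial, not the application's):

    viewers = 3
    eyes_per_viewer = 2
    images_per_eye_per_second = 30            # continuous-motion rate per eye

    eye_channels = viewers * eyes_per_viewer  # 3 x 2 = 6
    sub_frames_per_second = eye_channels * images_per_eye_per_second  # 180
    slot_seconds = 1.0 / sub_frames_per_second  # 1/180 s per sub-frame
    print(eye_channels, sub_frames_per_second, slot_seconds)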
[0348] In some embodiments, each viewer sees a hologram produced by
a different optical system. In some embodiments, two or more
viewers watch holograms produced by the same optical system.
Optionally, the optical system and the SLM are repeatedly tuned to
the needs of different viewers. The SLM is tuned to create
holograms of the scenes currently watched by the viewers; and the
optical system is tuned to project the image of the SLM to the eyes
of the different viewers at their current position.
[0349] In some embodiments of particular practical interest,
a hologram is an image for viewing by a viewer such that the viewer
can touch, for instance with his finger or a Man Machine Interface
(MMI) tool, a portion of the hologram. Optionally, the hologram
includes portions activated by touching.
[0350] In some embodiments, a plurality of viewers can each touch
the hologram each is viewing. For instance, two viewers watch a
hologram of the same house from different viewing portholes, and a
finger of one of them touches the handle of the main door. In some
embodiments, if the second viewer is simultaneously touching the
same place (say, the same spot on the handle), each of the two
viewers sees the two fingers touching the holograms. Optionally, the
two fingers that touch the holograms also touch each other.
Optionally, image manipulation by one user is conveyed to the
viewing of another user, so, for example, zoom and orientation are
shared, if desired.
[0351] In some embodiments, one viewer can touch a hologram, and
another viewer can walk around the hologram (or move the hologram
around). This way, the walking viewer can see the hologram,
together with the touching finger, from different angles. For
example, a tutor can touch an arterial valve in a hologram of a
heart model, and a student can walk around it and see the touched
valve from different angles.
[0352] An aspect of some embodiments of the invention relates to a
design of a projection system, in which an inner element generates
an image or a hologram, which is then projected on the inside of an
imaging mirror, which magnifies and/or aims the image or hologram
to a user. Optionally, the imaging mirror is generally cylindrical
and is optionally curved to provide magnification. In some
embodiments, a viewer is outside the imaging mirror. In other
embodiments, a viewer is inside the imaging mirror, which may be,
for example, mounted on the walls of a room.
[0353] An aspect of some embodiments of the invention relates to a
design of a projection system in which one or more modules generate
images and are rotated to help aim the images at a viewer's eye.
Optionally, the one or more modules rotate or aim at a rotating
mirror. Optionally, the mirror rotates at a substantially fixed
speed or oscillates at a fixed rate.
[0354] An aspect of some embodiments of the invention relates to
the design of a projection system having a modular design such that
each of a plurality of modules can have a line of sight to a same
viewer's eye. Optionally, this shared line of sight is provided by
a rotating mirror. Optionally or alternatively, the shared line of
sight is provided by rotating the modules, and taking note of their
position so that they can act as if they share a same coordinate
system. Optionally, the modules include a plurality of image or
hologram generating modules. Optionally or alternatively, the
modules include at least one viewer tracking and/or user
interaction module. In an exemplary embodiment of the invention,
system abilities are enhanced or reduced by replacing, adding or
removing modules.
[0355] Before explaining at least one embodiment of the invention
in detail, it is to be understood that the invention is not
necessarily limited in its application to the details of
construction and the arrangement of the components and/or methods
set forth in the following description and/or illustrated in the
drawings and/or the Examples. The invention is capable of other
embodiments or of being practiced or carried out in various
ways.
[0356] Reference is now made to the construction and operation of a
hologram generation unit as illustrated in FIG. 1A and to
viewability space of a paraxial object as illustrated in FIG.
1B.
Exemplary Computerized Hologram Generation Unit
[0357] In an exemplary embodiment of the invention, hologram
generation unit 10' comprises a light source 15, and a spatial
light modulator (SLM) 20.
[0358] SLM 20 is connected to a computerized control unit 22, which
controls the optical behavior of each pixel of the SLM
independently of the others, such that light reflected from the SLM
will reproduce the light field wavefront emanating from a scene (24,
in the illustrated case, a house). For that, light received from
scene 24 is detected and data representing it is input into
computerized unit 22, which processes the input data to obtain
required optical behavior of the different pixels, and controls the
SLM accordingly.
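One classical way to compute such pixel values, offered here as a non-limiting editorial sketch rather than as the application's own algorithm, is a phase-only Fourier hologram: attach a random phase to the desired scene amplitude, inverse-transform to the SLM plane, and keep only the phase:

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.zeros((512, 512))
    scene[200:312, 200:312] = 1.0            # toy target: a bright square

    # Field desired in the Fourier (scene) plane, with a random diffuser phase.
    field = np.sqrt(scene) * np.exp(1j * 2 * np.pi * rng.random(scene.shape))
    slm_field = np.fft.ifft2(np.fft.ifftshift(field))
    slm_phase = np.angle(slm_field)          # phase pattern driving the SLM pixels

    # Replaying the phase-only pattern through a forward FFT (optically, a
    # lens) approximately reconstructs the scene in the Fourier plane.
    replay = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * slm_phase)))) ** 2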
[0359] In an exemplary embodiment of the invention, light going
from source 15 is deflected by polarizing beam splitter 25,
passes through a quarter wavelength plate (40), reaches SLM 20, and
is reflected from the SLM to create hologram 35. On the way back to
polarizing beam splitter 25, the beam passes once again through
quarter wavelength plate 40 and continues without deflection
through the polarizing beam splitter.
[0360] Optionally, unit 10' also includes an optical element 70
that changes the wave front of light source 15, such that hologram
35, produced upon interaction with SLM 20, is larger. Optionally,
lens-less magnification is used. In lens-less magnification, a
spherical wave-front beam illuminates an SLM configured for
producing an image from planar wave-front beam illumination. The
image produced with the spherical wave front beam is scaled in
relation to that produced with a planar wave-front beam.
Optionally, the image produced with the spherical wave front beam
is larger than that produced with a planar wave-front beam.
Optionally, the system includes several lenses, and the one in use
is selected to produce an object hologram of a desired size and
location. Optionally, selection of the lens is part of the tuning
of the optical system. Optionally, selection of the lens is part of
the design of the optical system, and the selection is
permanent.
[0361] The technique of lens-less magnification is described in
detail, for instance, in the book "Introduction to Fourier Optics"
by J. W. Goodman, published by McGraw-Hill.
[0362] Unit 10' is only one possible arrangement suitable for
creating a hologram using a coherent light source and a spatial
light modulator. Many other arrangements are known in the art and
may be used in accordance with various embodiments of the
invention. Furthermore, unit 10' may be replaced, at least in some
embodiments, with units for generating non-holographic paraxial
images or other image types. For easier understanding, in the
following description, a unit for generating a paraxial object will
be generally referred to as unit 10 to illustrate that unit 10'
described above is only one possible construction of such a unit.
In an exemplary embodiment of the invention, all units 10, however,
have a light source and a paraxial (or other type of) image forming
unit, such as SLM or Liquid Crystal (LC) panel. In embodiments
where a non-hologram image is used, the display panel may be, for
example, light emitting.
[0363] In one exemplary variation, if the SLM is polarization
sensitive, the design may be varied, for example, such that
polarized light going from source 15 is deflected by a beam
splitter 25 to hit the SLM 20, and is reflected from the SLM to
create hologram 35. Quarter wavelength plate 40 is optionally
omitted.
[0364] In a further alternative design, the light is aimed at the
SLM at a slight off-axis angle, so it is reflected away from the
SLM along a different axis, and no beam splitter is used.
[0365] In some embodiments, a transmissive SLM is used, and the
light is also not reflected by a beam splitter.
Viewability Space of a Paraxial Image
[0366] FIG. 1B illustrates some principles in viewing paraxial
images, such as, for instance, an object hologram 35.
[0367] As object hologram 35 is a paraxial image, each point (for
example, 36, 37) in the paraxial object emits light in a single
direction (h36, h37) and in some narrow angle (α)
around it, creating a cone (C36, C37). Point 36 is seen
from every point within cone C36, and point 37 is seen from
every point within cone C37. Thus, cones C36 and C37
are referred to herein as the viewability spaces of points 36 and
37, respectively.
[0368] The two points 36 and 37 can be seen simultaneously from
every point that forms part of both cone C36 and cone C37; the
overlap forms a viewability space 60, from which both points are
viewable.
Similarly, a space from which the entire hologram is viewable may
be determined, and denoted the viewability space of hologram
35.
[0369] Thus, eye 52, which overlaps with a portion of space 60, can
see both points 36 and 37; eye 54 can see point 37 but not point
36; and eye 56 can see neither of points 36 and 37.
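The cone geometry of FIG. 1B admits a simple numeric check. In the following non-limiting editorial sketch, the 1.5° half-angle (half of the roughly 3° cone mentioned above), the coordinates, and the function name are assumptions for illustration only:

    import numpy as np

    def eye_sees_point(eye, point, axis, half_angle_deg=1.5):
        """True if `eye` lies inside the viewability cone with apex at
        `point`, axis `axis`, and the given half-angle."""
        v = np.asarray(eye, float) - np.asarray(point, float)
        axis = np.asarray(axis, float)
        cos_angle = v @ axis / (np.linalg.norm(v) * np.linalg.norm(axis))
        return cos_angle >= np.cos(np.radians(half_angle_deg))

    # Two hologram points 2 cm apart, both emitting along +z (paraxial).
    p36, p37 = np.array([0.0, 0.01, 0.0]), np.array([0.0, -0.01, 0.0])
    axis = np.array([0.0, 0.0, 1.0])

    on_axis_eye = np.array([0.0, 0.0, 1.0])      # inside both cones, like eye 52
    off_axis_eye = np.array([0.0, -0.035, 1.0])  # sees point 37 only, like eye 54
    print(eye_sees_point(on_axis_eye, p36, axis), eye_sees_point(on_axis_eye, p37, axis))
    print(eye_sees_point(off_axis_eye, p36, axis), eye_sees_point(off_axis_eye, p37, axis))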
Exemplary Optical System
[0370] FIG. 2A illustrates a basic system 200 for projecting an
object hologram 35 according to an embodiment of the invention.
[0371] System 200 comprises a paraxial object generating unit 10,
which is optionally a hologram generating unit, and an optical
system 210. Hologram generating unit 10 generates hologram 35 (the
object hologram), and optical system 210 images object hologram 35
to be seen as an image hologram 235 (the image hologram) standing
on stage 237, which optionally is empty space. Optical system 210
also projects SLM 20 (the object SLM, shown in FIG. 1) to provide
an image SLM 220. Image SLM 220 is optionally larger than SLM 20,
and a viewer looking at image hologram 235 from any point within
image SLM 220 can see image hologram 235.
[0372] FIG. 2B illustrates some principles in viewing a
semi-paraxial image, produced by optical system 210 from a paraxial
object.
[0373] Similar to the paraxial object 35 of FIG. 1B, each point
(for instance, 236, 237) of semi-paraxial image 235 is viewable
from a cone-like space (C236, C237), and both points are
viewable, for instance by an eye 252, in a space 260 where the two
cones overlap. However, unlike the paraxial object 35, the
viewability spaces of the different points composing image 235 are
not parallel to each other.
the parallelism between viewability spaces of the different points,
thus providing a larger and optionally closer viewability space
260. Consequently, in some embodiments, system 210 brings image
hologram 235 closer to the viewer, like a telescope, but also
broadens the viewability space of the hologram, from a relatively
narrow space 60 illustrated in FIG. 1B to a larger viewability
space 260 illustrated in FIG. 2B.
[0374] Viewability space 260 encompasses image SLM 220 (FIG. 2A);
therefore, in some embodiments of the invention, optical system 210
is tuned to form image SLM 220 to overlap with an eye of the
viewer. This way it may be ascertained that the image 235 is
viewable to the viewer. Optionally or additionally, other portions
of viewability space 260 are imaged to overlap an eye of the
viewer.
[0375] FIGS. 3A and 3B illustrate an optional construction of an
optical system (300) that allows projecting an enlarged image of
the SLM (20) and an enlarged image of the hologram (35), thereby
enlarging the hologram and/or broadening the space from which the
hologram is viewable.
[0376] FIG. 3A shows ray tracing of light rays producing the image
hologram; and FIG. 3B shows ray tracing of light rays producing the
image SLM.
[0377] It should be noted that the only requirements of system
210, in some embodiments of the invention, are: (i) to image object
hologram 35 to stage 237; (ii) to image SLM 20 to a plane (or
volume) outside stage 237; and (iii) to allow the exact position of
said plane to be tunable. A large variety of optical architectures
may accomplish this task, and a person of ordinary skill in the art
may easily come up with many alternatives to the construction
illustrated in FIG. 3A or 3B, once faced with the above
requirements.
[0378] In both FIGS. 3A and 3B shown are hologram generating unit
10, including an SLM 20; and optical system 210, comprising an
objective 310 and an eyepiece 320.
[0379] In the embodiment shown, objective 310 has two focal points:
311 and 312; and eyepiece 320 has two focal points: 321 and
322.
[0380] In the embodiment shown, objective 310 and hologram
generating unit 10 are positioned such that the object hologram is
produced by unit 10 between objective 310 and its focal point 311.
Optical units 310 and 320 are positioned at a distance from each
other greater than the sum of their focal lengths, such that focal
point 321 of element 320 is between element 320 and focal point 312
of element 310.
[0381] Optionally, objective 310 comprises a lens and/or a curved
mirror. Optionally, eyepiece 320 comprises a lens and/or a curved
mirror.
[0382] Each of objective 310 and eyepiece 320 can independently be a light
converging element (for instance, a concave mirror) or a light
diverging element (for instance, a convex mirror).
[0383] As shown in FIG. 3A, an image hologram 235 is formed in
front of viewer's eye 250, on stage 237.
[0384] As shown in FIG. 3B, an image of SLM 20 is formed at the
viewer's eye 250.
[0385] Thus, FIGS. 3A and 3B together show that optical system 210
images hologram 35 to stage 237 and SLM 20 to partially overlap
with the viewer's eye 250.
[0386] In an exemplary embodiment of the invention, optical system
210 is tunable to change the position at which image SLM is formed,
for instance, by changing the distance between the two optical
elements 310 and 320. Such a tuning might also change the position
at which image hologram 235 appears. If desired, this may be
compensated for by computing unit 22 (FIG. 1A), which may drive SLM
20 to form object hologram 35 in a different place without moving
SLM 20.
[0387] In the embodiment shown, the eyepiece is a hollow mirror;
however, the eyepiece can also be a transmissive element (e.g., a
lens), optionally one which also changes the angle of light (e.g.,
a prism), so that the hologram does not overlie the image
generation system.
Optional Shapes for Objective 310
[0388] Objective 310 is optionally a mirror having a form of a
paraboloid of revolution, where the axis of the revolution is a
symmetry axis of the parabola. Another optional shape is a
paraboloid of revolution, where the axis of the revolution is
perpendicular to the symmetry axis of the parabola. Optionally, the
shape of objective mirror 310 is of a spherical cap. Optionally,
the shape of objective mirror 310 is of a slice of any of the
above-suggested shapes. A spherical cap is optionally preferred
because it is easier to manufacture, and spherical aberrations do
not play a significant role in the system, because hologram 35 is
paraxial.
[0389] Optionally, eyepiece 320 has any of the above shapes that
the objective may have. An eyepiece with a particularly useful
shape is described below, under the heading "exemplary
eyepiece".
[0390] In some embodiments, objective 310 is a cylindrical mirror
or an arcuate portion thereof. As noted, such a mirror may be
parabolic rather than flat.
Exemplary Dimensions of an Optical System
[0391] In an exemplary embodiment, the setup illustrated in FIGS.
3A and 3B is used to provide an enlarged image SLM using a lens 310
having a focal length of -50 cm; and an eyepiece 320 placed 100 cm
from lens 310 at a first side (in the figure, right) of lens 310.
The SLM stands a few millimeters or so from lens 310, at a second
side of lens 310.
[0392] The SLM receives input so as to generate the Fourier
transform of the object hologram, and therefore, an object hologram
is found at the focal plane of lens 310, 50 cm to the left of lens
310. The size of the object hologram is similar to that of the SLM
portion used for forming the hologram.
[0393] Two images are formed by the eyepiece:
[0394] An image of the object hologram is formed 150 cm to the right of the eyepiece 320 and has the same size as the object hologram; and an image of the SLM is formed 200 cm to the right of eyepiece 320 and is three times larger than the SLM.
[0395] When the viewer's eye overlaps with the image SLM, the
viewer is 50 cm from the image hologram.
[0396] This example shows one setup that creates an image hologram that has a larger viewability space than that of the object hologram. The image hologram is viewable at least from anywhere at the image SLM, which is 3 times larger than the SLM and 50 cm away from the image hologram.
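As a cross-check of this kind of bookkeeping, the following minimal Python sketch applies the thin-lens equation. The eyepiece focal length is not given in the text; the 75 cm value below is an assumption, chosen so that an object 150 cm away is imaged at 150 cm with unit magnification, as the object hologram is above. Under the same assumption the SLM, at about 100 cm, comes out 3 times enlarged (matching the magnification above), though the computed image distance depends on the assumed focal length and need not match the quoted 200 cm exactly.

def thin_lens_image(u_cm, f_cm):
    """Image distance and lateral magnification for an object u_cm in
    front of a thin lens of focal length f_cm (1/v = 1/f - 1/u
    convention; v > 0 means a real image)."""
    v = 1.0 / (1.0 / f_cm - 1.0 / u_cm)
    return v, -v / u_cm

F_EYEPIECE_CM = 75.0  # assumed; not stated in the text

# Object hologram: 50 cm left of lens 310, i.e. 150 cm from the eyepiece.
v_holo, m_holo = thin_lens_image(150.0, F_EYEPIECE_CM)  # -> 150.0, -1.0

# SLM: a few mm from lens 310, i.e. about 100 cm from the eyepiece.
v_slm, m_slm = thin_lens_image(100.0, F_EYEPIECE_CM)    # -> 300.0, -3.0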
[0397] In some embodiments, generating an image SLM larger than the image hologram results in a viewability space which diverges at larger distances from the image, until it reaches the image of the SLM, and then it converges. In some such embodiments, the need to precisely track the distance of the viewer from the image is alleviated, and so is the need to project the image of the SLM exactly to the viewer's eye. However, in some such embodiments information on the distance between the viewer and the image is still helpful to estimate the size of the viewability space and determine when the viewer moves from one viewing porthole to another. Ensuring that the image SLM is in the vicinity of the viewer's eye, where the viewability space is broadest, optionally relaxes the requirements on the azimuthal tracking.
Exemplary Tuning of Image SLM Locations
[0398] For displaying an image with limited viewability space to a moving viewer, the image SLM should follow the viewer's eye. Some exemplary embodiments that provide such following are described below.
[0399] In some of the described embodiments, changing the location
of the image SLM also changes the location of the image; however,
the image movement is small in comparison to the SLM movement, and
can be compensated using the limited optical power of the SLM.
[0400] FIG. 4 is a schematic illustration of one possible structure
of an optical system 400, designed to allow tuning of the
projections of the SLM and the hologram to the locations of the
stage and the viewer's eye, respectively.
[0401] System 400 comprises all the components of system 210,
including hologram generating unit 10, objective 310 and an
eyepiece 320. In the embodiment shown, objective 310 is a curved
mirror and eyepiece 320 is a convex lens.
[0402] System 400 further includes a tuning unit 410, for facilitating the control of the locations to which the SLM and the hologram are projected. Tuning unit 410 is illustrated in the figure as a V-shaped mirror comprising mirroring surfaces 420 and 430, fixed at, for example, 60° from each other, but many other realizations are apparent to a skilled person.
[0403] As shown, a light ray 405 going from objective 310 to
eyepiece 320 through v-shaped mirror 410 is reflected first from
objective 310 to mirror 420, and from there to mirror 430, from
which the light is reflected towards eyepiece 320.
[0404] Moving the mirroring surfaces 420 and 430 back and forth in
the direction of arrow 425 changes the distance between eyepiece
320 and the SLM image. Thus, moving mirroring surfaces 420 and 430
along arrow 425 allows for following eyes of a viewer who moves
away from the hologram or comes near the hologram along the optical
axis.
[0405] Optionally, v-shaped mirror 410 is omitted, and elements 310
and/or 320 are moved in relation to each other to get a similar
effect. Any other known means for changing the location of the back
focal length of system 400 may be also useful to replace v-shaped
mirror 410 in moving the image SLM along the optical axis.
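As a rough numerical illustration of this tuning (a sketch only): pulling the V-shaped mirror back by delta lengthens the folded path by 2·delta (once into the V and once out of it), which shifts the object distance seen by the eyepiece and hence the image-SLM location. The thin-lens eyepiece and the 75 cm focal length below are the same assumptions used earlier, not values from the text.

def image_slm_distance(u0_cm, delta_cm=0.0, f_cm=75.0):
    """Distance from the eyepiece at which the SLM image forms after
    pulling the V-mirror back by delta_cm; u0_cm is the unfolded
    SLM-to-eyepiece distance at delta = 0."""
    u = u0_cm + 2.0 * delta_cm  # folded path grows twice as fast as the mirror moves
    return 1.0 / (1.0 / f_cm - 1.0 / u)

# A 5 cm mirror shift pulls the image SLM from 300 cm to about 236 cm:
print(image_slm_distance(100.0))       # -> 300.0
print(image_slm_distance(100.0, 5.0))  # -> ~235.7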
[0406] Rotating one of the mirrors making up the v-shaped mirror 410 in the directions shown by arrows 426 (i.e., around an axis parallel to the line of contact between surfaces 420 and 430, and in the V plane) allows for following the tracked eyes of a viewer looking up and down.
[0407] In an exemplary embodiment, v-shaped mirror 410 is mounted
on a motor (not shown) that moves the mirrors as shown in arrows
425 and/or 426.
[0408] To follow the eyes of the viewer when they move horizontally
out of the image hologram viewability space (in the figure: into
the page and out of the page), lens 320 is optionally moved to face
the viewer.
[0409] Optionally, controlling the position of the image hologram
comprises computing and generating the holographic object such that
the image hologram is generated exactly at a desired location, for
instance, exactly where it is seen by another viewer.
[0410] Optionally, the optical system is tuned to generate the
image SLM in the vicinity of the viewer and the computational unit
computes the object hologram to be formed such that the image
hologram is exactly at the desired place. Optionally, such
computation is omitted, and the accuracy of the image hologram
position is compromised.
[0411] FIG. 5 is a schematic presentation of a system 500, useful for showing a different hologram to each eye of a viewer. System 500 is similar to system 400, except that here there is an additional flat mirror, 510, which rotates or revolves around its axis 515.
[0412] In one embodiment, mirror 510 moves right and left through an angle of, for instance, 3 degrees, and the SLM creates one hologram in the first half of each movement and another in the second half. This way, each eye sees a different hologram. Optionally, the movement is at such a frequency that each eye perceives the hologram projection as if it were continuous. Optionally, the central position around which the mirror moves is changed to follow the center of the viewer's face.
[0413] In another embodiment, mirror 510 revolves around its axis 515 at a frequency of at least about 15 Hz, for example, 24 Hz, 30 Hz, 45 Hz, or 60 Hz, and the SLM creates one hologram in one half of the revolution and another in the second half. Optionally, the SLM creates one hologram at a first revolution and a second hologram at a second revolution. For instance, a mirror revolving at 30 Hz with an SLM updated twice per revolution has a similar update rate to a mirror revolving at 60 Hz with an SLM updated once per revolution.
[0414] The switching points, at which the SLM changes from creating one hologram to creating another, are optionally set to times when neither eye overlaps with the image SLM. This optionally happens twice in each revolution: once when the image SLM is projected to between the eyes of the viewer, and once when the image SLM is projected away from the viewer's eyes.
[0415] Optionally, each eye overlaps with an image of another SLM,
and each SLM changes the object hologram once per revolution,
before it overlaps with the viewer's eye.
[0416] Optionally, the image SLM overlaps with both of the viewer's
eyes at once.
[0417] Another difference between the embodiments shown in FIGS. 4 and 5 is that in FIG. 4 eyepiece 320 is a lens, and in FIG. 5 it is a curved mirror. However, this difference is incidental: the embodiment of FIG. 4 can work just as well with a mirror as element 320, and the embodiment of FIG. 5 can work just as well with a lens as element 320.
[0418] Optionally, eyepiece 320 is mounted on a base (520), which is rigidly connected to axis 515 with a bridge (525), such that eyepiece 320 follows the movements of flat mirror 510. In this option, eyepiece mirror 320 optionally has any of the forms suggested above for objective mirror 310, independently of the shape of objective mirror 310. Another possible shape of eyepiece 320 is discussed in the context of FIGS. 6A and 6B, below.
[0419] Thus, in one embodiment, all the image forming components move together to aim at a viewer's eye. Optionally, the objective mirror moves as well and thus need not span a complete 360 degrees. Moving the objective mirror and/or the image generating module at such speeds may be energy inefficient or noisy, and is thus not practiced in some embodiments.
[0420] In another embodiment, mirror 510 rotates at half the
angular rotation of eyepiece 320, to compensate for the doubling of
angular speed caused by reflection.
Single Viewer
[0421] In an exemplary embodiment of the invention, system 500 is
used for imaging a hologram to a single viewer, such that the
hologram has a large image and broad viewability space.
[0422] In one embodiment, the location to which the hologram image is projected (the stage) is fixed, and the viewer is free to look at the hologram from different places, seeing the hologram from everywhere. In this embodiment, an eye of the viewer is tracked, and the viewability space of the hologram is projected to follow the eye.
[0423] Tracking the eye is performed, for instance, by an eye-tracking system having good enough accuracy to tell when the eye is in the hologram viewability space and when the eye is out of said space. The tracking system may be any commercially available eye-tracking system having a suitable accuracy, for instance, the TrackIR™ head tracking system, available from Natural Point of Corvallis, Oreg., USA. Optionally, the tracking system has common parts with system 500, as will be described below.
[0424] The position of the viewer's eye, as detected by the tracking system, is communicated to a computation unit, which determines how the system should be tuned to project the image SLM into the vicinity of the viewer's eye, for instance, the exact position of v-shaped mirror 410, the exact point to which mirror 510 is facing, or any other tuning required in accordance with the particular setup being used.
[0425] When the viewer's eye moves, its movement is tracked by the
tracking unit, and the optical system is controlled to keep
projecting the image SLM in the vicinity of the viewer's eye. This
way, the viewer sees the entire hologram from wherever he is, as
long as he is watching the stage.
[0426] In some embodiments, the computing unit controls the SLM to
create holograms that reproduce the scene as would be seen from the
viewing angle of the viewer. Optionally, the hologram is displayed
using the revolving mirror all around, such that all viewers see
the same hologram, no matter where they stand, and this hologram
changes responsive to the movements of one of them.
Single Viewer, a Different Hologram to Each Eye
[0427] In some embodiments of the invention, the viewability space
of a single hologram overlaps with the viewer's two eyes. In such
embodiments, the viewer sees a full 3-D hologram, since the
holographic nature of the image looked at provides all the depth
cues of the original scene. These embodiments are based on image
SLM that are about 6-8 cm wide, so as to cover two eyes of an adult
human viewer.
[0428] In many embodiments, however, the image SLM is smaller, having a width of about 5 to about 20 mm. In these embodiments, a single hologram can be seen only by a single eye, and presenting holograms to the two eyes requires presenting two holograms, one to each eye.
[0429] In some exemplary embodiments of the invention, the two
holograms are of two different aspects of the same scene: one that
would have been seen by a right eye of the viewer and the other
that would have been seen by the left eye of the viewer, had the
scene been on the stage. This way, the viewer can have a better
depth feeling of the scene.
Single Hologram
[0430] In some embodiments, the system projects in all directions,
for instance, 360.degree. around, a single hologram. Such a system
may be made simpler and less expensive than a walk-around system as
described below. The system can fully function without eye-tracking
and without being tuned to a particular viewer. Additional
advantage of such a system is that the computation required for
controlling the SLM is very simple in comparison to the computation
required for providing each of a plurality of users a full
holographic view.
[0431] In some embodiments, the computing unit controls the SLM to create a single hologram, which is made of two overlaid holograms. Optionally, the viewer is equipped with spectacles that filter out a different image for each eye. One such embodiment uses 2-color anaglyph, which is itself well known in the art.
[0432] In some embodiments, a single hologram is projected such that it can be viewed from different heights. In some such embodiments, the same hologram is projected to different heights, and at each height 360° around. For example, the hologram is multiplied with prisms, such that at several different heights viewers see the same hologram.
Multiple Viewers and/or 360° Walk Around
[0433] The embodiment of FIG. 5 allows showing holograms to viewers that look in the direction of eyepiece 320. To allow the viewer to walk around the stage, or to present holograms to different viewers viewing the stage from different places, eyepiece 320 may be made rounded as shown pictorially in FIG. 6A.
[0434] FIG. 6A is a pictorial presentation of a system 600 showing an image hologram (635) of a heart floating in the air. The figure shows mainly eyepiece 320, which optionally has an inner reflective surface shaped as a paraboloid of revolution. In this option, each vertical cross-section of eyepiece 320 has the shape of a parabola, and each horizontal cross-section has the shape of a circle. Alternatively, eyepiece 320 is a slice of a sphere; horizontal cross-sections are then circles, and vertical cross-sections are circular arcs.
[0435] FIG. 6B is a schematic illustration of system 600. Eyepiece
320 is represented by two opposite vertical cross-sections of the
rounded eyepiece pictorially depicted in FIG. 6A. The figure also
shows a cylindrical mirror 605, used to shift the light away from
the other optical elements and into eyepiece 320, such that the
other optical elements don't shade eyepiece 320 from receiving
reflections from the revolving mirror 510. The other optical
elements shown in FIG. 6B are similar in structure and function to
those shown in FIG. 4 or FIG. 5.
[0436] Mirror 605 may be omitted, and the ray shifted by other means. For example, mirror 510 may be inclined with respect to the axis on which it revolves, as in FIG. 6C, which for convenience does not show eyepiece 320.
[0437] Cylindrical mirror 605 may be replaced by one or more plane
mirrors.
[0438] Distortions introduced by cylindrical mirror 605 or by a
plane mirror replacing mirror 605 are optionally corrected by
pre-distortion of an image generated by the SLM.
[0439] It is noted that use of the mirror 605 optionally constrains the practical length of the optical path from the SLM to the viewer, and omitting mirror 605 optionally removes the constraint and removes the need for pre-distortion compensating for the mirror 605.
[0440] The eyepiece 320 may be replaced by one or more plane
mirrors.
[0441] Distortions introduced by eyepiece 320 are optionally
corrected by pre-distortion of an image generated by the SLM.
[0442] When using plane mirror replacements for the mirror 605,
and/or a plane eyepiece 320, locations where the plane mirrors abut
each other are optionally not used for projecting the image. An
optional encoder detects when the optical path crosses such
locations and an image is not projected during that time.
[0443] Gaps may optionally be present in spatial coverage of the
optical system, and an optional encoder optionally detects when the
optical path crosses such gaps, and an image is not projected
during that time.
[0444] Optionally, mirror 510 revolves around its axis 515 at a
frequency of at least about 24 Hz, and the SLM creates a different
hologram whenever the image SLM is projected to a different
location, be it a different eye of the same viewer, an eye of
another viewer, or the same eye of the same viewer after the viewer
moved.
[0445] In some embodiments of the invention, both faces of mirror 510 are reflecting, such that in each revolution of the mirror images can be projected all around the mirror, spanning a 360° projection angle, or a smaller angle, such as above 150°, above 170°, above 180°, above 220°, above 260° or intermediate angles. Optionally, at the center of the mirror 510 there is a non-reflective point, to eliminate order 0 reflections (that is, reflections of the light source). Order 0 reflections may similarly be eliminated by blocking light from reaching the center of mirror 510, or from being reflected from the center of mirror 510, in any other way.
[0446] In an exemplary embodiment of the invention, the image being viewed from the different angles is not distorted (e.g., as it would be on a flat panel display or other standard 2D imaging system).
[0447] It is noted that the methods and apparatus described herein may also be used for smaller angle displays, especially floating in the air, for example, between 10° and 150°, for example, less than 100°, less than 80°, less than 45°, less than 30° or intermediate angles.
[0448] In some embodiments, different viewers and/or different eyes are shown different types of images; for example, one eye may see a 2D image and one eye may see a 3D or hologram image. Optionally, the different images are created by different image creation modules in the system, which rotate into place and/or whose line of sight is provided in time by a rotating flat mirror (e.g., 510).
Exemplary Light Source
[0449] In exemplary embodiments of the invention, the light source provides collimated light (and in some embodiments coherent light), so as to generate a paraxial object. Non-limiting examples of collimated light sources include lasers and LEDs.
[0450] In some embodiments, a light source supplying 10-100 µW to the eye is used.
[0451] This light intensity is optionally chosen for an embodiment where the distance between revolving mirror 510 and eyepiece 320 is 1 m and the mirror revolves at 30 Hz. The considerations for choosing the above-cited light intensity may be summarized as follows:
[0452] To display an image to a viewer with such a system, the light power at the viewer's pupil should be about 10 µW in a dark room and up to 100 µW in a lighted room.
[0453] The estimation of 10 µW is based on the following considerations:
[0454] In the discussed embodiment, a light ray enters the pupil for 5 µs at each revolution. This estimation is based on the assumption that in a lighted room the pupil is about 1 mm in diameter. Since the distant point of a light ray going from the mirror to the eyepiece travels, in each revolution of the mirror, 6.28 m (2πR), and the mirror revolves 30 times per second, the distant point travels approximately 30 × 6.28 ≈ 190 m = 190,000 mm each second.
[0455] Therefore, 1 mm is scanned by the beam in 1/190,000 sec, which is approximately 5 µs.
[0456] To provide a clear image, 1 nW of light should arrive at the pupil for 50 ms.
[0457] Since the light sweeps the eye during 5 µs, the system must provide all the light in 5 µs rather than in 50 ms. Thus, a power 10,000 times larger than 1 nW is required: 1 nW × 10,000 = 10 µW.
[0458] The above estimation is suitable for displaying an image under average indoor lighting conditions. If the room is more brightly lit, higher light intensity is optionally supplied, for instance, light intensity that is 2, 5, or 10 times larger.
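The estimate above can be reproduced directly in a few lines of Python, using only the values quoted in this section (1 m mirror-to-eyepiece distance, 30 Hz revolution, a 1 mm pupil, and 1 nW over 50 ms for a clear image):

import math

R = 1.0          # m, revolving mirror to eyepiece
f_rev = 30.0     # Hz, mirror revolution rate
pupil = 1e-3     # m, pupil diameter in a lighted room

sweep_speed = 2 * math.pi * R * f_rev   # ~188 m/s at the eyepiece
dwell = pupil / sweep_speed             # ~5.3e-6 s per revolution

P_cw, T_cw = 1e-9, 50e-3                # 1 nW for 50 ms gives a clear image
P_needed = P_cw * T_cw / dwell          # same energy, delivered in the short dwell
print(f"dwell = {dwell * 1e6:.1f} us, power = {P_needed * 1e6:.1f} uW")
# -> dwell = 5.3 us, power = 9.4 uW, i.e. the ~10 uW quoted above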
Exemplary Eyepiece
[0459] In exemplary embodiments, eyepiece 320 is a hollow body,
having an optically active inner surface, for instance, a curved
reflective inner surface. Optionally, the optically active inner
surface is a reflective surface, for instance, of a curved mirror.
Optionally, the inner surface is a surface of a lens.
[0460] Optionally, the hollow eyepiece converges light arriving from mirror 510. Optionally, hollow eyepiece 320 defines, at each point, two finite radii of curvature. Optionally, the two radii of curvature are equal to each other, as in a spherical shell.
[0461] Optionally, the reflective inner surface of the eyepiece is a closed surface. Optionally, it is an open surface, and allows viewing images from a limited viewing angle. For example, in some embodiments the eyepiece has the shape of a solid of revolution formed by revolving a 60° arc half or three-quarters of a full circle around an axis, such that the inside of the arc is generally aimed at the axis. These embodiments may allow seeing the images only from 180° or 270° around the stage, since a viewer watching the stage from a position where there is no eyepiece at the background of the stage does not see the image, unless the eyepiece moves or is moved, as it may be in some embodiments.
[0462] In some embodiments, the shape of the hollow eyepiece is of a solid of revolution formed by revolving an arc around an axis at which the concave side of the arc is generally aimed. Optionally, the distance between the axis of revolution and the arc is equal to the arc's radius. Alternatively, the distance between the axis of revolution and the arc is different from the arc's radius.
[0463] In some embodiments, the hollow eyepiece has a form of a
solid of revolution obtained by revolving an arc around an axis,
the distance of which from the arc is different from the arc's
radius.
[0464] In some embodiments, the hollow eyepiece is parabolic, for instance, forming a portion of a paraboloid of revolution, formed by revolving a parabola around an axis perpendicular to the parabola's axis of symmetry.
[0465] Spherical eyepieces may be easier to construct than parabolic eyepieces. On the other hand, a parabolic eyepiece may be less susceptible to aberrations. However, in some embodiments the aberrations are small, or even negligible, due to the paraxial nature of the objects and images treated by the system. Optionally or alternatively, such aberrations, or other aberrations in the optical system and/or the viewer's visualizing ability, are compensated for by generating an adapted image, which provides pre-compensation.
[0466] Optionally, revolving mirror 510 is at the center of the
eyepiece, directing light to different portions of the eyepiece as
it revolves.
[0467] Optionally, the stage is inside the eyepiece.
[0468] Optionally, the entire optical path is inside the
eyepiece.
[0469] In some embodiments, to see the image, the viewer has to look at the stage with at least a portion of the eyepiece's reflective surface in sight.
[0470] Optionally, the viewer looks at the eyepiece from
outside.
[0471] Optionally, the viewer is inside the eyepiece, for instance, sits, stands, walks or lies down in a spherical room with a reflecting wall and/or ceiling, or with an eyepiece and/or portions of an eyepiece mounted thereon. Optionally, a visual tracker images the eyepiece and uses the image to determine where an image may be projected and where not, and/or is used for adjusting imaging parameters, such as distance and light level. Optionally, such an eyepiece includes one or more markings, such as lines or crosses, visible by such a tracker camera or other imaging module in the display system.
Exemplary Modular Projection System
[0472] In some embodiments of the invention, two or more SLMs are used to provide different holograms to different eyes, viewers, and/or locations. For example, in one embodiment, there are two different SLMs, each dedicated to creating object holograms for one eye of a viewer. In some embodiments, each SLM creates several object holograms sequentially, for instance, up to 6 different holograms for three different viewers, one for each eye, and the revolving mirror brings each of the holograms to the stage and each of the SLM images to the eye for which the hologram was generated.
[0473] Optionally, each SLM has its own optical system 210, and all the systems are co-tuned to provide the image holograms to the same stage, optionally to the very same point on the stage. This option may be advantageous, for instance, when the viewers are limited to some predefined regions, such that full coverage of 360° is not required.
[0474] In some embodiments of the invention, two or more SLMs are
used simultaneously, each with its own objective, and all with a
common eyepiece and revolving mirror. It is noted that this allows
overlaying the images generated by one SLM on the image generated
by another SLM, even if of different type and/or color, at the
viewer's eye.
[0475] One such arrangement is depicted in FIG. 7, which is a pictorial presentation of a projection system 700, comprising two SLMs (20' and 20'') having a common eyepiece 320 and a common revolving mirror 510. In the following, the parts of the projection system that are dedicated to a single SLM, including the SLM itself, are referred to as a unit cell. In FIG. 7 each unit cell is shown on its own base (710, 720, 730), allowing a modular construction of system 700.
[0476] In some embodiments, each unit cell is dedicated to
generating holograms to be viewed by a different viewer or by a
different plurality of viewers, associated with the unit cell.
Optionally, the association of specific unit cell(s) to specific
viewer(s) does not change during operation.
[0477] Optionally, the viewers sit in predetermined places, for
example, in fixed chairs arranged in concentric circles around the
stage. In this case adjusting the optical length, for instance with
element 410, is only optional, and may sometimes be omitted.
Similarly, face detection/tracking may be omitted.
[0478] Optionally, the association of a unit cell to a viewer changes in accordance with the location of the viewers. For instance, it may be convenient if switching from one viewer associated with a certain unit cell to another does not require much movement of the v-shaped mirror. However, in some embodiments, when the distance between a viewer and the stage changes, the v-shaped mirror has to move. Therefore, in some embodiments, when two viewers associated with one SLM move apart, such that one of them is much closer to the stage than the other, one of the viewers is associated with another unit cell, obviating the need to repeatedly move the v-shaped mirror large distances back and forth.
360° Sit-Around Holographic TV
[0479] While the embodiments described in the previous section allow each viewer (or even each eye of each viewer) to watch a different hologram or video stream, some embodiments of the invention allow all the viewers to watch the same hologram with both eyes. This way, many people can gather and have exactly the same experience, as in watching a standard TV or movie, but with holographic pictures. Although a full feeling of depth requires seeing a different hologram with each eye, seeing the same hologram with both eyes provides some depth feeling, which sometimes may be better than that obtainable with some auto-stereo displays.
[0480] Thus, in some embodiments of the invention, a single hologram is projected for the entire duration of each cycle of the revolving mirror, and people sitting all around the stage can watch the same holographic video stream.
[0481] Such a system does not require angular tracking of the
viewers' faces around the stage; knowing the viewer's distance from
the stage may be sufficient.
[0482] Optionally, in a sit-around configuration, the viewers sit in concentric circles around the stage, such that each group of viewers is at a different distance from the stage. Optionally, the seats are fixed, such that the distances are predetermined.
[0483] In some such embodiments, there is one unit cell dedicated
to each circle of viewers, such that online optical path adjustment
may be omitted.
[0484] In the present description and claims, an action is said to
be taken online if it is taken when the viewer is watching.
Optionally, online actions change what the viewer is seeing, the
quality of the picture, and/or the orientation of the picture.
[0485] In some embodiments, the concentric sitting arrangement is utilized to present different viewing streams to different viewers. For instance, each circle can see a different movie. This arrangement is particularly simple, since there may be one unit cell showing each movie, and the system will show a different movie, coming from a different SLM, to the viewers sitting in each of the different circles.
[0486] Optionally, different areas of the same circle are shown different movies. It should be noted that different content shown to different viewers can be of similar nature (for instance, two movies) or of different nature (for instance, one viewer watching a movie and another seeing a 2D still image).
[0487] Optionally, the image projected is a 2D image, for example a conventional TV show, and the system allows the TV to be watched from up to 360 degrees around it. Optionally, different content can be projected to different areas (e.g., 2, 3, 4, 5, 6 or more different contents/channels). For example, viewers watching the display from an angle of between 0° and 90° can watch the sports channel, viewers that watch the display from a viewing angle of 91° to 180° can watch the news channel, and so forth. Optionally, 2D image streams are displayed, for example, using paraxial LCD displays, which may require optical manipulation as described herein, with conventional control of the LCD. When different viewers look at different media streams, it may be preferable to provide the audio through personal headphones. It is noted that angles smaller than 360° may be supported as well, for example, 100°, 160°, 180°, 210° or smaller or larger or intermediate angles. For example, a viewing angle (effective, not necessarily instantaneous) can be, for example, at least 10°, 20°, 30°, 40°, 50° or intermediate angles. It is a particular feature of some embodiments of the invention that a small viewing angle image generator is operated to provide a wide viewing angle, for example, larger in area by a factor of at least 5, 10, 30 or 100. It is a particular feature of some embodiments of the invention that, rather than generating a hologram for all parts of a space where a viewer might be, a hologram (or other image) is calculated and/or displayed only for portions of space where a user is expected to be.
[0488] In some embodiments of the invention, a same general content
(e.g., a house) is provided, but different layers of data (e.g.,
plumbing, electrical cabling) are provided at different angles
(e.g., changes in rotation and/or azimuth). Optionally, there is a
seamless change in transparency of one or more data types as the
viewing angle changes.
[0489] It is a particular feature of some embodiments of the
invention that a plurality of images are shown to one or more
viewers substantially simultaneously, for example, within less than
a second, within less than 0.1 seconds, at video rate or faster. In
an exemplary embodiment of the invention, the system generates (and
projects) at least 10, at least 20, at least 30, at least 40, at
least 80, at least 150 or intermediate numbers of different
images/holograms per second, using, for example, 1, 2, 3, 4, 5, 6
or more image generation modules.
Exemplary Dealing with Smearing
[0490] To appreciate possible smearing problems, it may be useful
to consider an embodiment, where the image SLM is about 2 cm wide,
the viewer is at about 1 m distance from the revolving mirror
(implying about 6 m in circumference), and the revolving mirror
revolves at 30 Hz. In this particular embodiment, if a single image
is constantly and continuously projected, the image SLM sweeps the
viewer's eye at a linear velocity of about 180 m/sec; and this
sweeping might cause smearing of the hologram.
[0491] One way of dealing with such possible smearing is by activating the system only for the small portion of the time at which the mirror directs an image SLM to the vicinity of the viewer's eye (hereinafter, the projection period). In the above example, the projection period is about 2 cm/6 m = 1/300 of the mirror revolution period. The mirror revolution period, at 30 Hz, is 1/30 sec. Therefore, the projection period in this example is 1/9000 sec, which is about 100 µs. Activating the laser for only a small portion of this time, say, between about 1 and about 20 µs, usually reduces all the smearing. Optionally, the laser is activated several times each projection period, for example, 5 activations of 2 µs with 18 µs of non-activation in between. Optionally, the system is tuned such that in each mirror revolution, each eye is exposed to the system only once. In the above example this may be accomplished, for instance, by supplying a pulse of 20 µs width every 80 µs.
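A short Python restatement of this timing estimate, using only the numbers quoted above (a 2 cm wide image SLM, about 6 m of circumference at the viewer, and a 30 Hz mirror):

slm_image_width = 0.02   # m
circumference = 6.0      # m, viewer about 1 m from the revolving mirror
f_rev = 30.0             # Hz

rev_period = 1.0 / f_rev                                        # 1/30 s
projection_period = (slm_image_width / circumference) * rev_period
print(f"projection period = {projection_period * 1e6:.0f} us")  # ~111 us

# One anti-smearing schedule from the text: five 2 us laser pulses,
# each followed by 18 us of darkness, inside each ~100 us window.
pulse_on, pulse_off, n_pulses = 2e-6, 18e-6, 5
assert n_pulses * (pulse_on + pulse_off) <= projection_period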
[0492] In some embodiments, the laser is activated in pulses.
Additionally or alternatively, the laser light is continuous and a
chopper is used to chop the light into shorter flashes.
[0493] Alternatively or additionally, the light source is shaped as a line, for instance, a vertical line that scans the SLM, for instance, horizontally. In this embodiment, each vertical illumination defines a sub-hologram, which describes the scene from a different and extremely narrow angle. The sub-hologram reaches the eye as a vertical slit window. Optionally, the scanning covers the entire SLM, thus presenting to the vicinity of the eye the entire SLM image, but the eye will see only one vertical slice of the SLM, the one that falls exactly into the eye.
[0494] One potential way to filter out the smearing is by using a
slit illumination source, using a slit 513 (see FIG. 8) in the
revolving mirror 510'. The revolving mirror 510' is optionally
partly reflective, and a slit 513 in the revolving mirror 510'
sends light out from the mirror axis 515 to the lens 310, from
there to the SLM, from there back to the lens 310, from there back
to the slit 513, from there reflecting to the eyepiece 320, and to
a viewer's eye.
[0495] Systems that follow a single viewer without revolving the
mirror are optionally strobed conventionally to overcome smearing
problems, as in such systems the smearing is less pronounced.
Exemplary Eye-Tracking Units
[0496] Eye tracking systems are well known in the art, and any such
known system may be suitable for use with embodiments of the
invention, as long as the tracking quality is compatible with the
size of the SLM image: the tracking should be good enough to allow
estimation of the position of the eye in each direction within a
tolerance smaller than the size of the image viewability space in
the same direction.
[0497] In an exemplary embodiment of the invention, the tracking unit provides only the position of the center of the face of the viewer, and the positions of the eyes are calculated based on knowledge of the face of the viewer, or using general information on the distance between eyes of viewers. Such information may be provided for different viewing groups, for instance, children, grown-ups, etc. Knowledge may be acquired, for instance, by measuring the face of the viewer prior to tracking. Optionally, the measuring comprises standing at a known distance from the tracking unit and looking into the tracking unit. Then, the tracking unit detects the eyes of the viewer and the distance between the eyes as seen from the known distance, and uses this information to compute, during tracking, the distance between the tracking unit and the viewer, responsive to the detected distance between the viewer's eyes.
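A hedged sketch of this range-from-eye-separation idea, assuming a simple pinhole camera model (the text does not specify one), under which the apparent separation between the eyes scales inversely with distance:

def viewer_distance(ref_distance_m, ref_separation_px, seen_separation_px):
    """Estimate the viewer's distance from the currently detected eye
    separation, given a one-time calibration at a known distance."""
    return ref_distance_m * ref_separation_px / seen_separation_px

# Calibration: eyes 80 px apart when standing 1.0 m from the tracker.
# If the eyes now appear 40 px apart, the viewer is about 2.0 m away:
print(viewer_distance(1.0, 80.0, 40.0))  # -> 2.0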
[0498] In an exemplary embodiment of the invention, the tracking
unit is built to inherently have the same coordinate system as the
optical system. For example, one of the unit cells depicted in FIG.
7 can hold tracking unit optics.
[0499] FIG. 9 is a schematic illustration of an exemplary tracking unit 800, useful in a tracking unit cell. Tracking unit 800 comprises a light source 810 and a light detector 820, positioned at the back focal plane of tracking unit 800. Optionally, tracking unit 800 also comprises a filter 805, filtering out light of wavelengths other than that provided by light source 810.
[0500] The light provided by light source 810 and detected by detector 820 is of a kind selectively reflected by eyes. It is known in the art to use infrared light for such purposes.
[0501] In an exemplary embodiment of the invention, light from light source 810 is split by beam splitter 825, such that a portion of the light goes to detector 820 and another portion goes to the viewer's eye 830. Light reflected from eye 830 returns to the detector, and is detected. This may be the case if coherent light is used, for detection of interference between direct light and reflected light, or if reflected light is used to provide a baseline of the instantaneous light level. In other embodiments, no light is reflected directly back to the detector; rather, only light reflected by the eyes or face (or an artificial marker, such as on a hat, sticker or glasses) is reflected back to the detector.
[0502] In the embodiment shown, light from the beam splitter
arrives at the viewer's eye through optical element 310, revolving
mirror 510 and eyepiece 320, all of which were described above in
detail. On the way from the viewer's eye to the IR detector, the
light optionally travels through the same optical path, but in
reversed order.
[0503] The direction at which the revolving mirror faces at the
time reflected light is detected at detector 820 corresponds to the
direction of the eye. The vertical elevation of the viewer's eye is
optionally estimated based on the point at which the reflected
light hits detector 820. Optionally, the elevation of the image is
adjusted by optical means and/or by moving the image itself (e.g.,
shifting its encoding on the SLM). Optionally, different viewing
directions have different elevations (e.g., for different viewer
heights and/or distances).
[0504] Optionally, detecting two eyes within some predetermined distance, say, about 6.4 cm ± 1.5 cm, is interpreted as detection of two eyes of the same viewer.
[0505] Optionally, the distance between the viewer's eyes is
measured before tracking begins, and the time difference between
the signals received from the two eyes of the viewer is used for
estimating the distance between the viewer and the system.
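A sketch of this time-difference estimate under two stated assumptions (neither is prescribed by the text): the projected beam sweeps at twice the mirror's angular rate, since reflection doubles the deflection, and the eye separation subtends a small angle:

import math

def distance_from_eye_timing(eye_sep_m, dt_s, f_rev_hz):
    """Viewer distance from the time gap between the two eye signals."""
    beam_rate = 2.0 * 2.0 * math.pi * f_rev_hz  # rad/s swept by the beam
    return eye_sep_m / (beam_rate * dt_s)       # small-angle approximation

# Eyes 6.4 cm apart detected 85 us apart with the mirror at 30 Hz
# gives roughly 2 m between the viewer and the system:
print(distance_from_eye_timing(0.064, 85e-6, 30.0))  # -> ~2.0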
[0506] In an exemplary embodiment of the invention, detection of the eyes is adjusted to be biased towards detection (e.g., with a 10%, 30%, or 100% or intermediate or greater percentage of false detections as compared to correct detections). In an exemplary embodiment of the invention, sending an image where there is no eye has only a computational cost, possibly pre-compensated for by system component selection, while not sending an image where there is an eye can prevent the display from operating correctly.
Exemplary Adjustment of Holograms
[0507] When a viewer is watching an object while walking around the
object, the viewer sees different faces of the object from each
point.
[0508] A hologram displayed according to some embodiments of the
present invention provides a similar effect without any adjustment
of the optical system or the hologram generating unit. In other
words, display of a single holographic frame provides a full
walking around experience. However, this is limited to walking around while keeping the eye within the borders of the hologram viewability space.
[0509] In some embodiments, when the viewer moves around such that the eye is outside the image viewability space, the optical system is adjusted to follow the eyes, but the hologram displayed to the viewer does not change. In these embodiments, a viewer walking around a hologram of the globe sees Europe, for instance, all the time, no matter where he is standing. Optionally, in such an embodiment, instead of following the viewer's eyes, the system simply makes the same hologram available all around.
[0510] In some exemplary embodiments, when a viewer moves, the scene displayed to him is updated, such that when moving around a hologram of the globe, the viewer can see different continents. Optionally, this is achieved by updating the displayed hologram whenever the viewer's eye is tracked to be moving outside of the viewability space of the image hologram. The system estimates which portion of the object the viewer would have seen if the real object were on the stage, and presents on the stage a hologram of this portion.
[0511] Projecting each time a hologram of only that part of the scene viewable by the viewer allows saving a great deal of computation power, without compromising the quality of the image seen by the viewer.
[0512] In some embodiments of the invention, the scene itself changes over time. In one particular example, the scene is a video stream. In such a case, the hologram is to be adjusted even if the viewer's eye doesn't move. Optionally, such adjustments are carried out about 24 times per second, as it is well known in the art that at such a frequency, the human mind perceives fluent movement.
[0513] Optionally, when showing a hologram of a video stream, each
time the hologram is adjusted (i.e. a hologram of another "frame"
of the video stream is displayed), the system is updated with
regard to the eye position of the viewer, and projects only those
portions of the frame that would be viewable to the viewer from the
viewer's current viewing point. For example, this allows a viewer
watching a movie of a basketball game to change seats and see the
game from different angles.
Exemplary System and Control
[0514] FIG. 10 is a simplified block diagram of an exemplary system
900, showing some main units of a projection system and the
interactions between them.
[0515] System 900 includes: a projection unit 910 driven by driver
915 for generating and projecting holograms. Projection unit 910
comprises an SLM 920.
[0516] System 900 also includes a computation unit (930), for computing the desired optical behavior of each pixel of SLM 920; and an SLM driver (940), for driving the optical behavior of each pixel in SLM 920 according to the desired optical behavior, as computed by computation unit 930.
[0517] Computing unit 930 receives as input, for example, a data set, an image or a video stream, optionally a 3-dimensional or volumetric image, optionally a stream of 3-dimensional or volumetric images. The input is optionally in digital form. Alternatively, input is in analog form. In some embodiments, only a surface of the 3D image is provided. Optionally, the data is pre-computed for streaming to the SLM. Alternatively, unit 930 generates SLM data from the input, for example as described below. Optionally, unit 930 generates data and/or renders the input only for viewing directions where a user is, or is expected to be (e.g., assuming a certain movement speed of a human head). Optionally, there is a lag (e.g., a few seconds or fractions of a second) between the detection of a new user by the eye tracker and the presentation of an image (or complete image) thereto, for example, due to delay in obtaining data and/or delay in rendering such data.
[0518] Optionally, the volumetric 3D image stream data, or any image data, is stored in the system's memory ahead of time, and during projection the stored data are accessed and used for controlling the system. Optionally, the data is received online, and stored only temporarily, as required for controlling the system during projection.
[0519] Computing unit 930 computes, based on the input, what the optical behavior of each pixel of SLM 920 should be, in order for a hologram produced with the SLM to reproduce the wave-front corresponding to the wave-front emanating from the scene. SLM driver 940 drives the pixels of the SLM into the computed optical behavior.
[0520] Optionally, unit 930 modifies its received data and/or data
to be displayed to take into account the optical properties of the
system, viewers and/or a calibration process. Optionally, a
calibration process is visual and/or includes detecting a user's
pointing device. In one example, a grid is shown and the user
"touches" each point on the grid. In another example, the user is
shown a series of images and provides feedback, for example, on
color quality, shape, spatial distortion and/or other image
properties. Optionally, the input is provided via the image, or
using an input device (e.g., a mouse or button, not shown).
[0521] The shown system also includes a tracking unit (950), which optionally provides computation unit 930 with information on the position of the viewer's eyes, thus allowing computation unit 930 to estimate which portions of the scene the viewer would see from his viewing point, and to compute the optical behavior of the pixels to produce only the wave-front emanating from these portions.
[0522] Additionally or alternatively, tracking unit 950 provides
the information on the positions of the viewer's eyes to driver 915
of the projection system 910, allowing it to adjust the position of
the viewability space of the generated hologram to that of the
viewer's eyes.
[0523] In an exemplary operation sequence, a video stream is
inputted into computing unit 930, which computes the desired
optical behavior of the various pixels of SLM 920. Computing unit
930 communicates the computed values to SLM-driver 940, which
drives SLM 920 accordingly. Generation and projection unit 910
generates an object hologram using SLM 920, and projects the object
hologram.
[0524] At the same time, the tracking unit tracks the viewer's position and orientation, and sends this information to the computing unit. The computing unit uses this information to compute a simplified hologram, which reproduces light emanating from the scene only in the direction of the viewer. The tracking unit also communicates the position and orientation of the viewer's eye to driver 915 of generation and projection unit 910, and driver 915 drives the projection unit to project the hologram so as to be viewable by the viewer.
[0525] FIG. 11 is a flowchart of actions taken in a method of
producing and projecting holograms to be seen by a viewer looking
at a stage according to an embodiment of the invention.
[0526] At 105, the position of a viewer's eye is estimated.
Optionally, the positions of both eyes are estimated. Optionally,
the direction at which the viewer is looking is also estimated.
Optionally, the system is configured to project the holograms to a
predefined stage, and the positions of the viewer's eye and the
stage determine the direction at which the viewer is looking.
[0527] At 110, the scene-portions viewable by the viewer if the
scene was at the stage in some given orientation are estimated
based on the viewer's eye location.
[0528] At 115, a hologram of only those portions of the scene estimated at 110 to be seen by the viewer is generated. Optionally, these holograms are generated by computing the optical behavior of the various SLM pixels required to generate the hologram, and driving an SLM accordingly.
[0529] At 120, the holograms generated at 115 are projected such
that the viewer will see the hologram from his location, when
looking at the stage.
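Steps 105-120 amount to a simple per-frame control loop. The following minimal Python sketch renders that loop; every object and method name in it (tracker, computing_unit, slm_driver, projector and their methods) is a hypothetical placeholder for the units described above, not an interface taken from the text:

def projection_loop(scene, stage, tracker, computing_unit, slm_driver, projector):
    while True:
        eye_pos = tracker.estimate_eye_position()                         # 105
        visible = computing_unit.visible_portions(scene, stage, eye_pos)  # 110
        pixels = computing_unit.hologram_pixels(visible)                  # 115
        slm_driver.drive(pixels)
        projector.project_toward(eye_pos)                                 # 120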
[0530] FIG. 12 is a flowchart of actions taken in a method 150 of
producing a hologram to be seen by a viewer looking at the hologram
from a given angle.
[0531] At 152 the viewer's position is captured. Capturing the
viewing position optionally comprises receiving input from a
tracking unit. Optionally, capturing also includes processing said
input. When capturing is accomplished, the system has determined
the position of the viewer, and the angle from which the viewer
would have seen the scene, if the scene was in fact on the
stage.
[0532] At 154, volumetric data are accessed. Optionally, the volumetric data are saved in advance in a memory of the system. Optionally, the volumetric data is received online, i.e., while the imaging process takes place, for instance, from a 3D imaging device, for instance, a CT imager.
[0533] Out of the volumetric data, the system filters the data required for creating a hologram of that portion of the view that was determined at 152 to be seen by the viewer.
[0534] At 156, the computing system transforms the volumetric data into holographic data, comprising, for instance, setting a desired refraction index for each of the active pixels of the SLM in order to generate the hologram. Optionally, the SLM has non-active pixels that are not going to take part in generating the hologram. These pixels are optionally not illuminated at 160 (below). Optionally, preventing illumination of these pixels is performed by an additional transmissive or reflective LCD or Digital Micro-mirror Device (DMD) placed between the light source and the SLM. The optional additional transmissive or reflective LCD or DMD is not depicted in the drawings.
[0535] At 158, the SLM is controlled such that each pixel in fact has the refraction index set for it.
[0536] At 160 the SLM is illuminated to generate the object
hologram.
Exemplary Interactions with a Hologram
[0537] Some embodiments of the invention allow the viewer to
interact with the hologram. For example, the viewer can move his
hand, or any other body portion, or any object the viewer is
holding, for instance, a pointer, to touch the hologram.
Optionally, sensors detect the position of the viewer's hand, and
control an output device accordingly. In one example, a viewer
touching a bell results in a ringing of the output device.
[0538] In another example, the viewer can interact with the
hologram to manipulate the scene. For instance, a viewer touching a
car-hologram at the engine cover can cause the hologram to change
to a hologram of the inside of the car engine.
[0539] Alternatively or additionally, a viewer that touches some
portion of the hologram can cause the output device to control the
system to produce a hologram where the touched portion is in front
of the viewer. For instance, a viewer watching the globe facing
Switzerland can touch the globe at Spain, and the globe will rotate
to bring Spain to the front of the viewer.
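One way such a rotate-to-front response could be computed (a sketch only; the text does not prescribe an algorithm) is to find the rotation taking the touched point's direction to the direction facing the viewer, for instance via Rodrigues' formula:

import numpy as np

def rotation_to_front(touched, toward_viewer):
    """Rotation matrix taking the touched direction onto the viewer direction."""
    a = np.asarray(touched, float); a /= np.linalg.norm(a)
    b = np.asarray(toward_viewer, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    s, c = np.linalg.norm(v), float(np.dot(a, b))
    if s < 1e-12:                       # aligned or antiparallel
        if c > 0:
            return np.eye(3)
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-12:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)  # 180-degree turn
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)    # Rodrigues' formula

# Touched point on the model's +x side, viewer along +y:
R = rotation_to_front([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(R @ np.array([1.0, 0.0, 0.0]))   # -> approximately [0, 1, 0]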
[0540] Alternatively or additionally, the viewer can interact with the hologram through a control panel, moving the hologram in space, rotating it around some predetermined axis, defining ad hoc an axis of rotation for the hologram and rotating the hologram around the ad hoc defined axis, and performing any other manipulation on the orientation and/or the position of the hologram in space.
[0541] Optionally, two or more viewers can interact with holograms of the same scene simultaneously. For instance, two viewers can touch the same portion of the view in front of them, and, although each is watching a different hologram (or even each eye of each of them is watching a different hologram), when the two touch the same portion of the scene, for instance, both touch the bell, they also touch each other.
Colored Images
[0542] Various embodiments were described above in the context of
monochrome images. Nevertheless, multicolor images may be provided
by systems and methods according to embodiments of the invention
just as well.
[0543] In some embodiments, a colored hologram is projected by a single SLM, which is sequentially lighted by red, green and blue light. Optionally, a colored scene is processed into three monochromatic scenes (one red, one green, and one blue), and the computational unit provides to the SLM data for sequentially generating monochromatic holograms that reproduce each monochromatic scene in its turn. Optionally, the light sources are synchronized with the computational unit such that each monochromatic hologram is formed with the corresponding light (a hologram reproducing a red scene is generated with red light, etc.).
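As a sketch of this color-sequential synchronization (all object names below are hypothetical placeholders, and the hologram computation is passed in as a parameter rather than taken from the text):

def show_color_frame(scene, slm, light_sources, compute_hologram):
    """Show one multicolor frame as three sequential monochromatic holograms."""
    for color in ("red", "green", "blue"):
        pattern = compute_hologram(scene, color)  # one monochromatic scene
        slm.load(pattern)                  # hold this channel's hologram
        light_sources[color].flash()       # fire only the matching source

Repeating show_color_frame fast enough (for example, each color at about 24 Hz or more, echoing the video-rate discussion above) lets the eye fuse the three monochromatic holograms into one multicolor image.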
[0544] Optionally, simultaneously with projecting the holograms
generated by the SLMs to the stage, images of the SLMs are
projected to overlap with the viewer's eye.
[0545] In some embodiments, red, green and blue light are projected
to three different SLMs, each reproducing one monochromatic scene.
In some embodiments, each of the three SLMs is contained in a
different unit cell, and in each revolution of the mirror images of
all the unit cells are sequentially projected to the stage, such
that the viewer sees a multicolor hologram.
[0546] In some embodiments, all three SLMs share a single optical
system, such that the synchronization between them is optical. For
instance, the three SLMs are three portions of a single SLM screen.
Optionally, SLMs that share optical systems are all contained in a
single unit cell.
[0547] A light source used for generating colored hologram images
includes, for example, three different lasers. Another example is a
light source comprising three different LEDs.
Exemplary Uses of Multiple SLMs
[0548] In some embodiments of the invention, creating the object hologram does not require activating a full SLM unit. In such cases, it is possible to use one SLM as a plurality of SLM units. For instance, one SLM of 1000 × 1000 pixels can function as a multiplicity of 4 SLMs, each of 500 × 500 pixels, and obtain all the advantages of using a multiplicity of SLMs discussed below.
[0549] Optionally, several SLMs are imaged at the same eye of the
same viewer. Such an arrangement may have several uses.
[0550] For example, as noted above, in some embodiments each SLM
provides a monochrome image (red, green or blue), and the viewer
who sees the three monochrome images, perceives them as a single,
multicolor, image. It is noted that if a non-hologram image is
shown, a color SLM may be used. Alternatively, different source
colors may be aimed at different parts of a hologram-producing SLM,
optionally with SLM elements for different colors being interlaced
or otherwise intermingled.
[0551] In another exemplary embodiment, a multiplicity of SLMs are
used for providing an image that is larger than an image provided
by a single SLM. The system is controlled to form two half-objects,
each by a single SLM, and image the half-objects one near the other
on the stage. Both the SLMs are imaged to overlap with the same eye
of the viewer. The viewer sees, therefore, with the same eye, an
image composed of two images, one of each half-object. The composed
image is optionally larger than any of the images composing it.
[0552] In another exemplary embodiment, a multiplicity of SLMs is
used for broadening the angle from which an image is seen. In one
such embodiment, two SLMs are imaged to the vicinity of the viewer,
one near the other, optionally, with some overlap between the two
SLM images. Each of the two SLMs optionally creates an object
hologram of the same scene, and the two holograms are imaged to the
stage. The viewer can see the image, no matter with which image SLM
his eye overlaps. This arrangement allows loosening the requirements on the system's tracking mechanism, because only relatively large movements of the viewer require adjustments of the system.
In another exemplary embodiment, a multiplicity of SLMs is similarly used for broadening the angle from which an image is seen, but with the two SLMs imaged to the vicinity of the viewer, one near the other, with some gap between the two SLM images that is smaller than the viewer's pupil. Each of the two SLMs optionally creates an object hologram of the same scene, and the two holograms are imaged to the stage. The viewer can see the image no matter with which image SLM his eye overlaps. This arrangement loosens the requirements on the system's tracking mechanism even more than the previous option, because only relatively large movements of the viewer require adjustments of the system.
Exemplary Applications
Exemplary Private Content Applications
[0553] In some embodiments of the invention, the content is imaged to the eye of only one viewer, and other people in the vicinity of the viewer cannot see the content.
[0554] Optionally the viewer can watch the content from any
desirable angle, and in some embodiments even move around the
display, while others near him cannot see the content.
[0555] In some embodiments, when the tracking system loses track of
the viewer, the display is disconnected. Such embodiments may be
particularly useful for handling confidential material. For
instance, if a viewer reviews a confidential document on a
state-of-the-art laptop display, a neighbor sitting next to the
viewer can see the document. If the laptop is equipped with a
display unit according to an embodiment of the present invention,
the confidential document is displayed only to the viewer.
Nevertheless, the viewer is not necessarily confined to looking at
the display from one angle. The viewer can walk away from the
display, and a neighbor taking the viewer's place in front of the
display will still not be able to see anything, because the
tracking system lost track of the viewer and the display was
stopped.
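The behavior just described reduces to a small control loop: project content only while the authorized viewer remains tracked, and blank the output as soon as tracking is lost. A minimal sketch follows (Python; the tracker and display interfaces are hypothetical):

    def private_display_loop(tracker, display, content):
        """Show content only while the authorized viewer is tracked.

        tracker and display are hypothetical interfaces: locate_viewer()
        returns an eye position or None when tracking is lost.
        """
        while True:                            # illustrative main loop
            eye = tracker.locate_viewer()
            if eye is None:
                display.blank()                # a neighbor sees nothing
            else:
                display.project_to(eye, content)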
Exemplary Medical Applications
[0556] In many medical applications physicians are provided with
information on the three-dimensional structure of tissue. In some
embodiments of the invention, this information is displayed to one
or more physicians as a hologram that the physicians can interact
with.
[0557] For instance, in preparation for a minimally invasive
cardiac procedure, a team of physicians acquires a dynamic 3D image
of the heart using existing ultrasound techniques. Then, the team
members can view the acquired images from different perspectives,
for instance, each viewer from his own perspective, while having
the ability to point and mark specific areas within the image as
part of their discussion and preparation for the clinical
procedure.
[0558] In an exemplary embodiment the image hologram is of the same
size as the imaged scene (in the above example, the heart). Thus,
in case an external component, for instance a stent, is to be
inserted into the patient, the component may be fitted to the
holographic image before the operation starts, so as to minimize
the need to fit it to the patient during the operation. This
feature is optionally enhanced by having a virtual "floating in the
air" image.
Exemplary Computer Aided Design
[0559] In an exemplary embodiment of the invention, a
computer-designed model is displayed to a team of designers,
allowing one, several, or each of the team members to walk around
the model, relate to it and/or manipulate it. For instance, in a
model of a mechanical part, such as the envelope of a new cellular
phone where the display is a touch screen, one designer can suggest
modifying the lighting of the display, and another can comment on
the results of the modification, suggest adding real buttons, and
present that change instantly. Similarly, a single designer can
look at the same detail from different angles. While reviewing a
design, a member of the team can point to a specific part within
the image (with a dedicated pen or with his finger). Optionally,
all team members see the part pointed at, and the team members can
discuss the view of that part as seen from different perspectives.
Optionally, the display comprises a system-human-interface,
allowing a team member to manipulate a specific component within
the full design, for example changing the color of a marked
surface. As the team views the image precisely from all angles, the
described process eliminates some rapid-prototyping phases within
the development process, lowering its total time and cost.
Digital Advertising
[0560] Some embodiments of the invention may be utilized to gain
attention of people who are exposed to the display without looking
at it attentively. For instance, some embodiments of the invention
may be used as advertising displays at public locations and gain
considerably more attention than more traditional flat displays and
posters gain.
[0561] For example, a display according to embodiments of the
invention can be placed in an exhibition, and viewers going around
it will see a holographic image of an advertised product, for
instance a cellular phone, or of a whole advertising promo.
The holographic display attracts more attention than a usual poster
or flat screen.
[0562] Optionally, in addition, one of the viewers manipulates the
image while the others are viewing. Manipulation optionally
includes moving, rotating, or scaling the image, or interacting
with the image in any other way. This embodiment optionally
enhances viewers' attraction to the presented image, product or
service.
[0563] Optionally, different views of the product are displayed to
be seen by viewers standing at different locations around the
display. People exposed to this kind of advertisement may be
expected to be encouraged to go around the display, paying the
advertised product more and more attention.
[0564] Optionally, an advertising display according to an
embodiment of the invention allows a viewer to manipulate the
displayed scene as explained above under the heading "exemplary
image manipulation". Allowing viewers to manipulate the viewed
image may be expected to increase the viewer's attention and
involvement with the advertised product.
[0565] Optionally, an advertising display according to an
embodiment of the invention displays the same hologram 360.degree.
around it, and comprises an input device allowing a user to
indicate that the user is interested in a closer look at the
displayed scene. Responsive to receiving such an indication, the
system starts tracking the viewer's face and allows this particular
viewer to see the advertised product from different angles as the
viewer goes around the display.
User Interfaces
[0566] 3D interaction occurs when users are able to move and
perform interaction in 3D space. Human-machine interaction requires
that both human and machine receive and process information, and
then present the output of that processing to each other. Users
perform an action or give a command to the machine in order to
achieve a goal. The machine takes the information provided by the
user, performs some processing, and then presents the results back
to the user.
[0567] Ideally, users could interact with a virtual image the same
way they interact with reality. In contrast to standard input
devices like a keyboard or a 2D mouse, ideal operation in full
three dimensions should also allow six degrees of freedom, which is
natural for the users. This type of 3D interaction device should
recognize and interpret human actions and gestures and transform
them into corresponding manipulations of the virtual scene or
information about the image. Some embodiments of the present
invention come much closer to this ideal than the above-mentioned
standard input devices.
[0568] While some devices enable three-dimensional interaction with
up to six degrees of freedom, none of them enables doing so on an
actual projected image; rather, interaction occurs in another
location in space, termed herein the input space, while the image
is projected on a 2D screen or some form of 3D platform, termed
herein the display space.
[0569] In some embodiments of the invention, using the display of
the invention described herein, an image provides a viewer with the
depth cues of a real object, making viewing the user interface feel
natural. Some embodiments of the invention enable users to actually
"touch" a projected 3D image, while providing suitable visual depth
cues, optionally having a wide viewing angle, optionally allowing
viewing precise locations in space, and optionally from different
viewing angles. Some embodiments of the invention project a
"floating-in-the-air" image so that the image appears at a distance
of approximately 80 cm from the viewer, within reach of the
viewer's arm.
[0570] In some embodiments, the apparent distance from the viewer
to the image is such that the user may reach the image, that is,
approximately a length of a non-extended arm, approximately a
length of an arm, and/or approximately a length of an arm holding a
stick or a pointer.
[0571] In some embodiments, the size of the display space
corresponds to a range of motion of a user's arm, that is,
approximately 1-2 meters across. In some embodiments, the size of
the display space corresponds to a range of motion of a finger,
that is, approximately 10-20 centimeters across.
[0572] In some embodiments, the resolution of input location
corresponds to motion of a user's arm, that is, approximately 1
centimeter. In some embodiments, the resolution of input location
corresponds to motion of a user's finger, that is, approximately 1
millimeter. Coarser and finer resolutions are also possible; some
embodiments therefore potentially operate at such resolutions.
[0573] In some embodiments, the user interface displays a
floating-in-the-air object to one or more users, in one or more
locations. The floating-in-the-air object optionally does not
appear to change location with movement of a viewer. The
floating-in-the-air object also optionally appears at the same
location from different viewing directions.
[0574] In some embodiments, two user interfaces use two displays to
display the same floating-in-the-air image in two different
locations, enabling one or more users at a first location to
perform a partial or a full walk-around, at one or more walk-around
rates, while at the same time one or more users at a second
location perform a partial or a full walk-around at one or more
other walk-around rates.
[0575] In some embodiments, the user interface displays a different
floating-in-the-air image to different viewers. Displaying a
different image to different users optionally takes many forms:
displaying entirely different images, such as a red balloon and a
green Ferrari; displaying the same object using a different
coordinate system, such as a first image displaying a hammer placed
in the center of the display and a second image displaying a hammer
placed at a side of the display; displaying one part of an object
in a first image and another part of an object in a second image,
all the while the object being optionally located in the same
place, same coordinate system, same size; displaying different
colors for the same object, such as displaying an object to a first
user and displaying the same object, contrast-enhanced, or color
enhanced, or in a different range of hues, to another user; and
displaying the same object at different sizes.
[0576] It is noted that displaying the same object at different
sizes to different users raises a question: when a first user
points a pointer into the display space, "touching" a location on
the object which the first user sees, where should a display of the
pointer appear for the second user? One option is to display the
second image at coordinates such that the touching tip of the
pointer appears to touch the same location in the second image as
in the first image, even while the second object appears to be of a
different size than the first object.
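The coordinate bookkeeping for such a shared "touch" can be written out directly: the pointer tip is converted from the first display's coordinates into normalized object coordinates, and then into the second display's coordinates, so the tip appears to touch the same spot on both differently sized images. A sketch (Python/NumPy; the per-user scale and offset parameters are illustrative assumptions):

    import numpy as np

    def map_touch(tip_xyz, origin_a, scale_a, origin_b, scale_b):
        """Map a pointer tip from user A's display space to user B's.

        Each user sees the same object at display coordinates
        origin + scale * object_coords, possibly at different sizes.
        """
        object_coords = (np.asarray(tip_xyz) - origin_a) / scale_a
        return origin_b + scale_b * object_coords

    # User B sees the object twice as large and shifted; the touched
    # location maps to the matching point on B's image.
    tip_b = map_touch([0.10, 0.05, 0.30],
                      origin_a=np.zeros(3), scale_a=1.0,
                      origin_b=np.array([0.5, 0.0, 0.0]), scale_b=2.0)
    print(tip_b)    # [0.7 0.1 0.6]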
[0577] In some embodiments, the user interface displays a different
floating-in-the-air image to different eyes of the same viewer.
[0578] In some embodiments, the user interface enables a partial or
a full walk-around of the floating-in-the-air image, as described
earlier, displaying different sides of the floating-in-the-air
image as if the image were an actual object floating in the air.
[0579] In some embodiments, the user interface allows poking a
finger or some other object onto and into the floating-in-the-air
image.
[0580] In some applications of the user interface, the
floating-in-the-air display utilizes embodiments of the volumetric
display described herein. In other applications of the user
interface other volumetric displays are optionally used, provided
their properties support the specific application.
[0581] Reference is now made to FIG. 13A, which is a simplified
illustration of a user 1320 appearing to touch a finger 1330 to an
object 1315 being displayed by a user interface 1300 constructed
and operational according to an embodiment of the invention.
[0582] The user interface 1300 includes a volumetric display 1305,
which displays a first image in a floating-in-the-air three
dimensional display space 1310. The image, by way of a non-limiting
example, displays an object 1315, by way of a non-limiting example,
a heart.
[0583] It is noted that in FIGS. 13A, 13B, 13C, 14, 15, and 16,
where reference is made to a three dimensional display
and/or a floating-in-the-air display, the reference is meant to
include, by way of a non-limiting example, holographic image
display as described above; paraxial image display as described
above; and other image displays suitable for volumetric
display.
[0584] It is noted that a two-dimensional display such as, by way
of a non-limiting example, a television display and/or a computer
monitor display, is suitable for transforming into a volumetric
display by generating the two dimensional image as floating-in-air,
similarly to a three dimensional image.
[0585] The user 1320 views the object 1315, and extends a finger
1330 to apparently "touch" the object 1315. The volumetric display
1305 allows inserting a real object into the display space 1310,
since the volumetric display 1305 is a floating-in-the-air
volumetric display, displaying a floating-in-the-air image.
[0586] The user interface 1300 also optionally includes a computer
1335, which provides control and data 1340 to the volumetric
display 1305.
[0587] A location of the finger 1330 is determined by a location
determination unit (not shown). The location determination unit
optionally determines the location of the finger by identifying
real objects placed into the display space 1310.
[0588] The location determination unit optionally comprises a unit
for locating objects (e.g. the finger 1330) in three dimensions,
such as, by way of some non-limiting examples, cameras mounted to
pick up images along different directions, and triangulate a
location in three dimensions, and/or distance measuring units
measuring distance to objects in the display space 1310.
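The camera-based triangulation mentioned above can be sketched in its simplest form: each camera defines a ray toward the object, and the estimated 3D location is the point nearest both rays. A minimal least-squares sketch (Python/NumPy; calibrated ray origins and directions are assumed as inputs, and the rays are assumed not to be parallel):

    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Locate the point nearest two camera rays (origin p, direction d).

        Solves for ray parameters t1, t2 minimizing the distance between
        (p1 + t1*d1) and (p2 + t2*d2); assumes the rays are not parallel.
        """
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        a = np.array([[d1 @ d1, -d1 @ d2],
                      [d1 @ d2, -d2 @ d2]])
        b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
        t1, t2 = np.linalg.solve(a, b)
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

    # Two cameras one meter either side of the stage, both seeing a tip.
    tip = triangulate(np.array([-1.0, 0, 0]), np.array([1.0, 0, 1.0]),
                      np.array([ 1.0, 0, 0]), np.array([-1.0, 0, 1.0]))
    print(tip)    # [0. 0. 1.]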
[0589] In some embodiments of the user interface 1300, a variant of
a unit cell, such as depicted and described with reference to FIG.
7, operates as the location determination unit. The variant of the
unit cell is located such that a reverse optical path from the
display space 1310 leads to the unit cell. The unit cell optionally
measures location of an object (e.g. finger 1330) in the display
space 1310 by a combination of a rotational angle to the object,
relative to a unit cell base, and a distance to the object. The
rotational angle optionally takes into account a revolving of the
optical system.
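The angle-plus-distance measurement described for the unit-cell variant maps to display-space coordinates by an ordinary cylindrical-to-Cartesian conversion. A sketch (Python; taking the unit cell base as the origin, with illustrative axis conventions):

    import math

    def cell_to_cartesian(angle_rad, distance, height=0.0):
        """Convert a unit-cell reading (rotational angle, range) to XYZ.

        angle_rad: rotational angle to the object, relative to the
                   unit cell base (including any revolving of the optics).
        distance:  measured range from the cell to the object.
        height:    elevation above the cell base plane, if measured.
        """
        x = distance * math.cos(angle_rad)
        y = distance * math.sin(angle_rad)
        return (x, y, height)

    print(cell_to_cartesian(math.radians(30), 0.40))  # ~(0.346, 0.200, 0.0)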
[0590] In some embodiments of the invention distance to an object
(e.g. finger 1330) is measured by a distance measurement system
such as used in camera auto-focus systems. In some embodiments of
the invention distance to an object (e.g. finger 1330) is measured
by a conoscopic distance measuring system.
[0591] The location of the object is optionally used as an input
1345 to the computer 1335, and the computer 1335 optionally
computes control instructions and data for displaying a second
image, optionally with the location input highlighted on the image
displayed by the volumetric display 1305, e.g. a location
highlighted on the heart.
[0592] Optionally, in order to locate a specific location on an
object inserted into the display space 1310, a specific location is
selected on the object, and the specific location may further be
highlighted. By way of a non-limiting example, a finger tip may be
the location. By way of another non-limiting example, the finger
tip may be highlighted by marking with a dye. The dye may be
visible to a human eye, and/or the dye may be chosen to provide
high contrast to a machine vision system which locates the location
input.
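Locating a dye-marked tip with machine vision can be as simple as thresholding the camera image for the dye's color and taking the centroid of the matching pixels. A minimal sketch (Python/NumPy; the color range is a placeholder for a real dye's signature):

    import numpy as np

    def find_marked_tip(rgb_image, lo=(180, 0, 0), hi=(255, 80, 80)):
        """Return the (row, col) centroid of pixels inside the dye's
        RGB range, or None if the marked tip is not visible."""
        img = np.asarray(rgb_image)
        mask = np.all((img >= lo) & (img <= hi), axis=-1)
        points = np.argwhere(mask)
        if points.size == 0:
            return None
        return points.mean(axis=0)   # image-plane location of the tip

    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    frame[100:104, 200:204] = (220, 30, 30)    # simulated red-dyed tip
    print(find_marked_tip(frame))              # ~[101.5 201.5]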
[0593] Optionally, the interface 1300 tracks a location of the
finger 1330 with a 3D camera, available today, for example, from
3DV Systems.
[0594] Optionally, the user interface 1300 tracks a location of the
finger 1330, or some other location indication tool, and interprets
a dynamic movement of the finger 1330 as command gestures to the
user interface 1300. The command gestures optionally cause
manipulations of the displayed image. Such use of the user
interface provides a user with a feeling of directly (virtually)
shaping a displayed object and/or image and/or scene. The above
feeling is especially enhanced when sensory feedback is
provided.
[0595] Reference is now made to FIG. 13B, which is a simplified
illustration of a user 1370 appearing to touch a pointer 1380 to an
object 1365 being displayed by a user interface 1350 constructed
and operational according to an embodiment of the invention.
[0596] The user interface 1350 includes a volumetric display 1355,
which displays a first image in a floating-in-the-air three
dimensional display space 1360. The image, by way of a non-limiting
example, displays an object 1365, by way of a non-limiting example,
a heart.
[0597] The user 1370 views the object 1365, and extends a pointer
1380 to apparently "touch" the object 1365. The volumetric display
1355 allows inserting a real object such as the pointer 1380 into
the display space 1360.
[0598] The user interface 1350 also optionally includes a computer
1385, which provides control and data 1390 to the volumetric
display 1355. A location of the pointer 1380 is determined by a
location determination unit (not shown) and optionally used as an
input 1395 to the computer 1385. The location determination unit
optionally determines the location of the pointer 1380 by
identifying real objects placed into the display space 1360.
[0599] The pointer 1380 optionally presents a better defined
location input than a finger. Optionally, it is easier to locate a
tip of the pointer 1380 than a tip of a finger of a hand.
[0600] The tip of the pointer 1380 may be highlighted by marking
with a dye or more than one dye.
[0601] In some embodiments the tip of the pointer includes a
substantially compact light source 1382. The light source 1382 may
be visible to a human eye, and/or the light source 1382 may be
chosen to provide high contrast to a machine vision system which
locates the location input.
[0602] In some embodiments of the user interface, the location
input causes the user interface to capture data corresponding to a
voxel located substantially near to the location input.
[0603] In some embodiments of the user interface, the location
input causes the user interface to capture data corresponding to a
sub-image located substantially near to the location input.
[0604] In some embodiments the user interface includes a display of
an "actuator", that is, a display of a device such as a button, a
lever, or some such device typically pressed, pushed, or pulled.
The user interface enables a user to put a hand and/or a pointer
into the display space, and virtually "press the button", "push or
pull the lever", and so on. When the user interface senses a hand
and/or a pointer in the display space next to a location of the
display of the actuator, the user interface optionally interprets
the placing of the hand or pointer as an actuation of the
actuator.
[0605] The user interface optionally provides sensory feedback to
the user, such that the user feels somewhat as though
pressing/pulling/pushing the actuator.
[0606] The user interface optionally changes the first display to
move the image of the actuator in correspondence with the actuation
of the actuator.
[0607] The user interface optionally changes the first display,
showing an actuator that is actuated, a button that is pushed,
and/or other such changes indicating that the user "pushed the
button". It is noted that the display is controlled by the user
interface, and therefore optionally provides feedback upon
actuation. This is by way of contrast with, for example, existing
holographic displays, which may display a hologram of a button, but
cannot change the look of the holographic button since their
hologram is a static display projected from film.
[0608] In some embodiments of the invention, the user interface
displays an image of a robotic arm, and the computer optionally
sends control signals and data so that a real robotic arm moves
according to input provided by a user in the display space of the
volumetric display of the user interface.
[0609] In some embodiments of the invention, the user interface
optionally picks up more than one location input. The location
inputs are optionally provided by several fingers in the display
space of the volumetric display, and/or by several pointers, and/or
by pointing to several locations consecutively. The location inputs
are optionally provided by several points on one finger and/or
pointer. The several points are optionally marked on the finger
and/or on the pointer with a contrasting dye and/or sources of
light.
[0610] In some embodiments of the invention, the user interface
optionally picks up more than one location input. The location
inputs are optionally provided by computing and/or estimating
locations based on a shape of the object inserted into the display
space. By way of a non-limiting example, a line is optionally
computed based on a long axis of a substantially elongated object
inserted into the display space.
[0611] The user interface optionally tracks a movement of one or
more location inputs over time, and optionally displays one or more
paths tracking the movement in the display space, optionally
superposed upon an image displayed in the display space.
[0612] In an example application, the user interface optionally
starts with an empty display, tracks the movement of one or more
location inputs over time, and displays the one or more paths
tracking the movement in the display space, optionally in real-time
and/or near real-time.
[0613] In an example application, the user interface optionally
accepts two location inputs, and defines a line in the three
dimensions of the volumetric display passing through the two
location inputs. The line optionally serves for further
manipulating the image displayed by the user interface. Image
manipulations using a defined line include, by way of a
non-limiting example: rotations around the line; a measurement of
the line length; and a division of a displayed object into portions
on either side of the line.
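The line-based manipulations listed above rest on a few vector operations: the distance between the two location inputs gives the line length, and Rodrigues' rotation formula rotates image points about the line's axis. A sketch (Python/NumPy, using those standard formulas):

    import numpy as np

    def line_length(p1, p2):
        """Distance between the two location inputs."""
        return float(np.linalg.norm(np.asarray(p2) - p1))

    def rotate_about_line(points, p1, p2, angle):
        """Rotate image points by `angle` radians around the line p1-p2
        (Rodrigues' formula, about the unit axis k through p1)."""
        k = (np.asarray(p2) - p1) / np.linalg.norm(np.asarray(p2) - p1)
        v = np.asarray(points) - p1
        rotated = (v * np.cos(angle)
                   + np.cross(k, v) * np.sin(angle)
                   + np.outer(v @ k, k) * (1 - np.cos(angle)))
        return rotated + p1

    p1, p2 = np.zeros(3), np.array([0.0, 0.0, 1.0])
    print(line_length(p1, p2))                               # 1.0
    print(rotate_about_line([[1.0, 0.0, 0.0]], p1, p2, np.pi / 2))
    # ~[[0. 1. 0.]] -- a quarter turn around the marked axis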
[0614] In an example application, the user interface optionally
accepts three location inputs, not on the same line, and defines a
plane in the three dimensions of the volumetric display passing
through the three location inputs. The plane optionally serves for
further manipulating the image displayed by the user interface.
Image manipulations using a defined plane include, by way of a
non-limiting example: a measurement of an area of intersection of
the plane and a displayed object; and a division of a displayed
object into portions on either side of the plane.
[0615] In an example application, the user interface optionally
accepts four or more location inputs, not on the same plane, and
defines a volume in the three dimensions of the volumetric display
based on the four or more location inputs. The volume is optionally
defined as a volume contained within the four or more location
inputs, and/or a volume contained within some function of the four
or more location inputs, such as a surface computed based on the four
or more location inputs.
[0616] The volume optionally serves for further manipulating the
image displayed by the user interface. Image manipulations using a
defined volume include, by way of a non-limiting example: a
measurement of the volume; and a division of a displayed object
into portions inside of and outside of the volume.
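For four or more location inputs, one concrete choice of "a volume contained within the location inputs" is the convex hull of the points; its volume is then a direct measurement, and an object's voxels can be divided by hull membership. A sketch (Python with SciPy; the convex hull is one reasonable surface construction among several):

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    def hull_volume(location_inputs):
        """Volume enclosed by the convex hull of 4+ non-coplanar inputs."""
        return ConvexHull(np.asarray(location_inputs)).volume

    def split_by_hull(voxels, location_inputs):
        """Divide voxel coordinates into inside/outside the hull."""
        voxels = np.asarray(voxels, dtype=float)
        inside = Delaunay(np.asarray(location_inputs)).find_simplex(voxels) >= 0
        return voxels[inside], voxels[~inside]

    corners = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # a tetrahedron
    print(hull_volume(corners))                             # ~0.1667
    inside, outside = split_by_hull([(0.1, 0.1, 0.1), (2, 2, 2)], corners)
    print(len(inside), len(outside))                        # 1 1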
[0617] In some example applications the user interface optionally
picks up one or more location inputs. The location inputs are
optionally used as input to initiate image processing functions
such as, by way of some non-limiting examples: zooming in; zooming
out; cropping an image; rotating the image; and slicing the
image.
[0618] Reference is now made to FIG. 13C, which is a simplified
illustration of a user 1370 inserting a frame 1383 into a display
space 1360 of a user interface 1350 constructed and operational
according to an embodiment of the invention.
[0619] The user interface 1350 includes a volumetric display 1355,
which displays a first image in a floating-in-the-air three
dimensional display space 1360. The image, by way of a non-limiting
example, displays an object 1365, by way of a non-limiting example,
a heart.
[0620] The user 1370 views the object 1365, and extends a frame
1383 to apparently "surround" the object 1365. The volumetric
display 1355 allows inserting a real object such as the frame 1383
into the display space 1360.
[0621] The user interface 1350 also optionally includes a computer
1385, which provides control and data 1390 to the volumetric
display 1355. A location of the frame 1383 is determined by a
location determination unit (not shown). The location of the frame
1383 is
optionally used as an input 1395 to the computer 1385. The location
determination unit optionally determines the location of the frame
1383 by identifying real objects placed into the display space
1360.
[0622] The frame 1383 optionally defines a location input which
defines a plane, and optionally a limited area within the plane.
Optionally, the plane defined by the interface is displayed to the
viewer by means of the volumetric display of the invention.
Optionally the defined plane is presented on a 2D display.
Optionally the defined plane is displayed in real-time.
[0623] In some embodiments of the invention the frame 1383 is
optionally a three-dimensional frame, for example a frame in a
shape of a wire-frame cube. The shape of the frame is not limited
to the examples of a rectangle, as depicted in FIG. 13C, or a cube,
but includes a variety of wire-frame-like shapes.
[0624] The frame 1383 optionally presents a location input which
defines a volume limited within the frame 1383.
[0625] Reference is now made to FIG. 14, which is a simplified
illustration of two users 1470 1472 interacting with a same object
1465 being displayed by a user interface constructed and
operational according to an embodiment of the invention.
[0626] The user interface, of which a display space 1460 is
depicted in FIG. 14, displays a floating-in-the-air image to a
first user 1470 and a second user 1472. The floating-in-the-air
image is of the object 1465. The object 1465 appears at the same
place and at substantially the same time to both of the users 1470
1472, each of the users 1470 1472 viewing the object 1465 from their
respective location.
[0627] The user interface optionally implements the same-location
same-time display using an embodiment of the volumetric display
described herein. If the first user 1470 places a real object (not
shown) in the display space 1460 of the volumetric display, such
as, by way of a non-limiting example, a hand, a pointer, or a
frame, the second user 1472 sees the real object at the same
location as the first user 1470. If, for example, the first user
1470 uses a pointer to point at a location on the displayed object
1465, the second user 1472 sees the pointer pointing to the same
location.
[0628] The term "substantially at the same time" is now explained,
with reference to the above statement that the two users see the
object 1465 "at the same place and at substantially the same time".
The image of the object 1465 is optionally displayed to each of the
users 1470 1472 by being flashed at them for short periods of time,
the flashes repeating at a rate of several times a second. Both
users 1470 1472 get to see the object 1465 several times a second,
during the same second, thus "at substantially the same time".
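The time multiplexing just described can be pictured as round-robin scheduling of short flashes to each tracked viewer. A toy sketch (Python; all timing values are illustrative):

    def flash_schedule(viewers, flashes_per_second=30):
        """Yield (time, viewer) pairs over one second: each viewer gets
        repeated short flashes, so all viewers see the object 'at
        substantially the same time'."""
        period = 1.0 / (flashes_per_second * len(viewers))
        t, i = 0.0, 0
        while t < 1.0:
            yield round(t, 4), viewers[i % len(viewers)]
            t += period
            i += 1

    for t, viewer in list(flash_schedule(["user1470", "user1472"]))[:4]:
        print(t, viewer)   # alternating flashes a few ms apart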
[0629] In some embodiments of the user interface, the
floating-in-the-air display of the user interface displays a
different image to the first user 1470 than to the second user
1472, as described above with reference to embodiments of the
volumetric display of the present invention. (It is noted that FIG.
14 does not depict a first image displayed to the first user 1470
and a different, second, image displayed to the second user 1472).
In the presently described embodiments, if the first user 1470
points at a first object in the first image, the second user 1472
sees the first user 1470 pointing at the display space, yet the
second user 1472 does not see the first image, and the pointing
usually has no meaning for the second user.
[0630] Reference is now made to FIG. 15, which is a simplified
illustration of two users 1571 1572 interacting with the same
object being displayed by a user interface 1500 constructed and
operational according to an embodiment of the invention.
[0631] The user interface 1500 of FIG. 15 includes two volumetric
displays 1556 1557, optionally remote from each other. The two
volumetric displays 1556 1557 are optionally each connected 1591
1592 to a computer 1551 1552, and the two computers 1551 1552 are
optionally functionally connected to each other by a functional
connection 1554.
[0632] In a first example use for the embodiment of FIG. 15, a
first volumetric display 1556 displays a first object 1565 in the
display space 1561 of the first volumetric display 1556. A first
user 1571 uses a pointer 1581 to point at the first 1565 object. A
location indicated by the pointer 1581 is picked up and transferred
1596 to a first computer 1551. The first computer 1551 optionally
sends a signal and/or data to the first volumetric display 1556
indicating where the first user 1571 pointed, optionally providing
feedback to the first user 1571, and optionally sends data to a
second computer 1552 over the functional connection 1554.
[0633] The second computer 1552 optionally sends a signal and/or
data to a second volumetric display 1557, which optionally displays
an image of a second object 1566 in the display space 1562 of the
second volumetric display 1557.
[0634] The image of the second object 1566 optionally appears the
same as the image of the first object 1565. The image of the second
object 1566 optionally also includes an indication of where the
first user 1571 pointed.
[0635] It is noted that the second user 1572 may point to the image
of the second object 1566, and the location indicated by the second
user 1572 may be picked up and transferred 1597 to the second
computer 1552. The second computer 1552 optionally sends a signal
and/or data to the second volumetric display 1557 indicating where
the second user 1572 pointed, optionally providing feedback to the
second user 1572, and optionally sends data to the first computer
1551 over the functional connection 1554. The first computer 1551
optionally causes a display of an image and of the location pointed
to.
[0636] The functional connection 1554 optionally includes a network
connection between the first computer 1551 and the second computer
1552.
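The data exchange over the functional connection 1554 can be pictured as each computer publishing local pointer locations and merging remote ones into its own display, much like the white-boarding software mentioned below. A minimal message-passing sketch (Python; the wire format and the display's highlight call are hypothetical):

    import json

    def send_pointer_update(sock, site_id, tip_xyz):
        """Publish a local pointer location over the functional
        connection (sock: any connected socket-like object)."""
        msg = {"site": site_id, "tip": list(tip_xyz)}
        sock.sendall((json.dumps(msg) + "\n").encode())

    def on_pointer_update(line, volumetric_display):
        """Highlight where the remote user pointed on the local image
        (volumetric_display.highlight is a hypothetical API)."""
        msg = json.loads(line)
        volumetric_display.highlight(msg["tip"], label=msg["site"])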
[0637] In some embodiments the functional connection 1554 includes
white-boarding software.
[0638] In some embodiments of the invention the first volumetric
display 1556 and the second volumetric display 1557 do not
necessarily display the same image. Some non-limiting example
applications where the first volumetric display 1556 and the second
volumetric display 1557 display different images include remote
teaching, where a teacher and a student may view different images;
and gaming, where one user sees a different image than another
user, optionally one user sees more than another user and the game
uses the difference in viewing.
[0639] FIG. 15 depicts the use of two volumetric displays 1556
1557. It is noted that more than two volumetric displays may be
connected to function as described with reference to FIG. 15.
[0640] An embodiment of two or more volumetric displays connected
to each other yet remote from each other is especially useful for
medical and/or teaching purposes. A medical case may be presented
as a three dimensional volumetric image, and users at each of the
volumetric display sites can discuss the medical case, including by
pointing out locations on the image, and "touching" the image. An
implant or a prosthetic may be held up to the medical image, sizes
compared, even when the implant is at one site, and the source of
the medical image, that is, the patient, is at another site.
[0641] FIG. 15 describes using two computers 1551 1552, one for
each volumetric display 1556 1557. It is noted that one computer
may be used to power both volumetric displays 1556 1557, given
sufficient computational power and given sufficient communication
bandwidth through the functional connection 1554.
[0642] Reference is now made to FIG. 16, which is a simplified
illustration of a user 1620 comparing a real object 1680 to an
object 1615 being displayed by a user interface constructed and
operational according to an embodiment of the invention.
[0643] FIG. 16 depicts a display space 1610, which is a portion of
a user interface constructed and operational according to an
embodiment of the invention. The user interface displays a
floating-in-the-air object 1615 in the display space 1610. The user
1620 places a real object 1680 in the display space 1610, and
compares the real object 1680 to the displayed object 1615.
[0644] A non-limiting example of applying the scenario of FIG. 16
to the real world includes displaying a floating-in-the-air object
1615 such as a heart or vessel structure using a three dimensional
medical data set. The heart or vessel structure is displayed at
actual size. A user, for example a doctor or a medical student,
holds a stent up to the heart or vessel structure, and compares the
size of the stent to a size of the heart blood vessel or vessel
structure for which it is intended. Another example is holding an
artificial percutaneous heart valve up to the displayed anatomical
structure of the heart in which it is intended to be implanted. The
real object 1680 being compared to the floating-in-the-air object
is optionally placed at an implant location, and/or next to the
implant location. In this non-limiting example, the user can assess
the position and orientation of the stent or the valve, providing
good localization and an ability to select the specific stent or
valve in terms of, for example, size, specific manufacturer, or
specific technology.
[0645] The scenario of FIG. 16 enables teaching how an implant fits
in the body; research and/or development of new implants; and
pre-implant verification that an implant fits its purpose.
[0646] The user interface depicted in FIG. 15 enables a medical
data set from a first location to be displayed at a second, remote,
location, and optionally a session of remote medicine can be held,
in which the remote location provides advice, guidance, measures
sizes, compares implant and/or tool sizes, and so on.
[0647] Comparison of a first three dimensional displayed object
1615 in FIG. 16 is optionally performed with reference to a second
three dimensional displayed object (not shown in FIG. 16). The
first three dimensional displayed object 1615 is optionally
compared to one of a set of three dimensional representations of
objects, such as tools and/or implants, which are optionally kept
for comparison purposes.
[0648] In some scenarios, a first three dimensional object is
compared to a second three dimensional object by viewing the first
and the second objects in a user interface including a volumetric
display as described herein. The first and the second objects may
be shifted and/or rotated in space using the user interface of the
invention.
[0649] The scenario of FIG. 16 enables comparison of objects, not
necessarily in medical circumstances. By way of a non-limiting
example, a go/no-go gage may be held up to a floating-in-the-air
display of an object, and test conformance of the object to a
standard, provided the object may be displayed. Instead of bringing
a gage to an object, a three-dimensional display of the object is
brought to the gage.
[0650] The scenario in which a real object 1680 is compared to a
display of an object 1615, coupled with measurement of the real
object 1680 within the display space 1610, as described above with
reference to FIG. 13A, enables measuring differences between the
real object 1680 and the displayed object 1615. The differences
include one or more of differences in length, planar area, surface
area, and volume. The differences are optionally measured for
objects, and/or portions of objects.
[0651] Some details of the user interface described above are now
listed, with reference to four issues: a source of data for
display; a display device; an interface device; and supporting
software and communication devices.
Sources of Data for Display
[0652] Typically, for 3D representation, a cloud of XYZ points,
also known as voxels or volumetric data, is optionally input and
displayed. The input optionally comes from sources which generate
such information, optionally from computer-based data such as CAD
models, and/or externally acquired data such as CT or MRI scans in
medical imaging.
[0653] Optionally the data can be two dimensional, for example a 2D
image or image stream coming from a computer, a TV, cables,
satellite, and so on.
[0654] Optionally, the 2D/3D data origin is holographic, that is,
an interference pattern or a stream of interference patterns.
[0655] Optionally the data can come from the user interface of the
invention, as described herein, including specific location input
points in space marked by a user; a path drawn by the user; and/or
other images which the user optionally produces while interacting
with the user interface, and/or off-line, in the display space of
the user interface.
[0656] Optionally, software interprets the user interface and
produces 2D or 3D, including holographic, data, according to its
mission. For example, at a place which a user "touches" with an
interface tool, the user interface optionally displays a
pre-defined indication, such as, by way of a non-limiting example,
a highlight and/or a specific shape such as a star or a cross.
[0657] By way of a non-limiting example, data optionally is input
from 3D medical imaging, also known as real-time 3D, 4D, or volume
rendering data, which provides volume and spatial renderings of the
human anatomy.
[0658] Optionally, input is a stream of 3D data images.
[0659] Optionally the input is provided in "real-time", that is, 24
frames per second, or more.
[0660] The 2D/3D data is optionally extracted from 3D imaging
modalities: CT; MRI; PET; 3D rotational Angiography; 3D ultrasound;
and future/emerging technologies.
[0661] The 2D/3D data optionally includes a combination of the
above modalities, super-imposing and/or fusing the data, also known
as "combined imaging" or "image fusion". Examples include: fusion
of CT and MRI results of a same patient; and MR guided ultrasound
treatment.
[0662] The 2D/3D data optionally includes pre-defined anatomical
models, as an anatomical library of various clinical cases, as well
as image sets of individual patients.
[0663] The 2D/3D data optionally includes 2D/3D data originating
from CAD tools, such as SolidWorks. The 2D/3D data may be still images
and/or image streams. Example standards for some data include:
IGES, 3DF, OBJ, and so on.
[0664] The 2D data optionally includes data from computers in a
VESA standard, for example, and/or analog and digital video
standards from TV related systems, such as composite video, DVI,
and so on.
[0665] In some applications of the invention, no data is passed to
the volumetric display. In such cases the user optionally draws
lines, objects, 2D images, volumetric images, and performs digital
volumetric sculpturing within the display space, optionally via an
interface tool. The drawings and/or "sculptings" are optionally
presented in near real-time by the user interface via the
volumetric display, in the display space.
[0666] Display Devices
[0667] In order to interact with a floating-in-air image, a
volumetric display device is optionally used. Generally, the device
presents an image produced from data, "floating in the air", for
the user to interact with the image.
[0668] The source of data for the image may be 2D, 3D, or
volumetric.
[0669] Optionally, using the broad viewing angle display of the
present invention, data is presented by an image having a symmetry
axis through the center of the image and passing through the middle
of the display device. 3D volumetric holographic data is optionally
displayed using absolute coordinates, viewable by one or more
viewers, and/or by one viewer in different locations around the
display.
[0670] 2D information is optionally presented "floating in air" in
various orientations, presenting a flat surface in any direction in
a 360 degree circle. 2D images optionally originate from different
sources, and different images are optionally displayed to different
viewers.
[0671] Optionally the image is a re-image of a volumetric/2D
non-"in air" image, making the re-image "in air" by using
re-imaging optics.
[0672] 3D parallax barrier images, presenting two different images
to the two eyes of the viewer, are optionally displayed.
[0673] In some embodiments of the invention, for 3D images, the
"floating-in-air" display uses a projected 3D image with absolute
coordinates in a display space. A high quality, wide viewing angle,
3D display device is optionally used. Non-limiting examples of such
display devices include the broad viewing angle display of the
present invention and possibly future wide viewing angle 3D display
devices capable of displaying an image "floating in air".
[0674] Interface Devices
[0675] Tools supporting 2D/3D user input or manipulation within the
display space are considered for use as the interface medium. Such
tools include, by way of a non-limiting example, hand-held tools,
such as pen-like devices; gesture recognition interface units;
object recognition interface units, such as for recognizing a
finger and/or a tip of finger; and tracking systems with an ability
to track a position and/or orientation of the hand-held tool or a
finger.
[0676] Optionally, each separate finger or tool may be separately
detected, optionally separately identified, optionally being marked
differently.
[0677] An exemplary interface device is a stylus with an IR LED or
LEDs. IR camera/cameras are located in proximity to the display
space of the user interface, where interaction is to occur and an
image is presented. The IR cameras optionally receive an IR signal
from the LED, and a location and/or orientation, optionally at up
to six degrees of freedom, is calculated by a location calculation
unit. The location calculation unit may be implemented in hardware
and/or software. Location calculation is optionally performed using
image processing techniques. The location calculation unit
optionally passes the position and/or orientation of the tool or
finger to an optional computer for performing actions according to
a user interface program. Exemplary actions, by way of a
non-limiting example, are: marking points in space; drawing lines
or images in space; calculating distances in space; drawing a path;
calculating a path's absolute length; saving coordinates of a path;
and so on.
[0678] Other exemplary methods for tracking the tool/object
include: CCD camera/cameras with computational hardware performing
image processing to extract the tool/object location/orientation in
space; tracking devices based on mechanical, magnetic, ultrasonic,
optical, and/or hybrid inertial sensors, or a combination of some
or all of the above; and/or other methods for locating a
tool/object in space.
[0679] An emerging method of tracking objects in space is based on
illuminating objects with coherent light or projecting illumination
patterns onto the objects, and processing a resultant image to
interpret the location and orientation of the objects. The above is
optionally performed in real-time, that is, calculating an image
frame within the time it takes to capture an image frame. Real-time
in this context also means a rate of at least cinematic rate, such
as 24 frames per second or more, although optionally a rate of 2
frames per second or 10 frames per second may be used.
[0680] Example companies which are developing such methods include:
Prime Sense Inc., of 28 Habarzel Street, Tel-Aviv, Israel; and 3DV
Systems, of 3DV Systems, 2 Carmel Street, Industrial Park Building
1, P.O. Box 249, Yokneam, 20692, Israel.
[0681] An example IR tracking company is NaturalPoint Inc, P.O. BOX
2317 Corvallis, Oreg. 97339, USA.
[0682] An example inertial/ultrasonic tracking company is
InterSense Inc., 4 Federal Street, Billerica, Mass. 01821 USA.
[0683] Supporting Software and Communication Devices
[0684] Supporting software and communication devices optionally
process, synchronize between, and transfer data between, the source
of data for display, the display device, and the interface
device.
[0685] The supporting software and communication devices are
responsible for communication and data transfer between other units
of the user interface, such that the presented information includes
original data, input data, and/or interpreted actions produced as a
result of the interface devices, in such a fashion that the
combined data is optionally presented by the display device to the
user or users.
[0686] Optionally, the combined data is presented in real-time on a
still 3D image or a dynamic image.
[0687] Optionally, the image is 2D.
[0688] Optionally the image is holographic.
[0689] Optionally, the supporting software and communication
devices can communicate with other systems such as a robot for
performing tasks in space, for example according to a path or other
indications received from a user via the 3D interface.
[0690] Optionally, the communication transfers data or part of the
data to a remote display device.
[0691] Optionally, the communication transfers data or part of the
data to remote other systems which use the data, allowing
interaction within the "in air" image to be exploited in remote
systems, whether distant or nearby.
[0692] Optionally, data is sent via RF.
[0693] Optionally data is sent via wired physical layers.
[0694] Optionally, two (or more) different users interact with the
same volumetric or 2D "in air" image in the same location using the
same device (system and display).
[0695] Optionally, two (or more) different users interact with the
same volumetric or 2D "in air" image in different locations using
separate yet communicated devices (system and display).
Example Applications
[0696] Some example applications of the user interface of the
invention are described below.
[0697] "In Air" Marking of a Specific Point
[0698] For a projected volumetric image (optionally holographic) a
user points to specific locations. By means of auditory and/or
button clicking indications, the user marks specific points within
the display space. For example, a stylus with a tracking sensor is
used, and specific points are marked by the user in the volumetric
image. The user interface records the spatial position and/or
orientation of the stylus, and saves it in supporting hardware. The
saved points are interpreted to a specific voxel or voxels, and the
display device optionally presents the points in real time on the
volumetric image.
[0699] Optionally, the user interface starts with no initial image
projected in the display space, and only points subsequently marked
by the user/s appear in the display space.
[0700] While the image is projected, the user interface enables
capturing a point in display space by "touching" that specific
point, also termed a voxel, in the image. Optionally, proximity to
the point enables the capture, by way of a predefined "cloud"
surrounding the point, so that a user does not have to touch an
exact location; using a human hand thus has some tolerance for
inaccuracy.
[0701] Optionally, the display device projects specific marked
points, which a user can optionally "grab": once the user points
close enough, the user can press a button on a pointing tool to
"grab" the point.
[0702] Optionally, "marking" an image allows for "tagging" of a
marked area so that in case of a dynamic image, the marked area may
be isolated and movement of the marked area is optionally tracked
over time. An example is tracking mitral valve leaflets, which are
optionally tagged by the interface device. An untagged portion of
the displayed image is optionally removed, and the dynamic valve
leaflets are optionally tracked and studied specifically. Tagging
is optionally performed pre- and post-medical intervention, and
images are optionally overlaid to compare and evaluate efficacy of
the medical intervention. Such tagging is optionally applied to
other static and/or dynamic parts of an image including heart
muscle pre- and post-resynchronization, electrophysiological
pathways pre- and post-ablation, and so on.
[0703] Additional examples of "in air marking" include:
[0704] a tool for human-aided edge correction, optionally following
edge recognition performed by supporting software, which is useful
for medical imaging such as ultrasound-based diagnostics,
interventional cardiology procedures and so on;
[0705] marking a location in an organ;
[0706] marking a location of a tumor;
[0707] marking a location in a space in a body for proposed
implantation of a device, organ, and therapeutic intervention;
and
[0708] marking a position outside of a body, organ, which is
referenced to a fixed point in the space or organ, and which
maintains the relationship during movement of the body or organ or
movement outside the body or organ, such as ECG gating, or
respiration.
[0709] "In air" marking of a specific point enables "closing of the
loop": from eye to hand; from hand to user interface; from user
interface to display; and from display to eye, in an intuitive and
precise manner by a viewer of the spatial image. Later on, a marked
point or area may be transferred to and used by other systems
dealing with the spatial area of interest. Examples include
focusing radiation on a specific area of a tumor, or directing a
robot or tool to perform ablation at a specific point marked on an
organ, and so on.
[0710] Volume and Length Quantification
[0711] For a projected image, for example of bone tissue, the user
optionally marks one end of the bone image, and then the other end
of the bone image. The user interface system indicates the marked
points, and the length of the bone is optionally calculated by a
software module.
[0712] If a length of a path is required, the user optionally marks
more than two points, and the path length is optionally calculated.
Optionally, a continuous path is drawn by the user, and the user
interface system calculates the length of the continuous path.
[0713] In order to quantify a volume, several points on the volume
contour, for example 10 points, are optionally marked by the user,
and the user interface software optionally calculates the volume
between the points. Optionally, the software extrapolates
measurements and displays a continuous volume, close in shape to
the object marked by the points, and calculates that volume.
Optionally the calculated shape is presented in real-time on the
image, allowing the user to perform edge-correction by moving the
calculated shape's edges to the edges of the real shape, allowing
fine-tuning and quantification of the volume.
[0714] Optionally the projected image is projected with a 1:1
scaling. Optionally the projected image is magnified or presented
smaller, optionally per user input. The user interface system
optionally defines the scaling, so measurements can be made on an
object displayed at various scales, and the user interface system
optionally outputs absolute measurements using the scaling
factor.
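Length and path quantification follow directly from the recorded points: sum the straight-line segment lengths between consecutive marks, and correct by the display's scaling factor to obtain absolute units (the 1:1 case uses a scale of 1). A sketch (Python/NumPy):

    import numpy as np

    def path_length(points, scale=1.0):
        """Length of a marked path: the sum of segment lengths between
        consecutive marked points, corrected by the display scale
        (scale > 1 means the image is displayed magnified)."""
        pts = np.asarray(points, dtype=float)
        segments = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        return float(segments.sum()) / scale

    # Two points give a distance; more points give a path length.
    print(path_length([(0, 0, 0), (3, 4, 0)]))             # 5.0
    print(path_length([(0, 0, 0), (3, 4, 0), (3, 4, 2)]))  # 7.0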
[0715] "In air" marking of a specific point enables in-air
quantification of lengths, distances and volumes. The user marks
two points in order for the system to calculate lengths or
distances, or marks a plurality of points for the user interface
system to calculate a volume.
[0716] In the medical field there are accepted normal surface areas
and volumes of different organs, sometimes calculated according to
height, weight, and/or body surface area. The areas or volumes are
expressed in ml, liters, or cm squared or cubed, and are typically
expressed in ranges with standard deviations and as Z values, etc.
Optionally, the user interface system projects the normal areas or
volumes, static or dynamic, stand-alone and/or along with an actual
image of the organ, static or dynamic, taken from a CT, an MRI, or
other such modality for generating actual or calculated images,
such as lung volumes.
[0717] Some example uses for quantification are listed below.
[0718] In the field of cardiology, quantifying an ejection fraction
based on volume of a beating heart.
[0719] In the field of pulmonology, volumetric analyses of lung
function.
[0720] Comparing true volumes of organs to predicted volumes of the
organs, based on standard references.
[0721] In the field of obstetrics, diagnosing fetuses based on area
and/or volume quantification of fetus organs or the fetus body.
[0722] Other fields where volumetric quantification is useful
include Orthopedics, and Oncology, for example for mapping and
measuring tumors.
[0723] A Frame-Like Interface Device
[0724] Users of the user interface, such as physicians, often wish
to view specific planes within a volumetric image. The specific
planes to be viewed can be of various orientations and locations. A
method of selecting a specific plane which is natural for humans is
described with reference to the user interface of the
invention.
[0725] A volumetric image is projected, and a position and
orientation of an interface device optionally defines a plane in
the 3D volumetric image, as described below.
[0726] A plane frame, optionally with a diagonal length about the
length of the longer axis of the image, is used. The frame can be
made as a "wire-frame", that is, an outline, or as a piece of
transparent material, optionally glass or polycarbonate. Markers
may optionally be placed on the frame edges, for example IR LEDs on
the corners. A user literally inserts the frame into the projected
image, and indicates a specific plane in the projected image, or a
frame within the plane. Data included in the specific plane or
frame is processed, and the specific plane or frame may optionally
be projected on the volumetric display and/or on a regular 2D
display.
[0727] Optionally, a continuous movement of the frame in the image
produces a continuous stream of plane images on the same display or
on corresponding 2D display/s.
[0728] Optionally, the volumetric display "crops" the image plane
at limits of the frame.
[0729] Optionally, the frame can be of any size and shape which
defines a plane. Optionally, the frame can be a three dimensional
frame defining a three dimensional shape within the image.
[0730] Optionally, the marking of the plane allows for "tagging" of
the marked plane. Optionally in case of a dynamic projected image,
the marked area is optionally isolated and movement of the marked
area is tracked over time.
[0731] A non-limiting example use of the user interface includes,
when using 3D rotational angiography (3DRA), during a
trans-catheter procedure, a physician optionally choosing a
specific plane for viewing on the 2D display, extracted from the
volumetric image "floating" in front of the physician, and defined
by the frame-like interface.
[0732] In-Air Image Manipulation Interface
[0733] For a presented volumetric image, a user optionally marks a
point on the image or on the image contour, and by a gesture or
some other point marking method, defines a rotation of the image in
a specific direction. Optionally, the user marks two points, and
causes the image to rotate such that the rotation axis is the axis
defined by a line defined by the two points. The above-described
manipulation is optionally performed based, at least partly, on
receipt of the marked points, interpretation of the marked points
by software and/or hardware, defining an image to be projected by
the display, and optionally based on a specific "command", such as
"rotate", provided by the user. The corresponding images are
optionally rendered and presented via the display. Optionally this
action is performed in real-time.
[0734] Optionally, the user defines a plane or a frame by marking
three or more points in a displayed object, and "slices" the object
so that only the plane or the frame is projected. Optionally, the
user chooses to crop the image on either side of the plane, and/or
outside the frame. Optionally, the user can repeat the cropping
action, thus defining a series of "cropping" planes.
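Slicing or cropping by a user-defined plane comes down to a signed-distance test: voxels on one side of the plane through the three marked points are kept, and the rest are dropped. A sketch (Python/NumPy):

    import numpy as np

    def crop_by_plane(voxels, p1, p2, p3, keep_positive=True):
        """Crop image voxels on one side of the plane through p1, p2, p3.

        The plane normal is the cross product of two in-plane edges; the
        sign of (v - p1) . normal says which side voxel v lies on.
        """
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        voxels = np.asarray(voxels, dtype=float)
        normal = np.cross(p2 - p1, p3 - p1)
        side = (voxels - p1) @ normal
        keep = side >= 0 if keep_positive else side <= 0
        return voxels[keep]

    vox = np.array([[0, 0, 0.2], [0, 0, -0.2]])
    # Plane z = 0 through three marked points; keep the upper half.
    print(crop_by_plane(vox, (0, 0, 0), (1, 0, 0), (0, 1, 0)))
    # [[0.  0.  0.2]]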
[0735] Optionally, per an orientation of the interface device, a
real-time line is projected on the image, and the image is
optionally cropped according to the "line's" path.
[0736] Optionally, per the orientation of the interface device, a
real-time line is defined by the user, optionally acting as a
symmetry axis around which the image can be rotated.
[0737] Optionally, "in air" buttons with indications such as
zoom-in and/or zoom-out are displayed, and the user can "touch" the
buttons with the interface device.
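Touching an "in air" button can be reduced to a point-in-volume test of the tracked device tip against the button's display-space extent. The sketch below models each button as an axis-aligned box, an assumption made here only for brevity.

```python
# Sketch: hit-testing the tracked tip of an interface device against
# an in-air button modeled as an axis-aligned box.
import numpy as np

def touches_button(tip, box_min, box_max):
    """True if the device tip lies inside the button's bounding box."""
    tip = np.asarray(tip, float)
    return bool(np.all(tip >= box_min) and np.all(tip <= box_max))

# Illustrative zoom-in button occupying a small box in display space:
zoom_in = (np.array([0.10, 0.10, 0.00]), np.array([0.14, 0.14, 0.02]))
print(touches_button([0.12, 0.11, 0.01], *zoom_in))  # True: button pressed
```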
[0738] Optionally, the user uses the frame-like interface to
"capture" a portion of the image, optionally when pressing a button
or some such form of command. In such a case, the user is
optionally able to move the whole virtual image with his hand and/or
with the frame, "as if" the virtual image is physically connected
to the frame. The above-described capability is similar to moving
an object which is connected to a stick, and the user optionally
moves the object by moving the stick, like moving a popsicle by its
stick.
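The "capture" behavior can be sketched as storing, at capture time, the rigid offset between the frame pose and the image pose, and thereafter composing the live frame pose with that stored offset. The 4x4 homogeneous-matrix formulation below is one illustrative way to express this.

```python
# Sketch: rigidly attaching the displayed image to the frame while it
# is "captured", using 4x4 homogeneous pose matrices.
import numpy as np

def capture(frame_pose, image_pose):
    """Return the fixed frame-to-image offset at capture time."""
    return np.linalg.inv(frame_pose) @ image_pose

def follow(frame_pose, offset):
    """New image pose: the live frame pose composed with the offset."""
    return frame_pose @ offset

frame0 = np.eye(4)
image0 = np.eye(4); image0[:3, 3] = [0.0, 0.0, 0.2]  # image 20 cm above frame
offset = capture(frame0, image0)

frame1 = np.eye(4); frame1[:3, 3] = [0.1, 0.0, 0.0]  # user moves the frame
print(follow(frame1, offset)[:3, 3])                 # image follows: [0.1 0. 0.2]
```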
[0739] "In Air" Navigation
[0740] The user interface system optionally receives a location of
a location indicator and presents the location in the display
space. The system optionally presents the image within which the
tool is navigating, for example an image from CT data or a
real-time ultrasound image. The system optionally super-imposes an
indication of the location of the tool on the image, optionally
after scale correlation, optionally from a variety of sources, such
as a unit for tracking the tool. A user then optionally visually
checks whether the tool is in the right location or on the right
route. If the tool location is not right, the user can virtually
"touch" the tool location indicator within the projected volumetric
image, and drag the tool location indicator to the preferred
location and/or route. The new coordinates of the location and/or
route are recorded by the interface tool and optionally provided to
a tool navigation unit.
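The scale correlation mentioned above amounts to mapping tracker-space coordinates into display-space coordinates. A minimal sketch, assuming a pre-calibrated similarity transform (scale, rotation, translation), follows; the calibration values shown are placeholders.

```python
# Sketch: mapping a tracked tool location into display space via a
# calibrated similarity transform (scale s, rotation R, translation t).
import numpy as np

def tracker_to_display(p_tracker, s, R, t):
    """Map a tracker-space point into display-space coordinates."""
    return s * (np.asarray(R, float) @ np.asarray(p_tracker, float)) + t

s = 0.001                      # e.g. tracker in millimetres, display in metres
R = np.eye(3)                  # calibrated rotation (identity for illustration)
t = np.array([0.0, 0.0, 0.1])  # calibrated offset between the two origins
print(tracker_to_display([120.0, 40.0, 0.0], s, R, t))  # [0.12 0.04 0.1]
```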
[0741] Optionally, the system which controls location indicator
navigation corrects the actual location indicator movement
according to the user's indication.
[0742] Optionally, a second doctor and/or user manually moves an
interface device according to a first user's indications. As the
interface device is optionally presented continuously on the image,
a control loop is "closed" via visual control by the user.
[0743] Optionally, the user draws a path in display space, using an
interface device. The system optionally records coordinates of
every point in the path. The path coordinates can be used by a
separate machine, such as a robot, to control the machine to follow
the drawn path.
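Before a separate machine such as a robot follows the drawn path, the recorded points would typically be resampled into evenly spaced waypoints. The sketch below shows one such resampling step; it is illustrative and not part of the claimed interface.

```python
# Sketch: resampling a drawn path into evenly spaced waypoints for a
# follower machine such as a robot.
import numpy as np

def resample_path(points, step):
    """Resample a 3D polyline into waypoints roughly `step` apart."""
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    dist = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    targets = np.arange(0.0, dist[-1], step)
    return np.column_stack(
        [np.interp(targets, dist, pts[:, i]) for i in range(3)])

drawn = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.05, 0.1, 0.0)]
print(resample_path(drawn, step=0.02))   # waypoints ~2 cm apart
```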
[0744] Optionally, the machine is monitored by a physician and/or
automated machinery, optionally using the volumetric display to
project the image of the machine, and real-time corrections of the
path can be made.
[0745] 3D navigation has become an important application in
electrophysiology-based cardiac procedures. "In air" navigation
optionally allows a user to view a static or a dynamic image, as
well as a location indicator and/or a path super-imposed on the
image, optionally in real-time. As another example, electromagnetic
3D navigation is also implemented in pulmonology/bronchoscopy to
provide minimally invasive access to lesions deep in the lungs as
well as in mediastinal lymph nodes.
[0746] The above-mentioned machine which is tracked may also be a
tool or an implantable device or therapy, such as a drug; a stent;
a catheter; a valve; a combination of permanent or temporary tools;
a drug-eluting stent; chemotherapy attached to embolization
particles; a device or a sensor which is affected by force or
energy outside the body or organ such as radiofrequency or acoustic
energy, ultrasound or HIFU; radiofrequency catheters for ablation;
and catheters for cryo-ablation.
[0747] Remote Surgery
[0748] The above-mentioned manipulation of an image of a tool in
display space, causing remote robotic manipulation of a real tool at
a remote location, enables remote surgery and/or remote navigation
through a body.
[0749] Optionally, a user manipulates a real tool in a display
space of a first volumetric display, which also displays a human
body. The manipulation is tracked, and real manipulation is
effected onto the real tool at the remote location. Changes to a
body at the remote location are picked up by a three dimensional
imaging device, and sent to the first volumetric display. The user
thus sees images of results of the real manipulation of the real
tool.
[0750] Optionally, the user manipulates an image of a tool, that
is, a virtual tool, in a display space of a first volumetric
display, which also displays a human body. The manipulation is
tracked, and real manipulation is effected onto the real tool at
the remote location. Changes to a body at the remote location, and
also changes to the real tool, are picked up by a three dimensional
imaging device, and sent to the first volumetric display. The user
thus sees images of results of the real manipulation of the real
tool on the image of the body, and the image of the tool.
[0751] "In Air" Drawing
[0752] The present invention, in some embodiments thereof, provides
a tool for drawing a point, a line, and/or a path within a
volumetrically projected image, and enables a user to see the
drawing in real-time. The "in air" drawing optionally provides a
collaboration tool between users/physicians, allowing drawing of a
point or marking of a space to support a discussion of a specific
anatomy or area of interest within the image.
[0753] Optionally, the "in air" drawing is computationally
translated to coordinates of the display space, optionally in
real-time. Optionally, the "in air" drawing is computationally
translated to coordinates of some other space, optionally to be
used by other instruments such as a robot.
[0754] A non-limiting example of medical use for "in-air" drawing
is the real-time placement of specific markers for guidance
applications.
[0755] Localization of a Virtual Object Inside a Displayed
Image
[0756] The present invention, in some embodiments thereof, provides
a tool for combining an image of a virtual object in a displayed
image produced from data from an input source. One use for
localization of a virtual object inside a displayed image is
optionally simulating equipment selection through displaying
volumetric images.
[0757] Various tools/objects can be modeled in 3D, the modeling
optionally including dynamic operation.
[0758] A user optionally picks a virtual tool/object, and moves the
virtual tool/object using a 3D interface within the display space,
locating the tool/object in a specific location and orientation.
For example, an image of a virtual cardiac valve is optionally
generated, similar in size and shape to a specific valve. The user
optionally drags the virtual valve onto an image of a patient's
heart. The user optionally marks interface points on the image of
the heart and corresponding points on the virtual valve. A display
computing unit optionally computes a combined image of the heart
and the virtual valve, and presents the combination to the user.
The user optionally assesses whether the valve is in the right location
and orientation and, if needed, optionally performs another
measurement/indication. The user optionally also assesses if the
size of the valve fits.
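Computing the combined image from the marked interface points can be done with a standard rigid-registration step, for example the Kabsch algorithm on the corresponding point pairs, sketched below under that assumption; the display computing unit is of course not limited to this method.

```python
# Sketch: rigid transform placing the virtual valve at the interface
# points marked on the heart image (Kabsch algorithm).
import numpy as np

def kabsch(src, dst):
    """Rigid (R, t) minimizing ||R @ src_i + t - dst_i|| over pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

valve_pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])   # on the virtual valve
heart_pts = valve_pts + np.array([0.2, 0.1, 0.0])           # marked on the heart
R, t = kabsch(valve_pts, heart_pts)
print(np.allclose(R @ valve_pts.T + t[:, None], heart_pts.T))  # True
```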
[0759] Optionally, the virtual valve may be dynamic, and
superimposed over a dynamic or static image of the heart.
[0760] Optionally, simulation of blood flow and tissue movement is
predicted, calculated and displayed.
[0761] Optionally, the user calls up for display, within the
display space, instances from a library of virtual images, each
representing a specific actual tool or implant. In cases of
expanding tools, such as cardiac stents, the library optionally
includes the tools in unexpanded form, in expanded form, and a
dynamic representation of the expansion process.
[0762] Other non-limiting examples of uses for localization of a
virtual object inside a displayed image include valve localization;
fitting orthopedic prostheses; fitting intra-cardiac and
extra-cardiac prostheses, devices, implantable devices, stents,
aortic grafts, stent grafts; and intra-ventricular devices such as
ventricular assist devices.
[0763] Localization of an Actual Object Inside an Image
[0764] The present invention, in some embodiments thereof, provides
a tool for combining real objects with an image produced from data
from an input source. The combining optionally serves for real
equipment selection, such as cardiac valve selection.
[0765] A user optionally places a real object, optionally a real
object which is later to be inserted into a body, into a displayed
image. The real object may be inserted by hand, and/or may be
inserted using a tool for holding the real object. The user
optionally locates the object within a static or a dynamic image of
an organ which is projected in the volumetric "in-air" display. The
user/physician thus optionally assesses the way in which he wishes
to insert the real object into the body, the level of physical
matching of the real object with body organs, and so on.
[0766] Another non-limiting example of a use for localization of an
actual object inside an image is valve localization.
[0767] Interactive Gaming
[0768] The insertion of a real object into the display space of a
volumetric display enables use of the user interface system for
gaming.
[0769] By way of a non-limiting example, a user optionally wields a
game prop, such as a sword, tool, or some such, in a game with
virtual reality display in three dimensions on the volumetric
display.
[0770] A non-limiting example of a type of game which is supported
is virtual sword fighting, with two or more users at two or more
different volumetric displays connected by communications, as
described above with reference to FIG. 15.
[0771] An interesting game which is especially supported by the
volumetric display of the present invention is virtual pinata. A
first user wields a "stick" within the display space of the
volumetric display, and does not see a virtual pinata in the
display space. Other users see the virtual pinata in the display
space, and see the "stick" which the first user wields. The virtual
pinata game may be played at one volumetric display with two or
more users around the game, or at two or more volumetric
displays.
[0772] Another interesting game is the game of "battleship" or
"battleships", where each user sees only their own battleships, on
the same volumetric display.
[0773] Another interesting class of game, based on the ability to
insert a real object into the display space, includes games of
hand-eye coordination such as pick-up-sticks, and Jenga. The games
optionally use virtual game pieces displayed in three dimensions in
the volumetric display, and users "grab" the game pieces while
reaching into the display space. A tracking device optionally
measures the gap between fingers to determine when a user has hold
of a game piece.
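Measuring the gap between fingers to decide when a user has hold of a game piece can be sketched as a distance threshold with hysteresis, so the grip state does not flicker when the gap hovers near a single threshold. The thresholds and names below are illustrative assumptions.

```python
# Sketch: grab detection from the tracked gap between two fingertips,
# with hysteresis between the "close" and "open" thresholds.
import numpy as np

class GrabDetector:
    def __init__(self, close_mm=25.0, open_mm=40.0):
        self.close_mm, self.open_mm = close_mm, open_mm
        self.grabbing = False

    def update(self, thumb, index):
        gap = np.linalg.norm(np.asarray(thumb, float) - np.asarray(index, float))
        if self.grabbing and gap > self.open_mm:
            self.grabbing = False            # fingers opened: release the piece
        elif not self.grabbing and gap < self.close_mm:
            self.grabbing = True             # fingers closed: grab the piece
        return self.grabbing

d = GrabDetector()
print(d.update([0, 0, 0], [30, 0, 0]))   # 30 mm gap: still open -> False
print(d.update([0, 0, 0], [20, 0, 0]))   # 20 mm gap: grabbed -> True
print(d.update([0, 0, 0], [30, 0, 0]))   # 30 mm gap: hysteresis holds -> True
```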
GENERAL COMMENTS
[0774] It is expected that during the life of a patent maturing
from this application many relevant spatial light modulators,
hologram generating units, and volumetric displays will be
developed and the scope of the corresponding terms is intended to
include all such new technologies a priori.
[0775] The terms "imaging" and "projecting" are used herein
interchangeably.
[0776] The term "exemplary" is used herein to mean "serving as an
example, instance, or illustration".
[0777] The terms "comprises", "comprising", "includes",
"including", "having" and their conjugates mean "including but not
limited to".
[0778] The term "consisting of" means "including and limited
to".
[0779] The term "consisting essentially of" means that the
composition, method or structure may include additional
ingredients, steps and/or parts, but only if the additional
ingredients, steps and/or parts do not materially alter the basic
and novel characteristics of the claimed composition, method or
structure.
[0780] The word "optionally" is used herein to mean "is provided in
some embodiments and not provided in other embodiments". Any
particular embodiment of the invention may include a plurality of
"optional" features unless such features conflict.
[0781] As used herein, the singular forms "a", "an" and "the" are
used to denote "at least one", thus explicitly disclosing both
"one" and "more than one", unless the context clearly dictates
otherwise. For example, the term "a mirror" may include a plurality
of mirrors.
[0782] As used herein the term "about" refers to ±10%.
[0783] Ranges are provided herein interchangeably in two equivalent
formats: "from X to Y" and "between X and Y", and in both cases
cover X, Y, and any number between them.
[0784] It is appreciated that certain features of the invention,
which are, for clarity, described in the context of separate
embodiments, may also be provided in combination in a single
embodiment. Conversely, various features of the invention, which
are, for brevity, described in the context of a single embodiment,
may also be provided separately or in any suitable subcombination
or as suitable in any other described embodiment of the invention.
Certain features described in the context of various embodiments
are not to be considered essential features of those embodiments,
unless the embodiment is inoperative without those elements.
[0785] Although the invention has been described in conjunction
with specific embodiments thereof, it is evident that many
alternatives, modifications and variations will be apparent to
those skilled in the art. Accordingly, it is intended to embrace
all such alternatives, modifications and variations that fall
within the spirit and broad scope of the appended claims.
[0786] All publications, patents and patent applications mentioned
in this specification are herein incorporated in their entirety by
reference into the specification, to the same extent as if each
individual publication, patent or patent application was
specifically and individually indicated to be incorporated herein
by reference. In addition, citation or identification of any
reference in this application shall not be construed as an
admission that such reference is available as prior art to the
present invention. To the extent that section headings are used,
they should not be construed as necessarily limiting.
* * * * *