U.S. patent application number 15/579295, for a video display device and control method, was published by the patent office on 2018-06-28.
The applicant listed for this patent is MAXELL, LTD. The invention is credited to Takahiro MATSUDA, Seiji MURATA, Satoshi OUCHI, and Yoshiho SEO.
Application Number: 20180184042 / 15/579295
Family ID: 57440838
Publication Date: 2018-06-28

United States Patent Application 20180184042
Kind Code: A1
MATSUDA; Takahiro; et al.
June 28, 2018
VIDEO DISPLAY DEVICE AND CONTROL METHOD
Abstract
Provided is a video display device wearable on the head of a
user, wherein the video display device includes a video display
unit capable of switching between two or more display methods, a control
unit for indicating a display method to the video display unit, a
first detection unit for detecting the motion of the head of a
user, a second detection unit for detecting the motion of the point
of view of the user, and a motion determination unit for
determining the motion state of the device user by using the output
from the first detection unit and the output from the second
detection unit. The control unit indicates a change of display
methods to the video display unit in accordance with the
determination result of the motion determination unit.
Inventors: MATSUDA; Takahiro; (Tokyo, JP); OUCHI; Satoshi; (Tokyo, JP); SEO; Yoshiho; (Tokyo, JP); MURATA; Seiji; (Tokyo, JP)
Applicant: MAXELL, LTD. (Kyoto, JP)
Family ID: 57440838
Appl. No.: 15/579295
Filed: June 5, 2015
PCT Filed: June 5, 2015
PCT No.: PCT/JP2015/066383
371 Date: December 4, 2017
Current U.S. Class: 1/1
Current CPC Class: G02B 2027/014; G09G 2320/0242; G09G 2320/106; G09G 2310/0235; G02B 2027/0118; G06F 3/147; H04N 9/3111; G02B 27/0172; G09G 3/3413; H04N 9/3194; G02B 2027/0138; G09G 2340/0435; G09G 3/001; G09G 2370/16; G02B 2027/0187; H04N 9/3182; G09G 2360/02; H04N 7/0132 (all 20130101)
International Class: H04N 7/01 (20060101); H04N 9/31 (20060101); G02B 27/01 (20060101)
Claims
1. A video display device wearable on a head of a user, comprising:
a video display unit capable of switching a display method between
two or more display methods; a control unit that designates the
display method for the video display unit; a first detection unit
that detects a movement of the head of the user; a second detection
unit that detects a movement of a viewpoint of the user; and a
motion determination unit that determines a motional state of the
user of the device based on an output from the first detection unit
and an output from the second detection unit, wherein the control
unit instructs the video display unit to change the display method
according to a result of the determination made by the motion
determination unit.
2. The video display device according to claim 1, wherein the video
display unit is a display using a liquid crystal element or a
mirror array element, and the two or more display methods are
different from each other in at least one of an interval for
updating information on pixels of the liquid crystal element or the
mirror array element and an interval for driving a light
source.
3. The video display device according to claim 1, comprising video
processing means capable of changing at least one of video
parameters which are contrast, sharpness, saturation, hue, and
image brightness, wherein the two or more display methods are
methods in which at least one of the contrast, the sharpness, the
saturation, the hue, and the image brightness is made different by
the video processing means.
4. The video display device according to claim 2, comprising video
processing means capable of changing at least one of video
parameters which are contrast, sharpness, saturation, hue, and
image brightness, wherein the two or more display methods are
methods in which at least one of the contrast, the sharpness, the
saturation, the hue, and the image brightness is made different by
the video processing means.
5. The video display device according to claim 2, wherein based on
the output from the first detection unit and the output from the
second detection unit, the motion determination unit determines, as
the motional state of the user, whether there is a movement of the
head of the user, whether there is a movement of a line of sight of
the user, whether the viewpoint of the user is on a video being
displayed by the video display unit, and whether the movement of
the head and the movement of the line of sight match each
other.
6. The video display device according to claim 5, wherein the larger the movement of the line of sight of the user is relative to the movement of the head of the user, the more the control unit extends at least one of the interval for updating information on the pixels of the liquid crystal element or the mirror array element and the interval for driving the light source for the video display unit, and wherein the control unit lowers at least one of the contrast and the brightness of the video when the viewpoint of the user is not on the video being displayed by the video display unit.
7. The video display device according to claim 1, further
comprising: a storage unit that stores a change of any of the video
parameters commanded by the control unit; and a frequency
determination unit that determines content of the change stored in
the storage unit and the number of times the change has been made
within a prescribed period of time, wherein the frequency
determination unit notifies a video information source of the
change of the video parameter that has been changed many times.
8. The video display device according to claim 5, wherein the video
display unit is a field sequential display, and can perform black
and white display by substantially synchronizing intervals of light
emissions of all of light sources for R, G, and B, and when the
motion determination unit determines that there is at least one of
the movement of the head of the user and the movement of the line
of sight of the user, the control unit instructs the video display
unit to perform the black and white display.
9. The video display device according to claim 5, wherein the
motion determination unit transforms an output vector from the
first detection unit and an output vector from the second detection
unit into two-dimensional vectors by orthographically projecting
the output vectors onto a video display plane, and determines that
the movement of the head and the movement of the line of sight
match in direction when an angle between the two transformed
two-dimensional vectors falls below a prescribed value.
10. The video display device according to claim 1, wherein the
first detection unit includes one of a gyro sensor, an acceleration
sensor, a geomagnetic sensor, a GPS, and a camera.
11. The video display device according to claim 1, wherein the
control unit acquires information on an amount of power remaining
in a power source and limits the change of the display method of
the video display unit.
12. A method for displaying a video on a video display unit
wearable on a head of a user and capable of switching a display
method between two or more display methods, the method comprising:
causing a first detection unit to detect a movement of the head of
the user; causing a second detection unit to detect a movement of a
viewpoint of the user; causing a motion determination unit to
determine a motional state of the user of the device based on an
output from the first detection unit and an output from the second
detection unit; and instructing the video display unit to change
the display method according to a result of the determination made
by the motion determination unit.
13. A method for detecting a movement of a line of sight or a
viewpoint, comprising: detecting a movement of a line of sight
using a first sensor element; and in response to detection of the
movement of the line of sight by the first sensor element,
switching a status of a second sensor element between an operation
state capable of detecting the line of sight and an idle state, the
second sensor element being capable of detecting a movement of the
line of sight more precisely than the first sensor element.
14. A method for viewpoint deviation correction performed by a
video display device wearable on a head of a user, comprising:
displaying a prescribed point on the video display device;
prompting the user to gaze at the prescribed point; determining a
position of a point of gaze of the user using a viewpoint detection
sensor; setting the thus-determined position of the point of gaze
as a reference point; and based on a distance between the reference
point and the viewpoint of the user detected by the viewpoint
detection sensor, determining whether the viewpoint of the user is
on a video being displayed.
15. A line-of-sight detector comprising: at least two light
emitters that irradiate different positions on an eye with light;
two light receivers that are located at different positions and
receive light reflected by the eye irradiated with the light; a
movement detector that detects a movement of an eyeball based on
outputs from the two light receivers; and a camera that captures an
image of at least one eye and operates while the movement detector
is detecting the movement of the eyeball.
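The direction-matching determination recited in claim 9 above can be illustrated as follows. This is a minimal sketch only: it assumes the video display plane coincides with the sensors' x-y plane (so orthographic projection reduces to dropping the z component) and uses a hypothetical 10-degree value as the prescribed threshold; the application fixes neither choice.

```python
import math

def directions_match(head_vec, gaze_vec, threshold_deg=10.0):
    """Orthographically project two 3-D motion vectors onto the video
    display plane (assumed here to be the x-y plane) and report whether
    the angle between the projections falls below a prescribed value."""
    hx, hy = head_vec[0], head_vec[1]   # projection: drop the z component
    gx, gy = gaze_vec[0], gaze_vec[1]
    nh, ng = math.hypot(hx, hy), math.hypot(gx, gy)
    if nh == 0.0 or ng == 0.0:
        return False  # no in-plane movement to compare
    cos_a = max(-1.0, min(1.0, (hx * gx + hy * gy) / (nh * ng)))
    return math.degrees(math.acos(cos_a)) < threshold_deg

print(directions_match((1.0, 0.05, 0.3), (1.0, 0.0, -0.2)))  # -> True
print(directions_match((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))    # -> False
```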
Description
TECHNICAL FIELD
[0001] The invention of the present application relates to a video
display device which is wearable on the head of a user and displays
a video before the eyes of the user, and also to a method of
controlling the same.
BACKGROUND ART
[0002] User-wearable video display devices are getting lighter in weight and smaller in size and are anticipated to become less cumbersome to device users. Video display devices of this type are advantageous in that they allow a user to obtain information with both hands free, and are thus expected to be used for various purposes.
CITATION LIST
Patent Literature
[0003] PATENT LITERATURE 1: JP5,228,305B
[0004] For example, as in the above-given patent literature, devices have been proposed that use a glasses-like or head-wearable mount unit and start and stop displaying a video on a display unit placed right before the eyes of the user.
SUMMARY OF INVENTION
Technical Problem
[0005] With a wearable video display device, it is hard for the device user to look away from a video, so it is desirable to lessen, while continuing to display the video, any discomfort that may result from instability in the video or the like. However, depending on the purpose of the device, there are cases where it is important to keep displaying a video, and ending the display at the device's discretion as in Patent Literature 1 may work against the device user's interests in such a case.
[0006] A conceivable cause for the instability in a video may lie
in the display scheme employed by the video display device. For
example, in a case of a video display device that employs liquid
crystals, the instability may be a flicker on the screen caused by
update of video information on the screen, and in a case of a video
display device employing a field sequential scheme (color
time-division scheme), the instability may be color breakup and the
like. With a wearable video display device, the device user may
sense such video instabilities as a greater discomfort when moving.
Further, a display method employed to reduce instability in a video in turn increases power consumption.
Solution to Problem
[0007] To solve the above problem, a video display device wearable
on a head of a user includes a video display unit capable of
switching a display method between two or more display methods, a
control unit that instructs the video display unit which of the
display methods to employ, a first detection unit that detects a
movement of the head of the user, a second detection unit that
detects a movement of a viewpoint of the user, and a motion
determination unit that determines a motional state of the user of
the device based on an output from the first detection unit and an
output from the second detection unit. The control unit instructs
the video display unit to change the display method according to a
result of the determination made by the motion determination
unit.
Advantageous Effects of Invention
[0008] The present invention can achieve, with low power
consumption, reduction in a discomfort that the user of a wearable
video display device may feel from a video.
[0009] The other objectives, characteristics, and advantages of the
present invention will become clear from the following descriptions
of embodiments of the present invention based on the accompanying
drawings.
BRIEF DESCRIPTION OF DRAWINGS
[0010] FIG. 1A is a block diagram of a video display device 10
according to Embodiment 1.
[0011] FIG. 1B is a diagram illustrating an example of the outer
appearance of the video display device 10.
[0012] FIG. 1C is a diagram illustrating the video display device
10 shown in FIG. 1B being worn.
[0013] FIG. 1D is a diagram illustrating another example of the
outer appearance of the video display device 10.
[0014] FIG. 1E is a diagram illustrating the video display device
10 shown in FIG. 1D being worn.
[0015] FIG. 1F is a diagram illustrating yet another example of the
video display device 10 being worn.
[0016] FIG. 2 is a block diagram illustrating an embodiment of a
video display unit 1001 in the video display device 10.
[0017] FIG. 3 is a circuitry diagram illustrating an embodiment of
a light source element 2003 in the video display unit 1001.
[0018] FIG. 4A is a timing chart illustrating an example of a
normal-speed display operation of the video display unit 1001 in
the video display device 10.
[0019] FIG. 4B is a timing chart illustrating an example of a
double-speed display operation of the video display unit 1001 in
the video display device 10.
[0020] FIG. 4C is a timing chart illustrating an example of a
double-speed operation of the video display unit 1001 in the video
display device 10, in which frame interpolation is performed.
[0021] FIG. 4D is a timing chart illustrating an example of a
triple-speed display operation of the video display unit 1001 in
the video display device 10.
[0022] FIG. 5 is a table illustrating determination processing
performed by a motion determination processing unit 1006.
[0023] FIG. 6 is a flowchart of determination processing performed
by a control unit 1003 and the motion determination processing unit
1006.
[0024] FIG. 7A is an example of the surroundings.
[0025] FIG. 7B is an example of a virtual video displayed by the
video display device 10.
[0026] FIG. 7C is an example of how a virtual video is superimposed
over the surroundings.
[0027] FIG. 8A is an example of a video displayed by the video
display device 10.
[0028] FIG. 8B is an example of a displayed video the contrast of
which has been lowered by a video processing unit 1008.
[0029] FIG. 8C is an example of a displayed video the sharpness of
which has been lowered by the video processing unit 1008.
[0030] FIG. 9A is an example of a video targeted by a video
determination processing unit 1009.
[0031] FIG. 9B is another example of a video targeted by the video
determination processing unit 1009.
[0032] FIG. 10A is a timing chart illustrating an example of a
light source control stop operation of the video display unit in
the video display device 10.
[0033] FIG. 10B is a timing chart illustrating an example of a
ferroelectric liquid crystal update stop operation of the video
display unit in the video display device 10.
[0034] FIG. 11A is a diagram illustrating an embodiment of a second
sensor.
[0035] FIG. 11B is a diagram illustrating the embodiment of the
second sensor in detail.
[0036] FIG. 12A is an example of a video displayed for
initialization of the video display device.
[0037] FIG. 12B is a diagram illustrating a range in which to
display a video for initialization of the video display device.
[0038] FIG. 13 is a block diagram illustrating a video display
device 130 according to Embodiment 2.
[0039] FIG. 14 is a block diagram illustrating a video display
device 140 according to Embodiment 2.
[0040] FIG. 15 is a block diagram illustrating a video display
device 150 according to Embodiment 2.
[0041] FIG. 16 is a block diagram illustrating a video display
device 160 according to Embodiment 3.
[0042] FIG. 17 is a block diagram illustrating an example of how
video display devices according to Embodiment 3 are controlled.
DESCRIPTION OF EMBODIMENTS
[0043] Embodiments of the present invention are described below
with reference to the drawings.
Embodiment 1
[0044] A description is given of an embodiment of the present
invention based on the accompanying drawings.
[0045] 1. Outline of the Device
[0046] FIG. 1A is a block diagram illustrating a video display
device which is wearable on a user and displays a video before the
eyes of the user.
[0047] A video display device 10 includes a video display unit
1001, a display control unit 1002, a control unit 1003, a first
sensor 1004, a second sensor 1005, a motion determination
processing unit 1006, a video information source 1007, a video
processing unit 1008, a video determination processing unit 1009, a
storage unit 1010, a frequency determination processing unit 1011,
and a power supply unit 1012.
[0048] The video display device 10 displays a video by transmitting
video information acquired from the video information source 1007
to the video display unit 1001 via the video processing unit 1008.
A representative example of the video display unit 1001 is a
display using a liquid crystal element or a mirror array
element.
[0049] The video information source 1007 selects appropriate video information data stored in a storage device (not shown), performs processing such as decoding or decryption on the data if necessary, and transmits the video information to the video processing unit 1008. The video information source 1007 may transmit moving images chronologically, or may transmit still images successively.
[0050] The control unit 1003 is connected to a controller 1020
outside the video display device 10 in a wired or wireless manner.
By operating the controller 1020, a device user 20 can turn on and
off the video display device 10 or make various video-related
settings. The controller 1020 may be a controller exclusive to the
video display device 10, or a smartphone in which a particular
application program is installed to enable the smartphone to be
used a controller. The power supply unit 1012 is provided with a
power on/off switch besides the one on the controller.
[0051] The display control unit 1002, the control unit 1003, the
motion determination processing unit 1006, the video information
source 1007, the video processing unit 1008, the video
determination processing unit 1009, and the frequency determination
processing unit 1011 are mounted on the video display device 10 as
independent pieces of hardware. Alternatively, they may be
implemented by one or more arithmetic processors or microprocessors
and software or firmware. They may be implemented as a functional
block in part of an integrated circuit, or implemented by a
programmable logic device such as an FPGA (Field-Programmable Gate
Array).
[0052] The storage unit 1010 also does not need to be implemented
as a separate component, and may be implemented as a functional
block in part of the integrated circuit.
[0053] FIG. 1B is a diagram illustrating an example of the outer
appearance of the video display device 10. Main components are
housed in the junction between a lens and a temple, and a video is
projected onto the lens parts which are half mirrors. The
controller 1020 and the main body of the device are connected to
each other with a cable.
[0054] FIG. 1C is a diagram illustrating the video display device
10 of FIG. 1B being worn.
[0055] FIG. 1D is a diagram illustrating another example of the
outer appearance of the video display device 10. Main components
are housed in the junction between a lens and a temple, and a prism
serves as a projection unit (2007 in FIG. 2) of the video display
unit 1001.
[0056] FIG. 1E is a diagram illustrating the video display device
10 of FIG. 1D being worn.
[0057] FIG. 1F is a diagram illustrating yet another example of the
outer appearance of the video display device 10. A prism before an
eye serves as a projection unit (2007 in FIG. 2) of the video
display unit 1001, and other components are housed separately in a
helmet and in a temple.
[0058] 2. Displaying a Video
[0059] FIG. 2 illustrates an example configuration of the video
display unit 1001. The video display unit 1001 includes a video
signal processing unit 2001, a light source element power supply
control unit 2002, a light source element 2003, a light source
driver 2004, a modulator 2005, a modulator driver 2006, a
projection unit 2007, and a settings control unit 2008.
[0060] Video information from the video processing unit 1008 is
sent to the video signal processing unit 2001, and the video signal
processing unit 2001 determines, for the received video
information, the intensity of the light source, timing to drive the
light source, and a modulator driving pattern.
[0061] Information on the intensity of the light source is
transmitted to the light source element power supply control unit
2002. The light source element power supply control unit 2002
controls voltage to supply to the light source element 2003
according to the intensity information received.
[0062] The timing to drive the light source is sent to the light
source driver 2004. The light source driver 2004 controls the light
source element 2003 according to the timing to drive the light
source received.
[0063] The light source element power supply control unit 2002 and
the light source driver 2004 may be mounted on the same
element.
[0064] FIG. 3 is a diagram illustrating the configurations of the
light source element power supply control unit 2002, the light
source element 2003, and the light source driver 2004 in detail.
The light source element 2003 is formed by LEDs of the three
primary colors: a red LED 3001, a green LED 3002, and a blue LED
3003. The three LEDs are connected in series with current limiting
resistors 3004, 3005, and 3006, respectively, and receive
potentials VLEDr, VLEDg, and VLEDb, respectively, from the light
source element power supply control unit 2002. The potentials
VLEDr, VLEDg, and VLEDb can be set to any values by the light
source element power supply control unit 2002, and thereby light
emissions by the red LED 3001, the green LED 3002, and the blue LED
3003 can be controlled. The terminals of the current limiting
resistors 3004, 3005, and 3006 opposite from the LEDs are
connected to the light source driver 2004. By changing the
potentials at terminals CTRLr, CTRLg, and CTRLb, the light source
driver 2004 controls the amounts of light emission, the durations
of light emission, and the like of the red LED 3001, the green LED
3002, and the blue LED 3003, respectively, and the modulator 2005
thus performs modulation for all the pixels. The display control
unit 1002 sends the settings control unit 2008 of the video display
unit 1001 control signals specifying the intensities of the light
sources.
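The drive current that each supply potential and its current limiting resistor establish follows from Ohm's law. The sketch below is an illustration with hypothetical component values; the application does not specify supply voltages, LED forward voltages, or resistances.

```python
def led_current_ma(v_supply, v_forward, r_ohm):
    """Drive current (mA) of an LED fed through a current limiting
    resistor: I = (V_supply - V_forward) / R, zero below threshold."""
    return max(0.0, (v_supply - v_forward) / r_ohm) * 1000.0

# Hypothetical values: VLEDr = 3.3 V, red-LED forward drop ~2.0 V, R = 65 ohms
print(round(led_current_ma(3.3, 2.0, 65.0), 1))  # -> 20.0 (mA)
```

Raising a potential such as VLEDr, or switching the driver-side terminal potential (CTRLr), changes this current and hence the amount of light emitted.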
[0065] Although it has been described above that the light source
has LEDs of the three primary colors, the present invention is not
limited to such a configuration. The light source may have one or
more white LEDs. Also, the light source does not need to be LEDs.
With regard to the three primary colors, the light source may be
configured to, instead of emitting only the primary colors, extract
a particular color by filtering using, for example, a white light
source and a dichroic filter, a color wheel, or the like.
[0066] Referring again to FIG. 2.
[0067] The modulator driving pattern is transmitted to the
modulator driver 2006. The modulator driver 2006 drives the
modulator 2005 according to the modulator driving pattern.
[0068] Examples of the modulator 2005 include a transmissive liquid
crystal element, an LCOS (Liquid Crystal On Silicon), a DMD
(Digital Mirror Device), and the like.
[0069] The modulator 2005 and the modulator driver 2006 may be
configured as a single element component.
[0070] The following description assumes that the modulator 2005 is
an LCOS type.
[0071] Light emitted from the light source element 2003 is
modulated by the modulator 2005 and projected onto the projection
unit 2007.
[0072] Examples of the projection unit 2007 include a reflective object such as a mirror, a scattering object such as a screen, a prism, a half mirror, a lens, and the like. Any of these may be used in combination.
[0073] Depending on the structure of the projection unit 2007 and on the addition of other components, the video display device 10 of the present invention may take an opaque, goggle-like form in which the view of the device user 20 is covered, or a transparent form in which the device user 20 can see the surroundings and recognize a video in part of their view. The following description assumes the transparent form.
[0074] In the transparent form, the device user 20 sees a video as
illustrated in FIGS. 7A, 7B, and 7C. Specifically, the device user
20 can see a video like the one in FIG. 7C, in which a virtual
video 7001 like the one in FIG. 7B is superimposed over the
surroundings like the one in FIG. 7A.
[0075] The video display device 10 is in either a form in which a
video is projected to both of the eyes of the device user 20 or a
form in which a video is projected to only one of the eyes.
Although FIG. 2 illustrates a case where a video is projected to
only one of the eyes, it is possible to project a video to both of
the eyes by configuring the video display device 10 to include two
projection units 2007 so that light from the modulator 2005 may be
incident on both the left and right eyes. Further, the video display device 10 may be configured with two video display units 1001, one for the right eye and one for the left eye, to display a three-dimensional video by projecting parallax images onto the respective video display units 1001.
[0076] By observing the light from the projection unit 2007, the
device user 20 recognizes light representing the input video
information, as a video.
[0077] The settings control unit 2008 can receive a control signal
and change the settings of the video signal processing unit
2001.
[0078] The video display unit 1001 has two or more display methods for displaying a video. FIGS. 4A, 4B, 4C, and 4D illustrate display methods by way of example. For example, video information is handled as moving image information in which N still images per second on average are arranged in a predetermined order (N is a positive number of 1 or larger). In this regard, N is called a frame rate, and the number of still images per second is expressed in frames per second (fps). At present, typical frame rates are, for example, 30 fps and 60 fps. In the examples in FIG. 4, the number of times of display by the video display unit 1001 is changed with respect to the frame rate of the video information.
[0079] FIG. 4A illustrates a first display method by way of
example. In the first display method, the video display unit 1001
changes a displayed video every 1/N second with respect to the
frame rate N. A displayed video is referred to as a frame.
[0080] The following description assumes as an example that the
present embodiment employs the field sequential scheme (color
time-division scheme). Specifically, for each frame to display, the
video display unit 1001 divides video information into primary
color components of red, green, and blue, and further divides the
1/N second into three time slots, and displays the videos of the
color components in the respective time slots separately.
[0081] For example, in the first time slot of a frame 1, a setting
(1R) is made such that the modulator 2005, which is an LCOS,
displays the red component of the divided video information on the
frame 1, causing the red LED 3001 to emit light for a predetermined
period of time (a period 1R) shorter than 1/(3N) second. Next, a
setting (1G) is made such that the modulator 2005 displays the
green component of the frame 1, causing the green LED 3002 to emit
light for a predetermined period of time (a period 1G) shorter than
1/(3N) second. Further, a setting (1B) is made such that the
modulator 2005 displays the blue component of the frame 1, causing
the blue LED 3003 to emit light for a predetermined period of time
(a period 1B) shorter than 1/(3N) second. Because the color components are displayed sequentially at a high speed, the observer wearing the device sees full-color images in which the three primary color components are mixed together.
[0082] Such display of each color component is performed similarly
for the frames 2 and 3 of the display video. Although FIGS. 4A to
4D depict only up to the frame 3, similar processing is repeatedly
performed for the rest of the frames, as well.
[0083] FIG. 4B illustrates a second display method by way of
example. In the second display method, the period of time the
modulator 2005 displays each color component in one frame is
further halved compared to the first display method of FIG. 4A, and
therefore, one color component is displayed in a time slot which is
one sixth of a 1/N second of the frame rate N for a displayed
video. This second display method is called double-speed driving
because it drives the modulator 2005 at an update speed twice as
high as the first display method.
[0084] It is also possible to achieve a display method with
triple-speed, quadruple-speed, or higher driving by increasing the
driving speed of the modulator 2005 and the light source element
2003. To change the driving speed of the modulator 2005 and the
light source element 2003 is to change the intervals of updating
information on the pixels of the liquid crystal element or the
mirror array element. FIG. 4D shows a timing chart of triple-speed
driving.
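The slot timing of FIGS. 4A and 4B can be sketched numerically. The function below is an illustration, not the device's actual driver logic: it assumes each 1/N-second frame period is divided evenly, that double-speed driving repeats the R, G, B sequence twice per frame, and that each slot bound is an upper limit on the LED emission time described above.

```python
def color_slot_schedule(frame_rate, speed=1):
    """Start/end times (seconds within one frame) of the R, G, B time
    slots of the field-sequential scheme. speed=1 models FIG. 4A (each
    slot is 1/(3N) s); speed=2 models the double-speed driving of
    FIG. 4B (each slot is 1/(6N) s and the R, G, B sequence repeats)."""
    slot = 1.0 / (3 * frame_rate * speed)
    schedule = []
    for repeat in range(speed):
        for i, color in enumerate(("R", "G", "B")):
            start = (repeat * 3 + i) * slot
            schedule.append((color, start, start + slot))
    return schedule

# At 60 fps: normal speed gives three slots of 1/180 s each;
# double speed gives six slots of 1/360 s each.
normal = color_slot_schedule(60)
double = color_slot_schedule(60, speed=2)
```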
[0085] FIG. 4C illustrates a modification of double-speed driving. The intervals of driving the modulator 2005 and the light source element 2003 are the same as those in the display method of FIG. 4B. An intermediate frame 1.5 is generated from the frames 1 and 2 of the original video information (and likewise from other pairs of consecutive frames), and the video is displayed in the order of the frame 1, the frame 1.5, and the frame 2, so that the observer sees the video as smooth images. The intermediate frames are generated by the video processing unit 1008.
[0086] Generation of intermediate frames is possible even when, for
example, the update speed is higher than double speed, such as
triple speed or quadruple speed like the case in FIG. 4D, and is
not limited to double speed.
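Intermediate-frame generation as in FIG. 4C can be illustrated with a simple per-pixel linear blend. A practical implementation in the video processing unit 1008 would likely be motion-compensated; this sketch only shows the idea of synthesizing a frame 1.5 halfway between frames 1 and 2.

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Synthesize an intermediate frame by per-pixel linear blending of
    two consecutive frames; t=0.5 yields the frame 1.5 of FIG. 4C."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

f1 = [[0, 100], [200, 50]]
f2 = [[100, 100], [0, 150]]
f15 = interpolate_frame(f1, f2)  # -> [[50.0, 100.0], [100.0, 100.0]]
```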
[0087] By raising the speed from normal-speed display to double-speed or triple-speed display, or by generating intermediate frames, the motion of the images becomes smoother, and the device user 20 is less likely to sense color breakup and therefore feels less discomfort. However, more power is consumed.
[0088] Via the settings control unit 2008 of the video display unit
1001, the display control unit 1002 sends a control signal
commanding a switch to a different display method.
[0089] The video processing unit 1008 is capable of making a change
to video information inputted from the video information source
1007 and outputting the changed video information to the video
display unit 1001.
[0090] For example, video information is H×V-pixel data containing H pixels horizontally (where H is an integer of 1 or larger) and V pixels vertically (where V is an integer of 1 or larger) per frame.
[0091] Changing the contrast of an image involves processing to
change the differences between color tones, and more specifically,
processing to obtain a pixel value to output to the video display
unit 1001 by multiplying a pixel value inputted from the video
information source 1007 by a proportionality coefficient larger
than 1.
[0092] Changing the brightness of an image involves processing to
obtain an output pixel value by increasing a pixel value by a
designated value, and more specifically, processing to obtain a pixel
value to output to the video display unit 1001 by adding any value
to a pixel value inputted from the video information source
1007.
[0093] According to a signal from the control unit 1003, the video
processing unit 1008 performs processing such as increasing or
decreasing the contrast of an image and/or increasing or decreasing
the brightness of an image.
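As a hedged illustration of the contrast and brightness processing in paragraphs [0091] to [0093] (multiplication by a coefficient, addition of a designated value), the following sketch uses hypothetical names and assumes 8-bit pixel values clamped to the 0-255 range:

```python
# Hypothetical names; assumes 8-bit pixel values (0-255).

def adjust_pixel(value, contrast=1.0, brightness=0):
    """Multiply by the contrast coefficient (paragraph [0091]), then
    add the brightness offset (paragraph [0092]), clamped to 0-255."""
    out = int(value * contrast) + brightness
    return max(0, min(255, out))

def adjust_frame(pixels, contrast=1.0, brightness=0):
    """Apply the same adjustment to every pixel value of a frame."""
    return [adjust_pixel(p, contrast, brightness) for p in pixels]
```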
[0094] The video processing unit 1008 may change other
image-related parameters, such as sharpness, saturation, and hue,
according to a signal from the control unit.
[0095] Also, according to a signal from the control unit 1003, the
video processing unit 1008 may be switched to transmit video
information to the video display unit 1001 without subjecting the
video information to any of the above processing.
[0096] For example, decreasing the contrast of the video
illustrated in FIG. 8A yields the video in FIG. 8B. Meanwhile,
decreasing the sharpness of the video in FIG. 8A yields the video
in FIG. 8C. As will be described later, when the viewpoint of the
device user 20 is not positioned on the video being displayed, its
contrast or sharpness may be decreased to reduce eye strain on the
device user 20.
[0097] 3. Movement of the Device User and Control of Video
[0098] The first sensor 1004 is a sensor for detecting the turning
of the head of the device user 20, and is for example a gyro
sensor. The first sensor 1004 outputs a three-dimensional motion
vector indicating a head movement over a predetermined period of
time. The second sensor 1005 is a line-of-sight sensor for
detecting the movement of the line of sight or the position of the
point of gaze, and outputs a two-dimensional or three-dimensional
vector indicating the movement of the line of sight of the device
user 20 over a predetermined period of time.
[0099] The motion determination processing unit 1006 determines the
motional state of the device user 20 based on the outputs from the
first sensor 1004 and the second sensor 1005. Specifically, the
motional state includes three items: a movement of the head, a
movement of the line of sight, and the directions of the movement
of the head and the movement of the line of sight. A description
will be given later as to the processing to determine these three
items from the sensor outputs. Further, the motion determination
processing unit 1006 determines whether the point of gaze is on the
virtual video 7001 being displayed by the video display device. A
detailed description for this processing will be given later, as
well.
[0100] Based on the determination results obtained by the motion
determination processing unit 1006, the control unit 1003
determines which processing to perform in accordance with FIG. 5
and commands a display speed for the video display unit 1001 to the
display control unit 1002, and commands processing related to video
parameters to the video processing unit 1008.
[0101] Pattern 1 in FIG. 5 is a case where there is no movement of
the head of the device user 20, there is no movement of the line of
sight, and the point of gaze is on the virtual video. In this case,
the device user 20 is still, and it is unlikely that the device is
shaken or that a positional shift occurs between the device and the
device user 20. The eyes are also not moving, so a phenomenon such
as color breakup that may cause the device user 20 discomfort is
unlikely to occur. In such a case, the control unit 1003 commands
processing A, since there is little need to take discomfort into
consideration.
[0102] In processing A, the display method of the video display
unit 1001 is normal-speed driving, and the video processing unit
1008 passes a video from the video information source to the video
display unit 1001 without making any change to the video by
performing video processing thereon. This processing consumes the
least power.
[0103] Pattern 2 in FIG. 5 differs from pattern 1 in that the point
of gaze detected by the second sensor 1005 is not on the virtual
video. In this case, the device user 20 is not in motion and not
viewing the virtual video 7001, and therefore the importance of
visibility is low. Thus, the control unit 1003 commands processing
B.
[0104] In processing B, the display method of the video display
unit 1001 is normal-speed driving, and the video processing unit
1008 lowers the contrast and brightness of the video by a
prescribed amount.
[0105] Patterns 3, 5, and 8 in FIG. 5 are cases where the movement
of either the head or the line of sight of the device user 20 is
detected, and the point of gaze is on the virtual video.
[0106] In these patterns, the device user 20 is not still, and
there is an increased possibility that the device user 20 of the
video display device 10 feels uncomfortable, so greater
consideration must be given to discomfort. Thus, the control unit
1003 commands processing C to reduce discomfort.
[0107] In processing C, the display method of the video display
unit 1001 is set to a higher update speed, triple speed, and the
video processing performed by the video processing unit 1008 is
initialized.
[0108] Patterns 4, 6, and 9 are cases where the movement of either
the head or the line of sight of the device user 20 is detected,
and the point of gaze is not on the virtual video.
[0109] In these patterns, the device user 20 is not still, and
there is an increased possibility that the device user 20 of the
video display device 10 feels uncomfortable. However, since the
device user 20 is not viewing the virtual video 7001, the
importance of visibility is low. Thus, the control unit 1003
commands processing D.
[0110] In processing D, the display method of the video display
unit 1001 is set to a higher update speed, triple speed, and the
video processing unit 1008 lowers the contrast and brightness of
the video by a prescribed amount.
[0111] In pattern 7 in FIG. 5, both the movement of the head and
the movement of the line of sight of the device user 20 are
detected. If the direction of the movement of the head
substantially matches the direction of the movement of the line of
sight, the device user 20 is likely making an eye movement called
saccades, and viewing neither the surroundings nor the virtual
video 7001. While consideration still needs to be paid to the
device user 20 for the discomfort that may be felt from the video,
the importance of the visibility of the virtual video is low. Thus, the
control unit 1003 commands processing D described above.
[0112] FIG. 6 is a flowchart for implementing the pattern-based
control illustrated in FIG. 5, which is performed by the control
unit 1003 and the motion determination processing unit 1006.
[0113] Processing starts in Step S010 when the power switch on the
device main body is turned on, or when the device user 20 issues an
instruction.
[0114] In Step S020, the control unit 1003 performs processing for
initialization and processing for energization of the first sensor
1004 and the second sensor 1005. In the initialization processing,
the control unit 1003 sets the display method of the video display
unit 1001 to double speed, and initializes various parameters of
the video processing unit 1008 to prescribed default values.
[0115] In Step S030, the motion determination processing unit 1006
acquires a sensor output from the first sensor 1004.
[0116] In Step S040, the motion determination processing unit 1006
acquires a sensor output from the second sensor 1005.
[0117] In Step S050, the motion determination processing unit 1006
determines, based on the output from the first sensor, whether or
not the magnitude of the turning speed of the head is higher than
or equal to a prescribed value A1 (A1 is a positive value), and
handles the determination result as X. X is true (`1`) when the
detection result is higher than or equal to the prescribed value
A1, and false (`0`) when the detection result is lower than the
prescribed value A1.
[0118] In Step S060, the motion determination processing unit 1006
determines, based on the output from the second sensor, whether or
not the magnitude of the motional speed of the line of sight is
higher than or equal to a prescribed value S1 (S1 is a positive
value), and handles the determination result as Y. Y is true (`1`)
when the detection result is higher than or equal to the prescribed
value S1, and false (`0`) when the detection result is lower than
the prescribed value S1.
[0119] In Step S070, the motion determination processing unit 1006
determines, based on the output from the second sensor, whether the
two-dimensional coordinates of the position of the line of sight
(the point of gaze) are on the virtual video (inside a predetermined
range), and handles the determination result as Z. Z is true (`1`)
when the position of the point of gaze is inside the predetermined
range, and false (`0`) when the position of the point of gaze is
outside the predetermined range.
[0120] Refer to FIG. 5 for these X, Y, and Z and their values.
[0121] In Step S080, the motion determination processing unit 1006
performs conditional branching based on the logical OR of the
determination result X and the determination result Y. Processing
proceeds to Step S110 if X OR Y=0 (neither movement is detected),
and proceeds to Step S090 if X OR Y=1.
[0122] In Step S090, the motion determination processing unit 1006
performs conditional branching based on the exclusive OR (XOR) of
the determination result X and the determination result Y.
Processing proceeds to Step S111 if X XOR Y=1, and proceeds to Step
S100 if X XOR Y=0.
[0123] In Step S110, if Z is true, processing proceeds to Step S120
in which the control unit 1003 commands processing A, and if Z is
false, processing proceeds to Step S130 in which the control unit
1003 commands processing B.
[0124] In Step S100, the motion determination processing unit 1006
compares the direction of the motional speed of the head, outputted
from the first sensor 1004, and the direction of the movement of
the line of sight, outputted from the second sensor 1005, with each
other, and determines whether the directions of motion vectors
substantially match each other. A method for this determination
will be described later.
[0125] In Step S111, the motion determination processing unit 1006
performs conditional branching based on the determination result Z.
If Z is true, processing proceeds to Step S140 in which the control
unit 1003 commands processing C, and if Z is false, processing
proceeds to Step S150 in which the control unit 1003 commands
processing D.
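The branching of FIG. 6 reduces to a small decision function. The sketch below follows the pattern table of FIG. 5; the Boolean encoding and all names are assumptions for illustration only, not the claimed implementation:

```python
# Sketch of the processing selection implied by FIG. 5. Inputs: x (head
# movement detected), y (line-of-sight movement detected), z (point of
# gaze on the virtual video), and, when both x and y are true, whether
# the two movement directions substantially match. Hypothetical names.

def select_processing(x, y, z, directions_match=False):
    """Return 'A', 'B', 'C', or 'D' per the pattern table of FIG. 5."""
    if not x and not y:
        return 'A' if z else 'B'       # patterns 1 and 2: user is still
    if x and y and directions_match:
        return 'D'                     # pattern 7: likely a saccade
    return 'C' if z else 'D'           # patterns 3-6, 8, and 9
```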
[0126] In Step S160, it is determined whether a setting has been
made to repeat the processing in this flowchart continuously. This
setting may be made or changed by the device user 20 or may be set
by default. If such a setting is enabled, the processing proceeds
to a standby step S170, and if such a setting is disabled, the
processing proceeds to a termination step S180.
[0127] In Step S170, the processing stands by for a predetermined
period of time (approximately 300 milliseconds to 10 seconds), and
then proceeds back to Step S030.
[0128] In Step S180, the processing performs, for the first sensor
1004 and the second sensor 1005, power-off processing or idle
setting.
[0129] The processing ends at an end step S190.
[0130] The video display device 10 has the storage unit 1010. After
changing the display method of the video display unit 1001 or the
processing method of the video processing unit 1008, the control
unit 1003 records the history, the time, and the like of the change
in the storage unit 1010.
[0131] When the number of changes to certain processing recorded in
the storage unit 1010 exceeds a predetermined number within a
predetermined period of time, e.g., when five changes are made in
three days, the frequency determination processing unit 1011
requests the video information source 1007 to change the video
information settings according to the change history. The request
to change video information is issued, for example, in the
termination processing in Step S180 of FIG. 6, and changing video
information means changing parameters such as image contrast,
sharpness, saturation, hue, image brightness, and the like.
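A minimal sketch of this frequency determination, assuming the storage unit's change history is modeled as a list of timestamps in seconds and that the example threshold (five changes within three days) applies; all names are hypothetical:

```python
# Hypothetical sketch: the storage unit 1010 is modeled as a list of
# change timestamps (seconds); the window and count are the example
# values from the text (five changes within three days).

def should_request_settings_change(change_times, now,
                                   window_days=3, min_changes=5):
    """True when the changes recorded inside the window reach the
    threshold, i.e. a settings-change request should be issued."""
    window_seconds = window_days * 24 * 3600
    recent = [t for t in change_times if now - t <= window_seconds]
    return len(recent) >= min_changes
```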
[0132] 4. Line-of-Sight Sensor
[0133] An example of the second sensor 1005 as a line-of-sight
sensor is now described.
[0134] As illustrated in FIG. 11A, the second sensor 1005 includes
a sensor A element 1101, a sensor B element 1102, and a detection
controller 1103. When the sensor A element 1101 detects a movement,
the detection controller 1103 activates the sensor B element 1102
and instructs the sensor B element to output more precise data.
[0135] FIG. 11B illustrates the configuration more specifically.
This configuration is suitable to be mounted on a frame part of a
glasses-like device, like the ones in FIGS. 1B and 1D. The
line-of-sight sensor is capable of detecting the movement of an eye
1111 of the device user 20, and includes a first light emitter 1112
and a second light emitter 1113 that emit infrared light, a first
light receiver 1114 and a second light receiver 1115, a comparator
1116, a first camera 1117 and a second camera 1118, a current
control unit 1119, a movement detection processing unit 1120, and
an idleness control unit 1121.
[0136] Infrared light emitted by the first light emitter 1112 and
the second light emitter 1113 is projected onto and reflected by
the eye 1111 of the device user 20. The reflected infrared light is
incident on the first light receiver 1114 and the second light
receiver 1115. The first light receiver 1114 and the second light
receiver 1115 are installed in different directions, the left side
and the right side, of the eye 1111, and receive varying amounts of
light depending on the position of the iris and the position of the
white part of the eye. Since the first light receiver 1114 and the
second light receiver 1115 are placed on the left side and the
right side of the eye 1111, a change in the amount of light
received due to a displacement of the eye 1111 is different for
each light receiver. A movement of the eye 1111 can be detected
when the comparator 1116 obtains the difference between the amount
of light received by the first light receiver 1114 and the amount
of light received by the second light receiver 1115. This detection
method is called a scleral reflection method.
[0137] When an output from the comparator 1116 is larger than or
equal to a predetermined value, the movement detector 1120
determines that a movement of the eye 1111 is detected, and outputs
a movement detected signal. When an output from the comparator 1116
is smaller than the predetermined value, the movement detector 1120
determines that the eye 1111 has not moved, and outputs a movement
undetected signal.
[0138] Upon receipt of a movement detected signal from the movement
detector 1120, the idleness control unit 1121 brings the first
camera 1117 and the second camera 1118 to an imaging state capable
of imaging videos, and these cameras image the eye 1111 using the
infrared light reflected by the eye 1111. An image processing unit
(not shown) performs image processing on the videos imaged by the
first camera 1117 and the second camera 1118, to estimate a
detailed movement of the line of sight and the position of the
viewpoint. A dark pupil method, a corneal reflection method, or the
like is used for the image processing.
[0139] Upon receipt of a movement undetected signal from the
movement detector 1120, the idleness control unit 1121 brings the
first camera and the second camera to an idle state in which part
of the functions of the first and second cameras are stopped to
reduce power consumption.
[0140] The movement detection processing unit 1120 changes power to
be supplied to the first light emitter 1112 and the second light
emitter 1113 by sending a movement detected signal or a movement
undetected signal to the current control unit 1119. The current
control unit 1119 performs control such that the amount of current
in the imaging state is larger than the amount of current in the
idle state.
[0141] Typically, camera elements consume more power than light
receivers and need more light for detection. Thus, this control
method reduces power consumption by the device by bringing the
camera elements to the imaging state only when the light receivers
have detected a rough movement, instead of keeping the camera
elements in the imaging state all the time.
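The two-stage control described for FIG. 11B can be sketched as follows; the threshold, state names, and class layout are assumptions, and a real implementation would drive the actual camera and current-control hardware rather than set flags:

```python
# Hypothetical model of the two-stage control in FIG. 11B: the light
# receivers and comparator detect a rough movement at low power, and
# only then are the cameras brought to the imaging state and the
# emitter current raised.

class LineOfSightSensor:
    def __init__(self, movement_threshold=10):
        self.threshold = movement_threshold
        self.camera_state = "idle"      # first/second camera 1117/1118
        self.emitter_current = "low"    # current control unit 1119

    def update(self, left_received, right_received):
        """Compare the amounts of light at the two receivers (scleral
        reflection method) and switch the camera and current states."""
        moved = abs(left_received - right_received) >= self.threshold
        self.camera_state = "imaging" if moved else "idle"
        self.emitter_current = "high" if moved else "low"
        return moved
```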
[0142] Although two light emitters are used in the present
embodiment by way of example, the number of light emitters is not
limited to two. The line-of-sight sensor may have more light
emitters. Further, the line-of-sight sensor may be so configured
that the light emitters are controlled to emit light at different
timings, and that the light receivers or camera elements acquire
data to coincide with the light emission by the respective light
emitters.
[0143] Also, although two camera elements and two light receivers
are used in the above example, their numbers are not limited to
such numbers. Further, a light receiver and a camera element may be
configured as a single element, and for example, part of the pixels
of a camera element may be configured as a light receiver.
[0144] 5. Viewpoint Deviation Correction and Detection of the
Position of a Point of Gaze
[0145] In the initialization step S020 illustrated in FIG. 6,
initialization processing for the second sensor 1005 may be
performed. When the second sensor 1005 is a sensor that detects the
line of sight of the device user 20, deviation occurs between the
point of gaze of the device user 20 and the position of the second
sensor each time the device is used.
[0146] To correct this deviation, during the initialization
processing the motion determination processing unit 1006 displays a
virtual video 1201 of diagonal lines in a display region as
illustrated in FIG. 12A, and displays text prompting the device
user 20 to gaze at the intersection of the diagonal lines. When the
second sensor 1005 detects that the line of sight is steadily
located at the position of the point of gaze for predetermined
seconds, the motion determination processing unit 1006 determines
that the device user 20 is gazing at the intersection in the
virtual video 1201, and sets the position of the point of gaze as a
reference position p0 (h0, v0) for the second sensor 1005.
[0147] If the full video display range of the video display unit
1001 is, like the virtual video 1201, a square surrounded by P1
(Hmin, Vmin), P2 (Hmax, Vmin), P3 (Hmin, Vmax), and P4 (Hmax,
Vmax), then h0=(Hmin+Hmax)/2 and v0=(Vmin+Vmax)/2.
[0148] In the point-of-gaze determination step S070, it is
determined, using the reference position p0 (h0, v0) as the
reference, whether the detection result of the second sensor 1005
is on the video displayed on the video display unit 1001 (i.e.,
whether Z is `1`). Assume a case where the position of the point of
gaze obtained by the second sensor 1005 is p (h, v) when, as
illustrated in FIG. 12B, any virtual display video 1202 is
displayed in a region surrounded by Q1 (Hl, Vd), Q2 (Hl, Vu), Q3
(Hr, Vd), and Q4 (Hr, Vu) (Hmin<=Hl<=Hmax,
Hmin<=Hr<=Hmax, Vmin<=Vd<=Vmax, and
Vmin<=Vu<=Vmax). In this case, when p (h, v) is inside the
square Q1Q2Q3Q4, the motion determination processing unit 1006
determines that the position of the point of gaze is on the virtual
video and that Z in Step S070 is true.
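A sketch of this point-of-gaze test (the Z determination of Step S070), assuming gaze coordinates already expressed relative to the calibrated reference position p0; names are hypothetical:

```python
# Hypothetical sketch of Step S070: the point of gaze p(h, v) is
# tested against the rectangle Q1-Q4 surrounding the virtual display
# video 1202.

def gaze_on_video(h, v, rect):
    """rect = (Hl, Hr, Vd, Vu); True (Z = `1`) when p(h, v) lies
    inside the region of the virtual display video."""
    hl, hr, vd, vu = rect
    return hl <= h <= hr and vd <= v <= vu
```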
[0149] The reference position may be detected not in the
initialization processing, but at a time designated by the device
user 20 through the controller 1020. The device user 20 may command
detection timing to the second sensor 1005 by purposely blinking
for a particular length or in a particular order before or after
gazing at a designated point.
[0150] The virtual display video 1202 does not have to be square,
but may be in other shapes such as a triangle or a circle.
[0151] 6. Directions of the Movement of the Head and the Movement
of the Line of Sight
[0152] The motion determination processing unit 1006 determines,
based on a motion vector output from the first sensor 1004 and a
motion vector output from the second sensor 1005, whether the
movement of the head and the movement of the line of sight match in
direction.
[0153] Assume that an output from the first sensor 1004 can be
expressed by a three-dimensional vector A.sub.0. With the device
user 20 being within the range to recognize the virtual video 7001
and S denoting a virtual plane containing the four corners of the
virtual video 7001, an orthographic projection vector A of the
three-dimensional vector A.sub.0 with respect to the plane S is
obtained. When an output from the second sensor 1005 is a
three-dimensional vector, similar vector transformation processing
is performed.
[0154] The range in which the device user 20 recognizes the virtual
video 7001 is determined by the optical configuration of the
projection unit 2007 in the video display unit 1001 and the like,
and the device user 20 uses the focusing function of the eyeball to
recognize the virtual video 7001 at a location at a predetermined
distance.
[0155] If the second sensor 1005 outputs a two-dimensional vector
B.sub.0, the two-dimensional vector B.sub.0 is transformed into a
three-dimensional vector B.sub.1 on a plane T which is in
three-dimensional space and contains a detection axis of the second
sensor 1005, and an orthographic projection vector B of the
three-dimensional vector B.sub.1 with respect to the plane S is
obtained. When an output from the first sensor 1004 is a
two-dimensional vector, similar vector transformation processing
may be performed.
[0156] When the directions of the movements from the detection
sensors are both expressed as vectors on a single plane, with A
being the motion vector outputted from the first sensor 1004 and B
being the motion vector outputted from the second sensor 1005, it
is determined whether the directions of the movements substantially
match, based on a comparison between the absolute value of an angle
.theta. formed by these two vectors and any value .alpha. (.alpha.
is a positive value).
[0157] Specifically, it is determined that the directions of the
movements substantially match if .theta.<=.alpha., and do not
match if .theta.>.alpha..
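This direction comparison can be sketched as follows for vectors already projected onto the plane S; the atan2 of the cross and dot products yields the unsigned angle theta, and the function name and 2D representation are assumptions:

```python
import math

# Sketch of the substantial-match test: a and b are the motion vectors
# on the plane S; theta is the unsigned angle between them, and the
# movements "substantially match" when theta <= alpha.

def directions_match(a, b, alpha):
    """a, b: 2D vectors (x, y); alpha: tolerance angle in radians."""
    cross = a[0] * b[1] - a[1] * b[0]
    dot = a[0] * b[0] + a[1] * b[1]
    theta = abs(math.atan2(cross, dot))  # lies in [0, pi]
    return theta <= alpha
```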
[0158] 7. Modifications
[0159] (1) The first sensor 1004 and the second sensor 1005 may be
an acceleration sensor, a geomagnetic sensor, a GPS, a camera that
captures the user, a camera that captures a video of the
surroundings seen from the user, a sensor that measures user's
pulse, a sensor that measures user's blood flow, a watch, or the
like. Further, each of the first sensor 1004 and the second sensor
1005 may include a filter, an amplifier, a level shifter, and/or
the like. Also, each of the first sensor 1004 and the second sensor
1005 may include a comparator and be configured to transmit, along
with the vector value, a binary result indicating whether a
detection result is higher or lower than a threshold. Also, the
first sensor 1004 and the second sensor 1005 may be configured to
output a signal indicating that a movement is detected when any one
of the following conditions is met: the duration of a detected
movement exceeds a predetermined period of time; the speed of a
movement exceeds a predetermined speed; or the displacement of a
movement exceeds a predetermined displacement.
[0160] (2) Instead of the determination processing that the motion
determination processing unit 1006 performs using an output from
the first sensor 1004 or the second sensor 1005, the video
determination processing unit 1009 may determine image
features.
[0161] The video determination processing unit 1009 determines
whether video information can cause a display discomfort to the
device user 20. For example, in a case of a video that moves
continuously on the screen like the one illustrated in FIG. 9A, the
device user 20 is expected to follow the moving video by moving
their eyeballs. Also, for example, in a case of a video with
on-screen content, such as text, whose meaning the device user 20
grasps by reading it vertically, horizontally, or diagonally, like
the one illustrated in FIG. 9B, the device user 20 is expected to
move their line of sight along the video by moving their eyeballs.
[0162] For such an image, the video determination processing unit
1009 can detect a movement in advance by performing video analysis
on digital images and referring to the amount of difference data
between image frames.
[0163] For videos, like the ones in FIGS. 9A and 9B, for which the
device user 20 is expected in advance to move their line of sight,
the video determination processing unit 1009 outputs a control
signal to the control unit 1003 so that the processing C may be
employed.
[0164] When the projection unit 2007 of the video display device 10
is a transparent type, the video determination processing unit 1009
may determine whether a displayed video is a video related to the
surroundings, e.g., an augmented reality (AR) video. The video
determination processing unit 1009 can determine the type of a
video based on metadata on the video or additional information to
the video. For example, the virtual video 7001 in FIG. 7B displays
information related to the surroundings in FIG. 7A. When it is
determined that a video is related to the surroundings, the
importance of the visibility of a displayed image is high even if
the point of gaze is detected at a position outside the virtual
video 7001 (Z is false). Thus, the video determination processing
unit 1009 commands the control unit 1003 to employ the processing
C.
[0165] (3) The display method illustrated in FIG. 10A may be
employed as the display method of the video display unit 1001. In
the display method illustrated in FIG. 10A, the driving speed is
double-speed as in FIG. 4B, and light-source control is stopped so
that the light emission periods of the red LED 3001, the green LED
3002, and the blue LED 3003 of the light source element 2003 are
substantially synchronized. Thereby, light from the three LEDs is
mixed in color, and the device user 20 recognizes the video as a
black-and-white image, so that color breakup does not occur in
principle. This may be applied to the processing C or the
processing D of FIG. 5, since the processing C and the processing D
are employed when there is a high need to pay consideration to the
discomfort that may be caused by the video.
[0166] Further, when images in video information in a plurality of
successive frames are substantially the same, frame update by the
modulator 2005 may be stopped. For example, as in FIG. 10B, when
frames 1, 2, and 3 are substantially the same images, the modulator
2005 keeps displaying the same frame 1 for the period corresponding
to these frames.
[0167] Cases where images are substantially the same include: a
case where, when video information can be represented as, for
example, H.times.V pieces of pixel information (both H and V are
positive integers), the number of pixels that are changed in
information between successive frames is sufficiently smaller than
the value H.times.V; and a case where, when color information on
each pixel can be represented by R, G, and B primary color
information (e.g., R, G, and B are all integers from 0 to 255),
changes in R, G, and B values between successive frames
are sufficiently small.
[0168] Thereby, image flickers can be reduced. Further, when the
modulator 2005 uses ferroelectric liquid crystals, displaying the
same frame a plurality of times consumes power for erasing and
re-displaying information. Stopping frame update by the modulator
2005 thus leads to a further reduction in power consumption. This may be
applied to the processing C and the processing D in FIG. 5.
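A sketch of the "substantially the same" test of paragraph [0167]; both thresholds are assumptions, since the text only requires the changes to be "sufficiently small", and the names are hypothetical:

```python
# Sketch only: a frame is a flat list of H*V pixels, each an (R, G, B)
# tuple of 0-255 integers. Both thresholds are assumed values.

def frames_substantially_same(frame_a, frame_b,
                              max_changed_ratio=0.01, max_delta=2):
    """True when the number of pixels whose R, G, or B value changed
    by more than max_delta is sufficiently small compared with H*V,
    so that frame update by the modulator may be skipped."""
    changed = sum(
        1 for pa, pb in zip(frame_a, frame_b)
        if any(abs(ca - cb) > max_delta for ca, cb in zip(pa, pb))
    )
    return changed <= max_changed_ratio * len(frame_a)
```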
[0169] (4) The video information source 1007 may be configured to
externally acquire video information. For example, the video
information source 1007 may be a receiver conforming to video
transmission standards such as DVI, HDMI (registered trademark), or
Display Port, a receiver employing a general method for electric
signal transmission, such as SPI, I2C, RS232, or USB, a receiver of
a wired network such as Ethernet (registered trademark), or a
receiver of a wireless network such as a wireless LAN or Bluetooth
(registered trademark).
[0170] The video information source 1007 may include a decoder that
receives and expands compressed information to obtain video
information, or may include a function to receive and decrypt
encrypted video information.
[0171] (5) The power supply unit 1012 supplies power to the video
display device 10. As a power source, the power supply unit 1012
includes at least one of a rechargeable battery that can be charged
by an external power source, a power source circuit that takes a
desired amount of power out from a replaceable primary battery, a
converter that connects to an external power source such as an
electrical outlet to take a predetermined amount of power
therefrom, and a power stabilization circuit. Further, the power supply
unit 1012 may include, in addition to the power source, an
integrated circuit for power control to control charging and
supplying power and to monitor the power source.
[0172] The control unit 1003 acquires information on the level of
power remaining in the power source from the power supply unit
1012, and performs control such that the video processing unit 1008
performs video processing only when the remaining power level
exceeds a predetermined value.
[0173] The control unit 1003 may also be configured to be able to
change the display method of the video display unit 1001 to shorten
the display intervals only when the level of power remaining in the
power supply unit 1012 exceeds a predetermined value.
[0174] The control unit 1003 may also be configured to change the
display method of the video display unit 1001 to extend the display
intervals when the level of power remaining in the power supply
unit 1012 falls below a predetermined value.
Embodiment 2
[0175] FIG. 13 is a diagram illustrating Embodiment 2. Only points
different from those in FIG. 1 are described.
[0176] A first sensor 1304 and a second sensor 1305 are provided
separately from the casing of a video display device 130. Each of
the sensors detects an action of the device user 20, as the sensors
in Embodiment 1 do.
[0177] In the present embodiment, the first sensor 1304 and the
video display device 130 exchange information via a communication
unit 1013. The communication unit 1013 and the first sensor 1304
may communicate with each other using electrical signals on a
conductor physically connecting them to each other, or may
communicate via wireless communication such as a wireless LAN,
Bluetooth (registered trademark), or Zigbee (registered trademark).
The first sensor 1304 may include a communication unit (not shown).
When wireless communication is used, the first sensor 1304 may be
supplied with power from a power source different from the one for
the video display device 130.
[0178] Like the first sensor 1304, the second sensor 1305 may
exchange information with the communication unit 1013 using the
wired or wireless communication described above. The second sensor
1305 may include a communication unit (not shown). Further, when
wireless communication is used, the second sensor 1305 may be supplied
with power from a power source different from the one for the video
display device 130.
[0179] The motion determination processing unit 1006 receives an
output from the first sensor 1304 and an output from the second
sensor 1305 via the communication unit 1013.
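The determination in paragraph [0179] can be sketched as follows. This is a minimal illustration only; the function name, the thresholds, the state labels, and the scalar form of the sensor outputs are assumptions not stated in the application:

```python
def determine_motion(head_motion, gaze_motion,
                     head_threshold=0.5, gaze_threshold=0.5):
    """Combine a head-motion magnitude (first sensor) and a gaze-motion
    magnitude (second sensor), received via the communication unit,
    into a coarse motion state of the device user.
    """
    if head_motion > head_threshold or gaze_motion > gaze_threshold:
        return "moving"
    return "still"
```

The control unit could then select a display method according to the returned state, as in Embodiment 1.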
[0180] FIGS. 14 and 15 are modifications of the present embodiment.
In the modification illustrated in FIG. 14, a first sensor 1404, a
second sensor 1405, and a motion determination processing unit 1406
are provided separately from the casing of a video display device
140. The motion determination processing unit 1406 and the control
unit 1003 exchange information via the communication unit 1013
using the wired or wireless communication described above. The
first sensor 1404, the second sensor 1405, and the motion
determination processing unit 1406 may be contained in the same
casing. Further, the motion determination processing unit 1406 may
include a communication unit (not shown).
[0181] In the modification illustrated in FIG. 15, the second
sensor 1305 is provided separately from a video display device 150.
The second sensor 1305 and the motion determination processing unit
1006 may exchange information via the communication unit 1013 using
a wired or wireless communication as described above.
[0182] The separately-provided first sensor 1304 and second sensor
1305 do not need to be worn by the device user 20. The first sensor
1304 and the second sensor 1305 only have to detect a movement of
the head of the device user 20, a movement of an eye, and the like,
and may be, for example, sensors using a camera and image
processing. One such case is a situation where the device user 20
stays at a fixed location and performs certain work while viewing a
video, with the first sensor 1304 and the second sensor 1305,
implemented as cameras, placed on the working table.
[0183] In a case where the device user 20 uses the video display
device 130, the video display device 140, or the video display
device 150 while standing or sitting in a fixed position, a
detection sensor such as a pressure distribution measurement device
that measures the displacement of the user's center of gravity may
be placed under the device user 20 and used as the first sensor
1304 or the second sensor 1305.
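The center-of-gravity measurement mentioned in paragraph [0183] can be illustrated by computing a pressure-weighted centroid over a two-dimensional pressure grid. This is a sketch under assumed inputs (a rectangular grid of non-negative pressure readings); the function name and grid representation are hypothetical:

```python
def center_of_gravity(pressure_grid):
    """Estimate the center of pressure (x, y) from a 2-D pressure grid.

    pressure_grid: list of rows of non-negative pressure readings.
    Returns the pressure-weighted centroid; a shift of this point over
    time indicates a displacement of the user's center of gravity.
    """
    total = 0.0
    weighted_x = 0.0
    weighted_y = 0.0
    for y, row in enumerate(pressure_grid):
        for x, pressure in enumerate(row):
            total += pressure
            weighted_x += x * pressure
            weighted_y += y * pressure
    if total == 0:
        raise ValueError("no pressure detected")
    return weighted_x / total, weighted_y / total
```

Tracking this centroid over successive readings would give the displacement signal that the sensor supplies to the motion determination processing unit.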
Embodiment 3
[0184] FIG. 16 is a diagram illustrating Embodiment 3. Only points
different from those in FIG. 1 are described.
[0185] In the present embodiment, a storage unit 1610 and a
frequency determination processing unit 1611 are provided in a
server 1601 separately from a video display device 160. The storage
unit 1610 and the frequency determination processing unit 1611
operate in the same manners as the storage unit 1010 and the
frequency determination processing unit 1011 in Embodiment 1
do.
[0186] The storage unit 1610 and the control unit 1003 exchange
information via the communication unit 1013 of the video display
device 160 and a communication unit 1612 of the server 1601. The
communication unit 1013 and the communication unit 1612 may
communicate using electrical signals on a conductor physically
connecting them to each other, or may communicate via a wireless
communication such as a wireless LAN, Bluetooth (registered
trademark), or Zigbee (registered trademark).
[0187] Similarly, the frequency determination processing unit 1611
and the video information source 1007 may exchange information via
the communication unit 1013 and the communication unit 1612 using a
wired or wireless communication as described above.
[0188] FIG. 17 illustrates a control method for a system including
a plurality of video display devices 160. Each of a first video
display device 1711, a second video display device 1712, a third
video display device 1713, and a fourth video display device 1714
can communicate with the server 1601 via a network 1730.
[0189] The first video display device 1711 is used by a first user
1721, the second video display device 1712 is used by a second user
1722, the third video display device 1713 is used by a third user
1723, and the fourth video display device 1714 is used by a fourth
user 1724.
[0190] The first video display device 1711, the second video
display device 1712, the third video display device 1713, and the
fourth video display device 1714 have the same capabilities as the
video display device 160.
[0191] The server 1601 has the storage unit 1610 and the frequency
determination processing unit 1611, and when the display method of
the video display unit 1001 is changed or when the processing
method of the video processing unit 1008 is changed, receives a
history and a time of the change from the control unit 1003 of a
corresponding one of the first video display device 1711, the
second video display device 1712, the third video display device
1713, and the fourth video display device 1714 via the network
1730.
[0192] The server 1601 extracts information common to the pieces of
change information transmitted from the respective video display
devices. When processing recorded in the storage unit 1610 and
common to the video display devices exceeds a predetermined number
of times within a predetermined period of time, the frequency
determination processing unit 1611 requests the video information
sources 1007 to change the video information. Changing the video
information means changing parameters such as image contrast,
sharpness, saturation, hue, or image brightness.
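The frequency determination in paragraphs [0191] and [0192] can be sketched as follows. This is an assumed realization only; the data layout (per-device lists of timestamped change events), the sliding-window criterion, and all names are hypothetical, not taken from the application:

```python
from collections import defaultdict


def find_common_frequent_changes(histories, window_s, min_count):
    """Return change kinds that occurred on every device at least
    min_count times within a window_s-second span on each device.

    histories: {device_id: [(timestamp_s, change_kind), ...]}
    Such kinds would trigger a request to change the video information.
    """
    qualifying = None
    for events in histories.values():
        # Group this device's change timestamps by change kind.
        times_by_kind = defaultdict(list)
        for timestamp, kind in events:
            times_by_kind[kind].append(timestamp)
        frequent = set()
        for kind, times in times_by_kind.items():
            times.sort()
            # Sliding window: any min_count events within window_s seconds.
            for i in range(len(times) - min_count + 1):
                if times[i + min_count - 1] - times[i] <= window_s:
                    frequent.add(kind)
                    break
        # Keep only kinds frequent on every device seen so far.
        qualifying = frequent if qualifying is None else qualifying & frequent
    return qualifying or set()
```

For example, if every connected video display device reports three "brighten" changes within ten seconds, the server would request the video information sources 1007 to raise the image brightness of the source video.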
[0193] The number of video display devices 160 connected to the
network 1730 is not limited to the number shown in the present
embodiment. It suffices that at least one video display device 160
is connected.
[0194] Although the present invention has been described using the
embodiments, the present invention is not limited to those
embodiments and, as will be apparent to those skilled in the art,
may be variously changed and modified without departing from the
spirit of the present invention and the scope of the appended
claims.
REFERENCE SIGNS LIST
[0195] 10 video display device [0196] 1001 video display unit
[0197] 1002 display control unit [0198] 1003 control unit [0199]
1004 first sensor [0200] 1005 second sensor [0201] 1006 motion
determination processing unit [0202] 1007 video information source
[0203] 1008 video processing unit [0204] 1009 video determination
processing unit [0205] 1010 storage unit [0206] 1011 frequency
determination processing unit [0207] 1012 power supply unit [0208]
1013 communication unit [0209] 1020 controller [0210] 20 device
user [0211] 2001 video signal processing unit [0212] 2002 light
source element power supply control unit [0213] 2003 light source
element [0214] 2004 light source driver [0215] 2005 modulator
[0216] 2006 modulator driver [0217] 2007 projection unit [0218]
2008 settings control unit [0219] 3001 red LED [0220] 3002 green
LED [0221] 3003 blue LED [0222] 3004,3005,3006 current limiting
resistance [0223] 7001 virtual video [0224] 1101 sensor A [0225]
1102 sensor B [0226] 1103 detection control unit [0227] 1111 eye
[0228] 1112, 1113 light emitter [0229] 1114, 1115 light receiver
[0230] 1116 comparator [0231] 1117, 1118 camera [0232] 1119 current
control unit [0233] 1120 movement detection processing unit [0234]
1121 idleness control unit [0235] 1201 virtual video [0236] 1202
virtual display video [0237] 130 video display device [0238] 1304
first sensor [0239] 1305 second sensor [0240] 140 video display
device [0241] 1404 first sensor [0242] 1405 second sensor [0243]
1406 motion determination processing unit [0244] 150 video display
device [0245] 160 video display device [0246] 1601 server [0247]
1610 storage unit [0248] 1611 frequency determination processing
unit [0249] 1612 communication unit [0250] 1711 first video display
device [0251] 1712 second video display device [0252] 1713 third
video display device [0253] 1714 fourth video display device [0254]
1721 first user [0255] 1722 second user [0256] 1723 third user
[0257] 1724 fourth user [0258] 1730 network
* * * * *