U.S. patent application number 13/890254 was published by the patent office on 2013-11-28 for an image capture device controlled according to image capture quality and a related image capture method thereof.
This patent application is currently assigned to MEDIATEK INC.. The applicant listed for this patent is MEDIATEK INC.. Invention is credited to Ding-Yun Chen, Cheng-Tsai Ho, Chi-Cheng Ju.
Application Number | 13/890254 |
Publication Number | 20130314511 |
Document ID | / |
Family ID | 49621289 |
Publication Date | 2013-11-28 |
United States Patent Application | 20130314511 |
Kind Code | A1 |
Chen; Ding-Yun; et al. | November 28, 2013 |
IMAGE CAPTURE DEVICE CONTROLLED ACCORDING TO IMAGE CAPTURE QUALITY
AND RELATED IMAGE CAPTURE METHOD THEREOF
Abstract
An image capture device has an image capture module and a
controller. The image capture module is used for capturing a
plurality of consecutive preview images under an automatic shot
mode. In addition, the image capture module can be a multi-view
image capture module, which is used to capture a plurality of
multiple-angle preview images. The controller is used for analyzing
the preview images to identify an image capture quality metric
index, and determining if a target image capture condition is met
by referring to at least the image capture quality metric index. A
captured image for the automatic shot mode is stored when the
controller determines that the target image capture condition is
met.
Inventors: | Chen; Ding-Yun; (Taipei City, TW); Ju; Chi-Cheng; (Hsinchu City, TW); Ho; Cheng-Tsai; (Taichung City, TW) |
Applicant: | Name: MEDIATEK INC. | City: Hsin-Chu | Country: TW |
Assignee: | MEDIATEK INC., Hsin-Chu, TW |
Family ID: |
49621289 |
Appl. No.: |
13/890254 |
Filed: |
May 9, 2013 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61/651,499 | May 24, 2012 |
Current U.S. Class: | 348/50; 348/208.99 |
Current CPC Class: | G06T 3/4053 20130101; H04N 5/23254 20130101; G06T 2207/20192 20130101; H04N 5/23222 20130101; H04N 5/772 20130101; G06T 2207/30168 20130101; G06T 3/4023 20130101; H04N 5/23248 20130101; H04N 5/23264 20130101; G06T 2207/10016 20130101; G06T 5/003 20130101; G06T 3/40 20130101; G06T 2207/30201 20130101; G06T 3/4007 20130101; H04N 9/79 20130101; H04N 5/23293 20130101; G06T 7/0002 20130101 |
Class at Publication: | 348/50; 348/208.99 |
International Class: | H04N 5/232 20060101 H04N005/232 |
Claims
1. An image capture device comprising: an image capture module,
arranged for capturing a plurality of consecutive preview images
under an automatic shot mode; and a controller, arranged for
analyzing the consecutive preview images to identify an image
capture quality metric index, and determining if a target image
capture condition is met by referring to at least the image capture
quality metric index; wherein a captured image for the automatic
shot mode is stored when the controller determines that the target
image capture condition is met.
2. The image capture device of claim 1, wherein the controller
analyzes each of the consecutive preview images to obtain inherent
image characteristic information, where the inherent image
characteristic information includes at least one of sharpness,
blur, brightness, contrast, and color; and the controller
determines the image capture quality metric index according to at
least the inherent image characteristic information.
3. The image capture device of claim 1, wherein the controller
identifies the image capture quality metric index by at least
performing a stable estimation for each preview image of the
consecutive preview images.
4. The image capture device of claim 1, wherein the controller
identifies the image capture quality metric index by at least
performing a blur value estimation for each preview image of the
consecutive preview images.
5. The image capture device of claim 1, wherein the controller
identifies the image capture quality metric index by analyzing at
least a portion of each preview image of the consecutive preview
images.
6. The image capture device of claim 5, wherein the controller
performs face detection upon each preview image to determine a face
region to act as at least the portion of each preview image.
7. The image capture device of claim 1, wherein the controller
further receives a sensor input which is indicative of a movement
status associated with the image capture module; and the controller
determines if the target image capture condition is met by
referring to the image capture quality metric index and the
movement status.
8. The image capture device of claim 1, wherein when the target
image capture condition is met, the controller directly selects one
of the consecutive preview images as the captured image.
9. The image capture device of claim 1, wherein after the target
image capture condition is met, the controller controls the image
capture module to capture a new image as the captured image.
10. An image capture method comprising: capturing a plurality of
consecutive preview images under an automatic shot mode; analyzing
the consecutive preview images to identify an image capture quality
metric index; determining if a target image capture condition is
met by referring to at least the image capture quality metric
index; and when the target image capture condition is met, storing
a captured image for the automatic shot mode.
11. The image capture method of claim 10, wherein the step of
identifying the image capture quality metric index comprises:
analyzing each of the consecutive preview images to obtain inherent
image characteristic information, where the inherent image
characteristic information includes at least one of sharpness,
blur, brightness, contrast, and color; and determining the image
capture quality metric index according to at least the inherent
image characteristic information.
12. The image capture method of claim 10, wherein the image capture
quality metric index is identified by at least performing a stable
estimation for each preview image of the consecutive preview
images.
13. The image capture method of claim 10, wherein the image capture
quality metric index is identified by at least performing a blur
value estimation for each preview image of the consecutive preview
images.
14. The image capture method of claim 10, wherein the image capture
quality metric index is identified by analyzing at least a portion
of each preview image of the consecutive preview images.
15. The image capture method of claim 14, wherein face detection is
performed upon each preview image to determine a face region to act
as at least the portion of each preview image.
16. The image capture method of claim 10, further comprising:
receiving a sensor input which is indicative of a movement status
associated with an image capture module which generates the
consecutive preview images; wherein the step of determining if the
target image capture condition is met comprises: determining if the
target image capture condition is met by referring to the image
capture quality metric index and the movement status.
17. The image capture method of claim 10, further comprising: after
the target image capture condition is met, directly selecting one
of the consecutive preview images as the captured image.
18. The image capture method of claim 10, further comprising: when
the target image capture condition is met, capturing a new image as
the captured image.
19. An image capture device comprising: a multi-view image capture
module, arranged for simultaneously generating a plurality of image
capture outputs respectively corresponding to a plurality of
different viewing angles; and a controller, arranged for
calculating an image capture quality metric index for each of the
image capture outputs; wherein a specific image capture output
generated from the multi-view image capture module is outputted by
the image capture device according to a plurality of image capture
quality metric indices of the image capture outputs.
20. The image capture device of claim 19, wherein each of the image
capture outputs is a single image or a video sequence.
21. The image capture device of claim 19, wherein the controller
refers to the image capture quality metric indices of the image
capture outputs to directly select one of the image capture outputs
as the specific image capture output.
22. The image capture device of claim 19, wherein the controller
refers to the image capture quality metric indices of the image
capture outputs to control the multi-view image capture module to
generate a new image capture output corresponding to a selected
viewing angle as the specific image capture output.
23. The image capture device of claim 19, wherein the controller
performs face detection upon each image capture output to obtain
face detection information, and determines the image capture
quality metric index according to at least the face detection
information.
24. The image capture device of claim 23, wherein the face
detection information includes at least one of a face angle, a face
number, a face size, a face position, a face symmetry, an eye
number, and an eye blink status.
25. The image capture device of claim 19, wherein the controller
receives auto-focus information of each image capture output from
the multi-view image capture module, and determines the image
capture quality metric index according to at least the auto-focus
information.
26. The image capture device of claim 19, wherein the controller
analyzes each of the image capture outputs to obtain inherent image
characteristic information, where the inherent image characteristic
information includes at least one of sharpness, blur, brightness,
contrast, and color; and the controller determines the image
capture quality metric index according to at least the inherent
image characteristic information.
27. An image capture method comprising: utilizing a multi-view
image capture module for simultaneously generating a plurality of
image capture outputs respectively corresponding to a plurality of
different viewing angles; calculating an image capture quality
metric index for each of the image capture outputs; and outputting
a specific image capture output generated from the multi-view image
capture module according to a plurality of image capture quality
metric indices of the image capture outputs.
28. The image capture method of claim 27, wherein each of the image
capture outputs is a single image or a video sequence.
29. The image capture method of claim 27, wherein the step of
outputting the specific image capture output comprises: referring
to the image capture quality metric indices of the image capture
outputs to directly select one of the image capture outputs as the
specific image capture output.
30. The image capture method of claim 27, wherein the step of
outputting the specific image capture output comprises: referring
to the image capture quality metric indices of the image capture
outputs to control the multi-view image capture module to generate
a new image capture output corresponding to a selected viewing
angle as the specific image capture output.
31. The image capture method of claim 27, wherein the step of
calculating the image capture quality metric index comprises:
performing face detection upon each image capture output to obtain
face detection information; and determining the image capture
quality metric index according to at least the face detection
information.
32. The image capture method of claim 31, wherein the face
detection information includes at least one of a face angle, a face
number, a face size, a face position, a face symmetry, an eye
number, and an eye blink status.
33. The image capture method of claim 27, wherein the step of
calculating the image capture quality metric index comprises:
receiving auto-focus information of each image capture output from
the multi-view image capture module; and determining the image
capture quality metric index according to at least the auto-focus
information.
34. The image capture method of claim 27, wherein the step of
calculating the image capture quality metric index comprises:
analyzing each of the image capture outputs to obtain inherent image
characteristic information, where the inherent image characteristic
information includes at least one of sharpness, blur, brightness,
contrast, and color; and determining the image capture quality
metric index according to at least the inherent image
characteristic information.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. provisional
application No. 61/651,499, filed on May 24, 2012 and incorporated
herein by reference.
BACKGROUND
[0002] The disclosed embodiments of the present invention relate to
an automatic shot scheme, and more particularly, to an image
capture device controlled according to the image capture quality
and related image capture method thereof.
[0003] Camera modules have become popular elements used in a
variety of applications. For example, a smartphone is typically
equipped with a camera module, thus allowing a user to easily and
conveniently take pictures by using the smartphone. However, due to
inherent characteristics of the smartphone, it is prone to
generating blurred images. For example, the camera aperture and/or
sensor size of the smartphone is typically small, which leads to a
small amount of light arriving at each pixel in the camera sensor. As a
result, the image quality may suffer from the small camera aperture
and/or sensor size.
[0004] Besides, due to the light weight and portability of the
smartphone, the smartphone tends to be affected by hand shake. In
general, the shake of the smartphone will last for a period of
time. Hence, any picture taken during this period of time would be
affected by the hand shake. An image deblurring algorithm may be
performed upon the blurred images. However, the computational
complexity of the image deblurring algorithm is very high,
resulting in considerable power consumption. Besides, artifacts will
be introduced if the image deblurring algorithm is not perfect.
[0005] Moreover, a camera module with an optical image stabilizer
(OIS) is expensive. Hence, the conventional smartphone is generally
equipped with a digital image stabilizer (i.e., an electronic image
stabilizer (EIS)). The digital image stabilizer can counteract the
motion of images, but fails to prevent image blurring.
[0006] In addition to the camera shake, the movement of a target
object within a scene to be captured may cause the captured image
to have blurry image contents. For example, considering a case
where the user wants to use the smartphone to take a picture of a
child, the captured image may have a blurry image content of the
child if the child is still when the user is going to touch the
shutter/capture button and then suddenly moves when the user
actually touches the shutter/capture button.
[0007] With the development of science and technology, users are
pursuing stereoscopic and more realistic image displays rather than
merely high-quality images. Hence, an electronic device (e.g., a smartphone)
may be equipped with a stereo camera and a stereo display. The
captured image or preview image generated by the stereo camera of
the smartphone can be a stereo image (i.e., an image pair including
a left-view image and a right-view image) or a single-view image
(i.e., one of a left-view image and a right-view image). That is,
even though the smartphone is equipped with the stereo camera, the
user may use the smartphone to capture a single-view image only,
or may send a single-view image selected from a stereo image
captured by the smartphone to a two-dimensional (2D) display or a
social network (e.g., Facebook). The conventional design simply
selects a single image with a fixed viewing angle from a stereo
image. However, the stereo images generated by the stereo camera
may have different image quality. Sometimes, one viewing angle is
better than the other viewing angle. Using a fixed viewing angle to
select a single image from a stereo image fails to generate a 2D
output with optimum image/video quality.
SUMMARY
[0008] In accordance with exemplary embodiments of the present
invention, an image capture device controlled according to the
image capture quality and related image capture method thereof are
proposed to solve the above-mentioned problem.
[0009] According to a first aspect of the present invention, an
exemplary image capture device is disclosed. The exemplary image
capture device includes an image capture module and a controller.
The image capture module is arranged for capturing a plurality of
consecutive preview images under an automatic shot mode. The
controller is arranged for analyzing the consecutive preview images
to identify an image capture quality metric index, and determining
if a target image capture condition is met by referring to at least
the image capture quality metric index, wherein a captured image
for the automatic shot mode is stored when the controller
determines that the target image capture condition is met.
[0010] According to a second aspect of the present invention, an
exemplary image capture method is disclosed. The exemplary image
capture method includes at least the following steps: capturing a
plurality of consecutive preview images under an automatic shot
mode; analyzing the consecutive preview images to identify an image
capture quality metric index; determining if a target image capture
condition is met by referring to at least the image capture quality
metric index; and when the target image capture condition is met,
storing a captured image for the automatic shot mode.
[0011] According to a third aspect of the present invention, an
exemplary image capture device is disclosed. The exemplary image
capture device includes a multi-view image capture module and a
controller. The multi-view image capture module is arranged for
simultaneously generating a plurality of image capture outputs
respectively corresponding to a plurality of different viewing
angles. The controller is arranged for calculating an image capture
quality metric index for each of the image capture outputs. A
specific image capture output generated from the multi-view image
capture module is outputted by the image capture device according
to a plurality of image capture quality metric indices of the image
capture outputs.
[0012] According to a fourth aspect of the present invention, an
exemplary image capture method is disclosed. The exemplary image
capture method includes at least the following steps: utilizing a
multi-view image capture module for simultaneously generating a
plurality of image capture outputs respectively corresponding to a
plurality of different viewing angles; calculating an image capture
quality metric index for each of the image capture outputs; and
outputting a specific image capture output generated from the
multi-view image capture module according to a plurality of image
capture quality metric indices of the image capture outputs.
[0013] These and other objectives of the present invention will no
doubt become obvious to those of ordinary skill in the art after
reading the following detailed description of the preferred
embodiment that is illustrated in the various figures and
drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a block diagram illustrating an image capture
device according to a first embodiment of the present
invention.
[0015] FIG. 2 is a diagram illustrating an example of generating a
captured image under the automatic shot mode according to an
embodiment of the present invention.
[0016] FIG. 3 is a flowchart illustrating an image capture method
according to an embodiment of the present invention.
[0017] FIG. 4 is a flowchart illustrating an image capture method
according to another embodiment of the present invention.
[0018] FIG. 5 is a block diagram illustrating an image capture
device according to a second embodiment of the present
invention.
[0019] FIG. 6 is an example illustrating an operation of obtaining
the specific image capture output according to an embodiment of the
present invention.
[0020] FIG. 7 is an example illustrating an operation of obtaining
the specific image capture output according to another embodiment
of the present invention.
[0021] FIG. 8 is an example illustrating an operation of obtaining
the specific image capture output according to yet another
embodiment of the present invention.
[0022] FIG. 9 is a flowchart illustrating an image capture method
according to another embodiment of the present invention.
DETAILED DESCRIPTION
[0023] Certain terms are used throughout the description and
following claims to refer to particular components. As one skilled
in the art will appreciate, manufacturers may refer to a component
by different names. This document does not intend to distinguish
between components that differ in name but not function. In the
following description and in the claims, the terms "include" and
"comprise" are used in an open-ended fashion, and thus should be
interpreted to mean "include, but not limited to . . . ". Also, the
term "couple" is intended to mean either an indirect or direct
electrical connection. Accordingly, if one device is coupled to
another device, that connection may be through a direct electrical
connection, or through an indirect electrical connection via other
devices and connections.
[0024] One technical feature of the present invention is to obtain
and store a captured image when a target image capture condition
(e.g., a stable image capture condition) is met under an automatic
shot mode. For example, it is determined that the target image
capture condition (e.g., the stable image capture condition) is met
when a region of stable (ROS) is found stable due to having no
movement/small movement, having a small blur value, and/or having a
better image quality metric index. In this way, a non-blurred image
(or better quality image) can be automatically obtained and stored
under the automatic shot mode by checking the stable image capture
condition. Another technical feature of the present invention is to
output a specific image capture output generated from a multi-view
image capture module according to a plurality of image capture
quality metric indices of a plurality of image capture outputs
(e.g., image outputs or video outputs) respectively corresponding
to a plurality of different viewing angles. In this way, a 2D
image/video output derived from image capture outputs of the
multi-view image capture module would have optimum image/video
quality. Further details are described as below.
[0025] Please refer to FIG. 1, which is a block diagram
illustrating an image capture device according to a first
embodiment of the present invention. The image capture device 100
may be at least a portion (i.e., part or all) of an electronic
device. For example, the image capture device 100 may be
implemented in a portable device such as a smartphone or a digital
camera. In this embodiment, the image capture device 100 includes,
but is not limited to, an image capture module 102, a controller
104, a storage device 106, and a shutter/capture button 108. The
shutter/capture button 108 may be a physical button installed on
the housing or a virtual button displayed on a touch screen. In
this embodiment, the user may touch/press the shutter/capture
button 108 to activate an automatic shot mode for enabling the
image capture device 100 to generate and store a captured image
automatically. The image capture module 102 has the image capture
capability, and may be used to generate a captured image when
triggered by touch/press of the shutter/capture button 108. As the
present invention focuses on the control scheme applied to the
image capture module 102 rather than an internal structure of the
image capture module 102, further description of the internal
structure of the image capture module 102 is omitted here for
brevity.
[0026] In this embodiment, when the shutter/capture button 108 is
touched/pressed to activate the automatic shot mode, the image
capture module 102 captures a plurality of consecutive preview
images IMG_Pre under the automatic shot mode until the preview
images show that a stable image capture condition is met.
Specifically, the controller 104 is arranged for analyzing the
consecutive preview images IMG_Pre to identify an image capture
quality metric index, and determining if the stable image capture
condition is met by referring to at least the image capture quality
metric index. By way of example, but not limitation, the image
capture quality metric index may be indicative of an image blur
degree, and the controller 104 may identify the image capture
quality metric index by performing a predetermined processing
operation upon a region of stable (ROS) in each preview image of
the consecutive preview images IMG_Pre.
[0027] In one exemplary design, the ROS region in each preview
image is determined by the controller 104 automatically without
user intervention. For example, the controller 104 performs face
detection upon each preview image to determine a face region which
is used as the ROS region in each preview image. Each face region
may include one or more face images, each defined by a position (x,
y) and a size (w, h), where x and y represent the X-coordinate and
the Y-coordinate of a center (or a left-top corner) of a face
image, and w and h represent the width and the height of the face
image. It should be noted that a face region found in one preview
image may be identical to or different from a face region found in
another preview image. In other words, as a face region is
dynamically found in each preview image, the face region is not
necessarily a fixed image region in each of the consecutive preview
images IMG_Pre. Alternatively, the controller 104 may use a center
region, a focus region determined by auto-focus, a complex texture
region determined by edge detection, or an entire image to act as
the ROS region in each preview image. It should be noted that
position and size of the ROS region in each preview image are fixed
when the center region or the entire image is used as the ROS
region. However, position and size of the ROS region in each
preview image are not necessarily fixed when the focus region
(which is dynamically determined by auto-focus performed for
capturing each preview image) or the complex texture region (which
is dynamically determined by edge detection performed by the
controller 104 upon each preview image) is used as the ROS
region.
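As one concrete illustration of the fixed center-region choice described above, a helper might derive an (x, y, w, h) rectangle covering a fraction of the frame. This is a hypothetical sketch, not code from the disclosure; the function name `center_ros` and the 0.5 default fraction are assumptions for illustration.

```python
def center_ros(frame_width, frame_height, frac=0.5):
    """Return a fixed center region (x, y, w, h) covering `frac` of each
    frame dimension, one of the automatic ROS choices described above."""
    w = int(frame_width * frac)
    h = int(frame_height * frac)
    x = (frame_width - w) // 2
    y = (frame_height - h) // 2
    return (x, y, w, h)
```

Because the center region depends only on the frame dimensions, its position and size stay fixed across all consecutive preview images, as the paragraph notes.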
[0028] In another exemplary design, the ROS region in each preview
image is determined by the controller 104 in response to a user
input USER_IN. That is, the ROS region is manually selected by the
user. For example, before the image capture device 100 enters the
automatic shot mode, the user may determine a touch focus region by
entering the user input USER_IN through a touch screen (not shown).
After the image capture device 100 enters the automatic shot mode,
the controller 104 uses the touch focus region selected by the user
input USER_IN to act as the ROS region in each preview image. It
should be noted that position and size of the ROS region in each
preview image may be fixed since the touch focus region is
determined before the automatic shot mode is activated.
Alternatively, the position and size of the ROS region in each
preview image may not be fixed since the ROS region can be tracked
using object tracking technology.
[0029] The image capture quality metric index may be identified by
performing one or more predetermined processing operations upon the
ROS region in each preview image of the consecutive preview images
IMG_Pre. For example, the controller 104 may identify the image
capture quality metric index by estimating the image blur degree.
In one exemplary design, the image blur degree can be estimated by
performing a stable estimation for the ROS region in each preview
image of the consecutive preview images IMG_Pre. Hence, the
controller 104 detects a zero image blur degree when the stable
estimation result indicates a completely stable state, e.g. no
movement, and detects a low image blur degree when the stable
estimation result indicates a nearly stable state, e.g. small
movement.
[0030] In a first exemplary embodiment, the stable estimation may
be implemented using motion estimation performed upon ROS regions
of the consecutive preview images IMG_Pre. Regarding an ROS region
in one preview image, when the motion vector obtained by the motion
estimation is zero, the stable estimation result indicates a
completely stable state, e.g. no movement; and when the motion
vector obtained by the motion estimation is close to zero, the
stable estimation result indicates a nearly stable state, e.g.
small movement.
[0031] In a second exemplary embodiment, the stable estimation may
be implemented by calculating a sum of absolute differences (SAD)
or a sum of squared differences (SSD) between ROS regions of two
consecutive preview images. Regarding ROS regions of two
consecutive preview images, when the SAD/SSD value is zero, the
stable estimation result indicates a completely stable state, e.g.
no movement; and when the SAD/SSD value is close to zero, the
stable estimation result indicates a nearly stable state, e.g.
small movement.
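The SAD-based stable estimation above can be sketched in Python. This is an illustrative sketch only: regions are plain lists of grayscale rows, and the `eps` threshold separating "nearly stable" from "moving" is a hypothetical value, not one specified by the disclosure.

```python
def sad(roi_a, roi_b):
    """Sum of absolute differences between two equally sized grayscale regions."""
    return sum(abs(a - b)
               for row_a, row_b in zip(roi_a, roi_b)
               for a, b in zip(row_a, row_b))

def stable_state(roi_prev, roi_curr, eps=4):
    """Classify the stable-estimation result for ROS regions of two
    consecutive preview images: SAD of zero means a completely stable
    state (no movement); SAD close to zero (below eps) means a nearly
    stable state (small movement)."""
    s = sad(roi_prev, roi_curr)
    if s == 0:
        return "completely stable"
    return "nearly stable" if s < eps else "moving"
```

Replacing `abs(a - b)` with `(a - b) ** 2` in `sad` yields the SSD variant mentioned in the same paragraph.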
[0032] In a third exemplary embodiment, the stable estimation may
be implemented by calculating a difference between positions of ROS
regions of two consecutive preview images and calculating a
difference between sizes of the ROS regions of the two consecutive
preview images. For example, in a case where the ROS region in each
preview image is determined by face detection, the position
difference and the size difference between ROS regions of two
consecutive preview images may be used to determine the stable
estimation result. When the position difference and the size
difference are both zero, the stable estimation result indicates a
completely stable state, e.g. no movement. When the position
difference and the size difference are both close to zero, the
stable estimation result indicates a nearly stable state, e.g.
small movement. When one of the position difference and the size
difference is zero and the other of the position difference and the
size difference is close to zero, the stable estimation result
also indicates a nearly stable state, e.g. small movement.
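Assuming face regions are (x, y, w, h) tuples as defined in paragraph [0027], the position/size comparison above might look like the following sketch; `pos_eps` and `size_eps` are hypothetical thresholds for "close to zero", not values from the disclosure.

```python
def face_region_stable(prev, curr, pos_eps=2, size_eps=2):
    """Classify stability from two consecutive face regions (x, y, w, h):
    both differences zero -> completely stable; both zero or close to
    zero -> nearly stable; otherwise moving."""
    pos_diff = abs(curr[0] - prev[0]) + abs(curr[1] - prev[1])
    size_diff = abs(curr[2] - prev[2]) + abs(curr[3] - prev[3])
    if pos_diff == 0 and size_diff == 0:
        return "completely stable"
    if pos_diff <= pos_eps and size_diff <= size_eps:
        return "nearly stable"
    return "moving"
```

Note that the "one difference zero, the other close to zero" case in the paragraph falls under the same nearly-stable branch.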
[0033] In addition to the stable estimation, the controller 104 may
further perform another predetermined processing operation (e.g., a
blur value estimation) for each preview image of the consecutive
preview images IMG_Pre. In other words, the controller 104 may be
configured to identify the image blur degree by referring to both
of the stable estimation result and the blur value estimation
result. Hence, the controller 104 detects a zero image blur degree
when the stable estimation result indicates a completely stable
state (e.g. no movement) and the blur value indicates no blur, and
detects a low image blur degree when the stable estimation result
indicates a nearly stable state (e.g. small movement) and the blur
value is small.
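The combination described in this paragraph can be sketched as a small decision function; the numeric return scale and the `blur_eps` threshold are illustrative assumptions, not values given in the disclosure.

```python
def image_blur_degree(stable_result, blur_value, blur_eps=10):
    """Combine the stable-estimation result with a blur-value estimate:
    zero degree when completely stable with no blur; low degree when
    nearly stable with a small blur value; otherwise a higher degree."""
    if stable_result == "completely stable" and blur_value == 0:
        return 0  # zero image blur degree
    if stable_result == "nearly stable" and blur_value < blur_eps:
        return 1  # low image blur degree
    return 2      # higher blur degree; keep capturing preview images
```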
[0034] In a first exemplary embodiment, the blur value estimation
may be implemented by performing edge detection upon the ROS region
in each preview image, and then calculating the edge magnitude
derived from the edge detection to act as a blur value of the ROS
region.
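A minimal sketch of the edge-magnitude idea, using simple first-order gradients in place of a full edge detector: a low total magnitude for the ROS region suggests blurred content. The gradient choice is an assumption for illustration; any standard edge detector could supply the magnitudes.

```python
def edge_magnitude(region):
    """Approximate total edge magnitude of a grayscale region (list of
    rows) by summing absolute horizontal and vertical differences;
    sharper content yields a larger value, so a small value indicates
    a blurred ROS region."""
    h, w = len(region), len(region[0])
    total = 0
    for y in range(h):
        for x in range(w):
            gx = region[y][x + 1] - region[y][x] if x + 1 < w else 0
            gy = region[y + 1][x] - region[y][x] if y + 1 < h else 0
            total += abs(gx) + abs(gy)
    return total
```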
[0035] In a second exemplary embodiment, the blur value estimation
may be implemented by calculating the image visual quality
assessment metric of the ROS region in each preview image to act as
a blur value of the ROS region.
[0036] In a third exemplary embodiment, the blur value estimation
may be implemented by obtaining inherent image characteristic
information of each of the consecutive preview images by analyzing
the consecutive preview images, and determining a blur value
estimation result according to the inherent image characteristic
information, where the inherent image characteristic information
includes at least one of sharpness, blur, brightness, contrast, and
color. To put it another way, the image capture quality metric
index (e.g., the image blur degree) may be determined according to
at least the inherent image characteristic information.
[0037] When identifying either a zero image blur degree or a low
image blur degree (i.e., detecting that the image blur degree is
lower than a predetermined threshold) by checking ROS region(s) of
one or more preview images of the consecutive preview images
IMG_Pre, the controller 104 determines that the stable image
capture condition is met. In other words, the controller 104
determines that the stable image capture condition is met when an
ROS region is found stable without any change or with a small
change. However, this merely serves as one possible implementation
of the present invention. In an alternative design, the stable
image capture condition may be checked by referring to the image
blur degree identified using preview images and additional
indicator(s) provided by other circuit element(s). For example,
when the image capture device 100 is employed in a smartphone, the
controller 104 may further receive a sensor input SENSOR_IN from at
least one sensor 101 of the smartphone, where the sensor input
SENSOR_IN is indicative of a movement status associated with the
image capture device 100, especially a movement status of the image
capture module 102. For example, the sensor 101 may be a G-sensor
or a Gyro sensor. Hence, the controller 104 determines if the
stable image capture condition is met by referring to the image
blur degree and the movement status. In other words, the controller
104 determines that the stable image capture condition is met when
the ROS region is found stable due to zero image blur degree/low
image blur degree and the camera is found stable due to zero
movement/small movement of the image capture module 102.
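Combining the two cues as described, the check might be sketched as below; the threshold values and names are illustrative assumptions:

```python
def stable_condition_met(image_blur_degree, sensor_movement,
                         blur_threshold=0.1, motion_threshold=0.05):
    """Sketch of the combined check: the stable image capture
    condition is met only when the ROS region shows zero/low blur
    AND the G-sensor/Gyro input reports zero/small movement of the
    image capture module. Thresholds are illustrative assumptions."""
    ros_stable = image_blur_degree < blur_threshold
    camera_stable = sensor_movement < motion_threshold
    return ros_stable and camera_stable
```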
[0038] When the stable image capture condition is met under the
automatic shot mode, the controller 104 stores a captured image IMG
into the storage device (e.g., a non-volatile memory) 106 as an
image capture result for the automatic shot mode activated by
user's touch/press of the shutter/capture button 108. In one
exemplary design, the controller 104 directly selects one of the
consecutive preview images IMG_Pre as the captured image IMG, where
the consecutive preview images IMG_Pre are obtained before the
stable image capture condition is met. For example, the last
preview image which has a stable ROS region and is captured under
the condition that the camera is stable may be selected as the
captured image IMG. In another exemplary design, when the stable
image capture condition is met, the controller 104 controls the
image capture module 102 to capture a new image IMG_New as the
captured image IMG. That is, none of the preview images generated
before the stable image capture condition is met is selected as the
captured image IMG, and an image captured immediately after the
stable image capture condition is met is the captured image
IMG.
[0039] For better understanding of technical features of the
present invention, please refer to FIG. 2, which is a diagram
illustrating an example of generating a captured image under the
automatic shot mode according to an embodiment of the present
invention. Suppose that face detection is used to select the ROS
region in each preview image. As shown in the sub-diagram (A) in
FIG. 2, the image capture device 100 is affected by hand shake when
capturing the preview image. Thus, both the face region of a target
object (i.e., a person) and the remaining parts of this preview
image generated under the automatic shot mode are blurry. The
controller 104 determines that the stable image capture condition
is not met because the ROS region (i.e., the face region) is found
unstable due to high image blur degree and the camera is found
unstable due to large movement of the image capture module 102.
[0040] As shown in the sub-diagram (B) in FIG. 2, the image capture
device 100 is not affected by hand shake, but the target object
(i.e., the person) moves his head when the image capture device 100
captures the preview image. Thus, the face region of the target
object is blurry, but the remaining parts of this preview image
generated under the automatic shot mode are clear. Though the
camera is found stable due to zero movement of the image capture
module 102, the controller 104 also determines that the stable
image capture condition is not met because the ROS region (i.e.,
the face region) is found unstable due to high image blur
degree.
[0041] As shown in the sub-diagram (C) in FIG. 2, the image capture
device 100 is not affected by hand shake, and the target object
(i.e., the person) is still when the image capture device 100
captures the preview image. Thus, the face region and the remaining
parts of this preview image generated under the automatic shot mode
are clear. At this moment, the controller 104 determines that the
stable image capture condition is met because the ROS region (i.e.,
the face region) is found stable due to zero image blur degree and
the camera is found stable due to zero movement of the image
capture module 102. In this way, the image capture device 100 can
successfully obtain a desired non-blurred image for the automatic
shot mode when the stable image capture condition is met.
[0042] The above-mentioned exemplary operation of checking an ROS
region (e.g., a face region) to determine if a stable image capture
condition is met is performed under an automatic shot mode, and is
therefore different from an auto-focus operation performed based on
the face region. Specifically, the auto-focus operation checks the
face region to adjust the lens position for automatic focus
adjustment. After the focus point is successfully set by the
auto-focus operation based on the face region, the automatic shot
mode is enabled. Thus, the consecutive preview images IMG_Pre are
captured under a fixed focus setting configured by the auto-focus
operation. In other words, during the procedure of checking the ROS
region (e.g., the face region) to determine if the stable image
capture condition is met, no focus adjustment is made to the
lens.
[0043] Please refer to FIG. 1 in conjunction with FIG. 3. FIG. 3 is
a flowchart illustrating an image capture method according to an
embodiment of the present invention. The image capture method may
be employed by the image capture device 100. Provided that the
result is substantially the same, the steps are not required to be
executed in the exact order shown in FIG. 3. The image capture
method may be briefly summarized by the following steps.
[0044] Step 200: Start.
[0045] Step 202: Check if the shutter/capture button 108 is
touched/pressed to activate an automatic shot mode. If yes, go to
step 204; otherwise, perform step 202 again.
[0046] Step 204: Utilize the image capture module 102 to capture
preview images.
[0047] Step 206: Utilize the controller 104 to analyze consecutive
preview images to identify an image capture quality metric index
(e.g., an image blur degree).
[0048] Step 208: Receive a sensor input SENSOR_IN indicative of a
movement status associated with the image capture module 102.
[0049] Step 210: Determine if a target image capture condition
(e.g., a stable image capture condition) is met by referring to the
image capture quality metric index (e.g., the image blur degree)
and the movement status. If yes, go to step 212; otherwise, go to
step 204.
[0050] Step 212: Store a captured image for the automatic shot mode
into the storage device 106. For example, one of the consecutive
preview images obtained before the stable image capture condition
is met is directly selected as the captured image to be stored, or
a new image captured immediately after the stable image capture
condition is met is used as the captured image to be stored.
[0051] Step 214: End.
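Steps 202-214 can be condensed into a small control loop; the callback names, thresholds, and retry limit below are illustrative assumptions:

```python
def automatic_shot(capture_preview, blur_degree, movement,
                   blur_threshold=0.1, motion_threshold=0.05,
                   max_tries=100):
    """Loop corresponding to steps 204-212: keep capturing preview
    images until the image blur degree (step 206) and the
    sensor-reported movement (step 208) both fall below their
    thresholds (step 210), then return that preview image as the
    captured image to be stored (step 212)."""
    for _ in range(max_tries):
        img = capture_preview()
        if (blur_degree(img) < blur_threshold
                and movement() < motion_threshold):
            return img
    return None  # scene never became stable within max_tries
```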
[0052] It should be noted that step 208 may be omitted, depending
upon actual design consideration/requirement. That is, in an
alternative design, a stable image capture condition may be checked
without referring to the sensor input SENSOR_IN. Please refer to
FIG. 4, which is a flowchart illustrating an image capture method
according to another embodiment of the present invention. The major
difference between the exemplary image capture methods shown in
FIG. 3 and FIG. 4 is that step 208 is omitted, and step 210 is
replaced by step 310 as below.
[0053] Step 310: Determine if a target image capture condition
(e.g., a stable image capture condition) is met by referring to the
image capture quality metric index (e.g., the image blur degree).
If yes, go to step 212; otherwise, go to step 204.
[0054] As a person skilled in the art can readily understand
details of each step shown in FIG. 3 and FIG. 4 after reading the above
paragraphs directed to the image capture device 100 shown in FIG.
1, further description is omitted here for brevity.
[0055] FIG. 5 is a block diagram illustrating an image capture
device according to a second embodiment of the present invention.
The image capture device 500 may be at least a portion (i.e., part
or all) of an electronic device. For example, the image capture
device 500 may be implemented in a portable device such as a
smartphone or a digital camera. In this embodiment, the image
capture device 500 includes, but is not limited to, a multi-view
image capture module 502, a controller 504, a storage device 506, a
shutter/capture button 508, and an optional electronic image
stabilization (EIS) module 510. The shutter/capture button 508 may
be a physical button installed on the housing or a virtual button
displayed on a touch screen. In this embodiment, even though the
image capture device 500 is equipped with the multi-view image
capture module 502, the user may touch/press the shutter/capture
button 508 to enable the image capture device 500 to output a
single-view image or a single-view video sequence. The multi-view
image capture module 502 has the image capture capability, and is
capable of simultaneously generating a plurality of image capture
outputs respectively corresponding to a plurality of different
viewing angles, where each of the image capture outputs may be a
single image or a video sequence composed of consecutive images. In
this embodiment, the multi-view image capture module 502 may be
implemented using a camera array or a multi-lens camera, and thus
may be regarded as having a plurality of camera units for
generating image capture outputs respectively corresponding to
different viewing angles. By way of example, the multi-view image
capture module 502 shown in FIG. 5 may be a stereo camera
configured to have two camera units 512 and 514, where the camera
unit 512 is used to generate a right-view image capture output
S_OUT.sub.R, and the camera unit 514 is used to generate a
left-view image capture output S_OUT.sub.L. It should be noted that
the number of camera units is not meant to be a limitation of the
present invention. As the present invention focuses on the camera
selection scheme applied to the multi-view image capture module 502
and the output selection scheme applied to image capture outputs
generated from the multi-view image capture module 502, further
description of the internal structure of the multi-view image
capture module 502 is omitted here for brevity.
[0056] The controller 504 is arranged for calculating an image
capture quality metric index for each of the image capture outputs.
Regarding each of the image capture outputs, the image capture
quality metric index may be calculated based on a selected image
region (e.g., a face region having one or more face images) or an
entire image area of each image. Besides, the image capture quality
metric index is correlated with an image blur degree. For example,
the image capture quality metric index would indicate good image
capture quality when the image blur degree is low, and the image
capture quality metric index would indicate poor image capture
quality when the image blur degree is high.
[0057] In a case where each of the image capture outputs
S_OUT.sub.R and S_OUT.sub.L is a single image, the aforementioned
image capture quality metric index is an image quality metric
index. In a first exemplary embodiment, the controller 504 performs
face detection upon each image capture output (i.e., each of a
left-view image and a right-view image) to obtain face detection
information, and determines the image capture quality metric index
(i.e., the image quality metric index) according to the face
detection information. As mentioned above, the left-view image and
the right-view image generated from a stereo camera may have
different quality. For example, the face in one of the left-view
image and the right-view image is clear, but the same face in the
other of the left-view image and the right-view image may be
blurry. In other words, one of the left-view image and the
right-view image is clear, but the other of the left-view image and
the right-view image is blurry. Besides, due to different viewing
angles of the left-view image and the right-view image, one of the
two images may have more human faces than the other, or a better
face angle. Thus, the face detection information is
indicative of the image quality of the left-view image and the
right-view image. By way of example, the obtained face detection
information may include at least one of a face angle, a face number
(i.e., the number of human faces detected in an entire image), a
face size (e.g., the size of a face region having one or more human
faces), a face position (e.g., the position of a face region having
one or more human faces), a face symmetry (e.g., a ratio of left
face and right face of a face region having one or more human
faces), an eye number (i.e., the number of human eyes detected in
an entire image), and an eye blink status (i.e., the number of
blinking human eyes detected in an entire image). In this
embodiment, the image capture quality metric index (i.e., the image
quality metric index) is set to a larger value when an image
capture output S_OUT.sub.R, S_OUT.sub.L (i.e., a left-view image or
a right-view image) has larger front faces, more front faces, a
larger eye number, and/or fewer blinking eyes.
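A sketch of how such face detection information could be folded into one score follows; the field names and weights are illustrative assumptions, not values from the application:

```python
def face_quality_index(face_info):
    """Combine face detection information into one image quality
    metric index: larger faces, more faces, and more detected eyes
    raise the score, while blinking eyes lower it. Field names and
    weights are illustrative assumptions."""
    return (1.0 * face_info.get("face_size", 0)
            + 0.5 * face_info.get("face_number", 0)
            + 0.25 * face_info.get("eye_number", 0)
            - 1.0 * face_info.get("blinking_eyes", 0))
```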
[0058] In a second exemplary embodiment, the controller 504
receives auto-focus information INF_1 of each image capture output
(i.e., each of a left-view image and a right-view image) from the
multi-view image capture module 502, and determines the image
capture quality metric index according to the auto-focus
information INF_1. As the camera units 512 and 514 work
individually even though both use the same camera setting,
auto-focus functions of the camera units 512 and 514 may focus on
different objects. Thus, the auto-focus information INF_1 is
indicative of the image quality of the left-view image and the
right-view image. In this embodiment, the image capture quality
metric index (i.e., the image quality metric index) is set to a
larger value when an image capture output S_OUT.sub.R, S_OUT.sub.L
(i.e., a left-view image or a right-view image) has a better
auto-focus result.
[0059] In a third exemplary embodiment, the controller 504 analyzes
at least a portion (i.e., part or all) of each of the image capture
outputs (i.e., each of a left-view image and a right-view image) to
obtain inherent image characteristic information, and determines
the image capture quality metric index (i.e., the image quality
metric index) according to the inherent image characteristic
information. The inherent image characteristic information is
indicative of the image quality of the left-view image and the
right-view image. For example, the inherent image characteristic
information may include at least one of sharpness, blur,
brightness, contrast, and color. In this embodiment, the image
capture quality metric index (i.e., the image quality metric index)
is set to a larger value when an image capture output S_OUT.sub.R,
S_OUT.sub.L (i.e., a left-view image or a right-view image) is
sharper or has a more suitable brightness distribution (e.g., a
better exposure or white balance result).
[0060] In a fourth exemplary embodiment, the controller 504
determines the image capture quality metric index (i.e., the image
quality metric index) of each of the image capture outputs (i.e.,
each of a left-view image and a right-view image) according to
electronic image stabilization (EIS) information INF_2 given by the
optional EIS module 510. When the EIS module 510 is implemented in
the image capture device 500, the EIS information INF_2 is
indicative of the image quality of the left-view image and the
right-view image. In this embodiment, the image capture quality
metric index (i.e., the image quality metric index) is set to a
larger value when an image capture output S_OUT.sub.R, S_OUT.sub.L
(i.e., a left-view image or a right-view image) is given more image
stabilization.
[0061] In a fifth exemplary embodiment, the controller 504 may
employ any combination of above-mentioned face detection
information, auto-focus information, inherent image characteristic
information and EIS information to determine the image capture
quality metric index.
[0062] In another case where each of the image capture outputs
S_OUT.sub.R and S_OUT.sub.L is a video sequence composed of
consecutive images, the aforementioned image capture quality metric
index is a video quality metric index. Similarly, the controller
504 may employ face detection information, auto-focus information,
inherent image characteristic information, and/or EIS information
to determine the image capture quality metric index (i.e., the
video quality metric index) of each of the image capture outputs
S_OUT.sub.L and S_OUT.sub.R (i.e., a left-view video sequence and a
right-view video sequence). For example, the video quality metric
index may be derived from processing (e.g., summing or averaging)
image quality metric indices of images included in the same video
sequence. Alternatively, other video quality assessment methods may
be employed for determining the video quality metric index of each
video sequence.
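For instance, the averaging option mentioned above could be sketched as:

```python
def video_quality_index(image_quality_indices):
    """Derive a sequence's video quality metric index by averaging
    the per-image quality metric indices. Summing, mentioned above
    as an alternative, ranks equal-length sequences identically."""
    return sum(image_quality_indices) / len(image_quality_indices)
```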
[0063] Based on image capture quality metric indices of the image
capture outputs S_OUT.sub.R and S_OUT.sub.L, a specific image
capture output S_OUT generated from the multi-view image capture
module 502 may be saved as a file in the storage device 506 (e.g.,
a non-volatile memory), and then outputted by the image capture
device 500 to a display device (e.g., a 2D display screen) 516 or a
network (e.g., a social network) 518. For example, the specific
image capture output S_OUT may be used for further processing such
as face recognition or image enhancement.
[0064] In an embodiment of the present invention, when an image
capture quality metric index is assigned a larger value, it
means that the image/video quality is better. Hence, based on
comparison of the image capture quality metric indices of the image
capture outputs, the controller 504 knows which one of the image
capture outputs has the best image/video quality.
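Under this larger-is-better convention, the comparison reduces to picking the output with the largest index; the function and argument names are illustrative:

```python
def select_best_output(outputs, quality_indices):
    """Return the image capture output whose image capture quality
    metric index is largest, i.e. the one the controller identifies
    as having the best image/video quality."""
    best = max(range(len(outputs)), key=lambda i: quality_indices[i])
    return outputs[best]
```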
[0065] In a case where each of the image capture outputs
S_OUT.sub.R, S_OUT.sub.L is a single image, the controller 504
refers to the image capture quality metric indices of the image
capture outputs S_OUT.sub.R, S_OUT.sub.L to directly select one of
the image capture outputs S_OUT.sub.R, S_OUT.sub.L as the specific
image capture output S_OUT. Please refer to FIG. 6, which is an
example illustrating an operation of obtaining the specific image
capture output S_OUT according to an embodiment of the present
invention. As shown in FIG. 6, the left-view image S_OUT.sub.L has
a blurry face region, and the right-view image S_OUT.sub.R is
clear. Hence, the image quality metric index of the right-view
image S_OUT.sub.R is larger than that of the left-view image
S_OUT.sub.L, which implies that the right-view image S_OUT.sub.R
has better image quality due to a stable face region in this
example. Based on the comparison result of the image quality metric
indices of the right-view image S_OUT.sub.R and the left-view image
S_OUT.sub.L, the controller 504 directly selects the right-view
image S_OUT.sub.R as the specific image S_OUT to be stored and
outputted.
[0066] Alternatively, the controller 504 may refer to the image
capture quality metric indices of the image capture outputs
S_OUT.sub.R, S_OUT.sub.L to control the multi-view image capture
module 502 to generate a new image capture output S_OUT.sub.N
corresponding to a selected viewing angle as the specific image
capture output S_OUT. Please refer to FIG. 7, which is an example
illustrating an operation of obtaining the specific image capture
output S_OUT according to another embodiment of the present
invention. As shown in FIG. 7, the left-view image S_OUT.sub.L has
a blurry face region, and the right-view image S_OUT.sub.R is
clear. Hence, the image quality metric index of the right-view
image S_OUT.sub.R is larger than that of the left-view image
S_OUT.sub.L, which implies that the right-view image S_OUT.sub.R
has better image quality due to a stable face region in this
example. Based on the comparison result of the image quality metric
indices of the right-view image S_OUT.sub.R and the left-view image
S_OUT.sub.L, the controller 504 selects the camera unit 512 such
that a new captured image S_OUT.sub.N corresponding to a selected
viewing angle is generated as the specific image S_OUT to be stored
and outputted.
[0067] In another case where each of the image capture outputs
S_OUT.sub.R, S_OUT.sub.L is a video sequence composed of
consecutive images, the controller 504 refers to the image capture
quality metric indices of the image capture outputs S_OUT.sub.R,
S_OUT.sub.L to directly select one of the image capture outputs
S_OUT.sub.R, S_OUT.sub.L as the specific image capture output
S_OUT. Please refer to FIG. 8, which is an example illustrating an
operation of obtaining the specific image capture output S_OUT
according to yet another embodiment of the present invention. As
shown in FIG. 8, the left-view video sequence S_OUT.sub.L includes
two images each having a blurry face region, whereas each image
included in the right-view video sequence S_OUT.sub.R is clear.
Hence, the video quality metric index of the right-view video
sequence S_OUT.sub.R is larger than that of the left-view video
sequence S_OUT.sub.L, which implies that the right-view video
sequence S_OUT.sub.R has better video quality due to a stable face
region in this example. Based on the comparison result of the video
quality metric indices of the right-view video sequence S_OUT.sub.R
and the left-view video sequence S_OUT.sub.L, the controller 504
directly selects the right-view video sequence S_OUT.sub.R as the
specific video sequence S_OUT to be stored and outputted.
[0068] Please refer to FIG. 5 in conjunction with FIG. 9. FIG. 9 is
a flowchart illustrating an image capture method according to
another embodiment of the present invention. The image capture
method may be employed by the image capture device 500. Provided
that the result is substantially the same, the steps are not
required to be executed in the exact order shown in FIG. 9. The
image capture method may be briefly summarized by the following
steps.
[0069] Step 900: Start.
[0070] Step 902: Check if the shutter/capture button 508 is
touched/pressed. If yes, go to step 904; otherwise, perform step
902 again.
[0071] Step 904: Utilize the multi-view image capture module 502
for simultaneously generating a plurality of image capture outputs
respectively corresponding to a plurality of different viewing
angles. For example, each of the image capture outputs is a single
image when the shutter/capture button 508 is touched/pressed to
enable a photo mode. For another example, each of the image capture
outputs is a video sequence when the shutter/capture button 508 is
touched/pressed to enable a video recording mode.
[0072] Step 906: Utilize the controller 504 for calculating an
image capture quality metric index for each of the image capture
outputs. For example, the image capture quality metric index may be
derived from face detection information, auto-focus information,
inherent image characteristic information, and/or EIS
information.
[0073] Step 908: Utilize the controller 504 to compare image
capture quality metric indices of the image capture outputs.
[0074] Step 910: Utilize the controller 504 to decide which one of
the image capture outputs has the best image/video quality based on the
comparison result.
[0075] Step 912: Output a specific image capture output generated
from the multi-view image capture module 502 according to an image
capture output identified in step 910 to have the best image/video
quality. For example, the image capture output which is identified
in step 910 to have the best image/video quality is directly selected
as the specific image capture output. For another example, a camera
unit used for generating the image capture output which is
identified in step 910 to have the best image/video quality is selected
to capture a new image capture output as the specific image capture
output.
[0076] Step 914: End.
[0077] As a person skilled in the art can readily understand
details of each step shown in FIG. 9 after reading the above paragraphs
directed to the image capture device 500 shown in FIG. 5, further
description is omitted here for brevity.
[0078] As mentioned above, the first image capture method shown in
FIG. 3/FIG. 4 is to obtain a captured image based on consecutive
preview images generated by a single camera unit in a temporal
domain, and the second image capture method shown in FIG. 9 is to
obtain a specific image capture output based on multiple image
capture outputs (e.g., multi-view images or multi-view video
sequences) generated by different camera units in a spatial domain.
However, combining technical features of the first image capture
method shown in FIG. 3/FIG. 4 and the second image capture method
shown in FIG. 9 to obtain a captured image based on multiple image
capture outputs generated by different camera units in a combined
temporal-spatial domain is feasible. Please refer to FIG. 8 again.
In an alternative design, the controller 504 may be adequately
modified to perform the second image capture method shown in FIG. 9
to select the image capture output S_OUT from multiple image
capture outputs S_OUT.sub.L and S_OUT.sub.R, and then perform the
first image capture method upon images included in the selected
image capture output S_OUT to obtain a captured image for the
automatic shot mode. In other words, the modified controller 504 is
configured to treat the images included in the image capture output
S_OUT selected based on the second image capture method as the
aforementioned consecutive preview images to be processed by the
first image capture method. This also falls within the scope of the
present invention.
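This combined temporal-spatial scheme might be sketched as follows; the callback names and the blur threshold are illustrative assumptions:

```python
def temporal_spatial_capture(view_sequences, view_quality_index,
                             frame_blur_degree, blur_threshold=0.1):
    """First apply the spatial-domain selection of FIG. 9 (pick the
    view sequence with the largest quality metric index), then apply
    the temporal-domain check of FIG. 3/FIG. 4 to its frames,
    treating them as the consecutive preview images: return the
    first frame whose blur degree is below the threshold."""
    best_view = max(view_sequences, key=view_quality_index)
    for frame in best_view:
        if frame_blur_degree(frame) < blur_threshold:
            return frame
    return None  # no sufficiently stable frame found
```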
[0079] Those skilled in the art will readily observe that numerous
modifications and alterations of the device and method may be made
while retaining the teachings of the invention. Accordingly, the
above disclosure should be construed as limited only by the metes
and bounds of the appended claims.
* * * * *