U.S. patent application number 12/078632 was filed with the patent office on 2008-04-02 and published on 2008-11-20 as publication number 20080284900 for a digital camera.
This patent application is currently assigned to NIKON CORPORATION. Invention is credited to Koichi Abe.
Application Number: 12/078632
Publication Number: 20080284900
Family ID: 39769597
Publication Date: 2008-11-20

United States Patent Application 20080284900
Kind Code: A1
Abe; Koichi
November 20, 2008
Digital camera
Abstract
A digital camera includes: an imaging unit that receives and
images light from a subject transmitted through a photographing optical
system; a recognition unit that recognizes a feature region of the
subject using an image obtained by imaging with the imaging unit; a
detection unit that detects a size of the feature region
recognized with the recognition unit; and a control unit that
predicts a distance to the subject after a predetermined period of
time according to the size of the feature region, and controls the
photographing optical system so as to focus on the subject.
Inventors: Abe; Koichi (Setagaya-Ku, JP)
Correspondence Address: OLIFF & BERRIDGE, PLC, P.O. Box 320850, Alexandria, VA 22320-4850, US
Assignee: NIKON CORPORATION, Tokyo, JP
Family ID: 39769597
Appl. No.: 12/078632
Filed: April 2, 2008
Current U.S. Class: 348/349; 348/E5.042
Current CPC Class: G03B 13/36 20130101; H04N 5/23212 20130101; H04N 5/232945 20180801; H04N 5/232123 20180801; G03B 3/00 20130101; H04N 5/23296 20130101; H04N 5/23218 20180801
Class at Publication: 348/349; 348/E05.042
International Class: H04N 5/232 20060101 H04N005/232; G03B 13/36 20060101 G03B013/36

Foreign Application Data

Date | Code | Application Number
Apr 4, 2007 | JP | 2007-098136
Apr 1, 2008 | JP | 2008-094974
Claims
1. A digital camera, comprising: an imaging unit that receives and
images light from an object which has passed through a shooting
optical system; a recognition unit that recognizes a feature region
of the object by using an image obtained by imaging by the imaging
unit; a detection unit that detects a size of the feature region
that is recognized by the recognition unit; and a control unit that
predicts a distance to the object after a predetermined time
according to the size of the feature region, and controls the
shooting optical system so as to bring the object into focus.
2. The digital camera according to claim 1, further comprising: a
distance calculation unit that calculates a distance to the object
according to the size of the feature region; and a speed
calculation unit that calculates a moving speed of the object
according to a time change of the distance to the object, wherein
the control unit predicts the distance to the object based on the
distance calculated by the distance calculation unit and the moving
speed of the object calculated by the speed calculation unit.
3. The digital camera according to claim 2, wherein the distance
calculation unit first calculates the distance to the object based
on position information of a lens constituting the shooting optical
system, and thereafter calculates the distance to the object based
on the previously calculated distance and the size of the feature
region.
4. The digital camera according to any of claims 1-3, wherein the
control unit predicts the distance to the object at the time of
imaging based on the time between a shooting execution operation
and imaging by the imaging unit, and controls the shooting optical
system so as to bring the object into focus at the time of imaging
by the imaging unit.
5. The digital camera according to any of claims 1-4, further comprising: a
registration unit that selects the feature region of the object for
predicting the distance to the object, from among one or more
feature regions recognized by the recognition unit, and registers
feature information of the selected feature region of the object;
wherein after the feature information of the feature region of the
object is registered, the recognition unit recognizes the feature
region of the object based on that feature information.
6. The digital camera according to claim 5, further comprising: a
record control unit that stores in a recording medium an image,
which is obtained by imaging by the imaging unit, wherein the
registration unit registers the feature information of the feature
region of the object based on the image stored in the recording
medium.
7. The digital camera according to claim 5 or 6, wherein the
feature information of the feature region is at least one of
position information of a lens constituting the shooting optical
system, the distance to the object, and the size of the feature
region.
8. The digital camera according to claim 4, wherein a shooting
condition is modified in response to one of calculation results of
the distance calculation unit and the speed calculation unit.
9. The digital camera according to claim 8, wherein the shooting
condition is one of a shutter speed and ISO sensitivity.
10. The digital camera according to any of claims 1-9, wherein the
control unit predicts the distance to the object after the
predetermined time based on the sizes of a plurality of the feature
regions existing on a plurality of images time-sequentially obtained
by the imaging unit.
Description
TECHNICAL FIELD
[0001] This invention relates to a digital camera.
BACKGROUND TECHNOLOGY
[0002] As a method of autofocus (AF) of digital cameras, a contrast
detection method is heretofore known. In the contrast detection
method, image signals are obtained by imaging an object by an
imaging element such as a CCD, a component of a predetermined
spatial frequency band is extracted from the image signals
contained within a predetermined AF area within an image and a
focus evaluation value is calculated by integrating its absolute
value. The focus evaluation value is a value that corresponds to
the contrast in the focal point detection area, and the value
increases as the contrast increases. Based on the characteristic
that the contrast of an image becomes higher as a focus lens
assumes a position closer to a focus position, the lens position at
which the focus evaluation value peaks (hereafter referred to as
the peak position) is determined, the peak position is determined
to be the focus position, and the focus lens is driven to this
focus position (Patent Reference 1).
[0003] [Patent Reference 1] Japanese Published Patent Application
2003-315665
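For illustration, the focus evaluation calculation described above can be sketched as follows in Python. The image representation, the simple horizontal-difference high-pass filter, and the exhaustive scan over lens positions are assumptions made for this sketch, not the implementation of Patent Reference 1.

    def focus_evaluation_value(image, af_area):
        """Focus evaluation value: integrated absolute value of a
        high-pass component inside the AF area.

        image:   2-D list of pixel intensities
        af_area: (top, left, height, width) rectangle in the image
        A simple horizontal difference stands in for extracting a
        predetermined spatial frequency band.
        """
        top, left, h, w = af_area
        value = 0
        for y in range(top, top + h):
            row = image[y]
            for x in range(left, left + w - 1):
                value += abs(row[x + 1] - row[x])
        return value

    def find_peak_position(images_by_lens_position, af_area):
        """Scan the images shot at each lens position and return the
        position whose evaluation value peaks (the focus position)."""
        return max(images_by_lens_position,
                   key=lambda p: focus_evaluation_value(
                       images_by_lens_position[p], af_area))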
DISCLOSURE OF THE INVENTION
Problems to be Resolved by the Invention
[0004] However, to detect the peak position of the contrast,
which is the focus position, the focus evaluation value must be
calculated at predetermined intervals while the focus lens is moved
along the optical axis; the focus evaluation values at those points
are then analyzed and the peak position is detected. Therefore, there
has been a problem that bringing an object into focus takes time and
that a moving object cannot be brought into focus.
[0005] An object of this invention is to provide a digital camera
in which an object can be more accurately brought into focus, and
shooting can be performed.
Means of Solving the Problem
[0006] The digital camera according to claim 1 is provided with an
imaging unit that receives and images light from an object which
has passed through a shooting optical system; a recognition unit that
recognizes a feature region of the object by using an image imaged
by the imaging unit; a detection unit that detects a size of the
feature region recognized by the recognition unit; and a control
unit that predicts a distance to the object after a predetermined
time according to the size of the feature region and controls the
shooting optical system so as to bring the object into focus.
[0007] In the digital camera according to claim 2, the digital
camera according to claim 1 is further provided with a distance
calculation unit that calculates a distance to the object according
to the size of the feature region; and a speed calculation unit
that calculates a moving speed of the object according to a time
change of the distance to the object, in which the control unit
predicts the distance to the object based on the distance calculated
by the distance calculation unit and the moving speed of the object
calculated by the speed calculation unit.
[0008] In the digital camera according to claim 3, based on the
digital camera according to claim 2, the distance calculation unit
first calculates the distance to the object based on position
information of a lens constituting the shooting optical system, and
thereafter calculates the distance to the object based on the
previously calculated distance and the size of the feature region.
[0009] In the digital camera according to claim 4, based on the
digital camera according to any of claims 1-3, the control unit
predicts the distance to the object at the time of imaging based on
the time between a shooting execution operation and imaging by the
imaging unit, and controls the shooting optical system so as to
bring the object into focus at the time of imaging by the imaging
unit.
[0010] In the digital camera according to claim 5, the digital
camera according to any of claims 1-4 is further provided with a
registration unit that selects the feature region of the object for
predicting the distance to the object, from the feature regions
recognized by the recognition unit, and registers feature
information of the selected feature region of the object, in which
after feature information of the feature region of the object is
registered, the recognition unit recognizes the feature region of
the object based on the feature information of the feature region
of the object.
[0011] In the digital camera according to claim 6, the digital
camera according to claim 5 is further provided with a record
control unit that stores in a recording medium an image, which has
been obtained by imaging by the imaging unit, in which the
registration unit registers the feature information of the feature
region of the object based on the image stored in the recording
medium.
[0012] In the digital camera according to claim 7, based on the
digital camera according to claim 5 or 6, the feature information
of the feature region is at least one of position information of a
lens constituting the shooting optical system, the distance to the
object, and the size of the feature region.
[0013] In the digital camera according to claim 8, based on the
digital camera according to claim 4, a shooting condition is
modified in response to one of calculation results of the distance
calculation unit and the speed calculation unit.
[0014] In the digital camera according to claim 9, based on the
digital camera according to claim 8, the shooting condition is one
of a shutter speed and ISO sensitivity.
[0015] In the digital camera according to claim 10, based on the
digital camera according to any of claims 1-9, the control unit
predicts the distance to the object after the predetermined time
based on the sizes of a plurality of the feature regions existing
on a plurality of images time-sequentially obtained by the imaging
unit.
EFFECTS OF THE INVENTION
[0016] According to this invention, a digital camera is provided in
which an object can be more accurately brought into focus, and
shooting can be performed.
BEST MODE TO IMPLEMENT THE INVENTION
First Embodiment
[0017] The first embodiment of the present invention is described
hereinafter.
[0018] FIG. 1 is a block diagram that shows an electrical
configuration of a digital camera 1 according to the
embodiment.
A lens 2 includes a focus lens 2a and a zoom lens 2b, and
constitutes a shooting optical system. The focus lens 2a is a lens
for adjusting focus on an object, and is moved in the optical axis
direction by a focus lens drive unit 3. The zoom lens 2b is a lens
for modifying a focal length of the lens 2, and is moved in the
optical axis direction by a zoom lens drive unit 4. Each of the
focus lens drive unit 3 and the zoom lens drive unit 4 is composed
of, for example, a stepping motor and is controlled based on an
instruction from a control unit 5. A focus lens position detection
unit 6 detects the position of the focus lens 2a on the optical axis
and sends a detection signal to the control unit 5. A zoom lens
position detection unit 7 detects the position of the zoom lens 2b
on the optical axis and sends its detection signal to the control
unit 5.
[0020] Light from the object is formed into an image on an imaging
element 8 by the lens 2. The imaging element 8, which is a
solid-state imaging element such as a CCD or a CMOS sensor,
photoelectrically converts the object image into an electrical
signal and outputs the resulting imaging signal to an analog signal
processing unit 9. The imaging signal, an analog signal input to the
analog signal processing unit 9, is subjected to processing such as
correlated double sampling (CDS) and is input to an analog-digital
converter (ADC) 10. The imaging signal is then converted from an
analog signal to a digital signal by the ADC 10 and is stored in a
memory 11. The memory 11 includes a buffer memory in which the
imaging signal is temporarily stored, a built-in memory in which
already shot image data is recorded, etc. The image data stored in
the memory 11 is sent to a digital signal processing unit 13 through
a bus 12. The digital signal processing unit 13, which is, for
example, a digital signal processor (DSP), performs known image
processing such as white balance processing, interpolation
processing, and gamma correction on the image data, and then stores
the image data in the memory 11 again.
[0021] The processed image data is subjected to known compression
processing such as JPEG by a compression/expansion unit 14 and is
recorded in a memory card 15, which is detachable from the digital
camera 1. When an image recorded in the memory card 15 is reproduced
and displayed, the image is read into the memory 11, the digital
image is converted to an analog imaging signal by a digital-analog
converter (DAC) 16, and the image is displayed on a display unit 17.
The display unit 17, which is, for example, a liquid crystal
display, reproduces and displays images recorded in the memory card
15, and, at the time of shooting, displays the image imaged by the
imaging element 8 as a through image. The image data can be recorded
in the memory card 15 or in the built-in memory within the memory
11. However, when the built-in memory is used, the memory card 15 is
not used.
[0022] The control unit 5 is connected to an operation unit 18. The
control unit 5 includes, for example, a CPU, and controls the
operation of the digital camera 1 in response to signals input from
the operation unit 18. The operation unit 18 includes a power
source button 19, a release button 20, a menu button 21, an arrow
key 22, an enter button 23, an AF mode selection switch 24,
etc.
[0023] The power source button 19 is a button for switching the
digital camera 1 to be powered on (ON) and off (OFF).
[0024] The release button 20 is a button that a user presses down
in order to issue an instruction on image shooting. Pressing the
release button 20 halfway down causes a halfway-press switch SW1 to
be powered on (ON) and causes an ON signal to be output, while not
pressing the release button 20 halfway down causes the
halfway-press switch SW1 to be powered off (OFF) and causes an OFF
signal to be output. The signal output by the halfway-press switch
SW1 is input to the control unit 5. Pressing the release button 20
down fully (pressing the button down deeper than the halfway-press
operation) causes a fully-press switch SW2 to be powered on (ON)
and causes the ON signal to be output, while not pressing the
release button 20 down fully causes the fully-press switch SW2 to
be powered off (OFF) and causes the OFF signal to be output. The
signal output by the fully-press switch SW2 is input to the control
unit 5.
[0025] The menu button 21 is a button for displaying a menu
corresponding to a mode selected by the user.
[0026] The arrow key 22 is a button for selecting an operation
desired by the user, such as moving a cursor in vertical and
horizontal directions for selecting items to be displayed on the
display unit 17.
[0027] The enter button 23 is a button for determining the
operation selected with the arrow key 22.
[0028] The AF mode selection switch 24 is a switch for selecting
whether an image is shot in a predictive AF mode. The predictive AF
mode, which is the shooting mode entered when the AF mode selection
switch 24 is powered on (ON), performs the operation shown in FIG.
3; FIG. 3 and the predictive AF processing are described in detail
hereinafter. When the AF mode selection switch 24 is powered on
(ON), the mode is switched to the predictive AF mode. When the AF
mode selection switch 24 is powered off (OFF), the mode is switched
to a conventional contrast AF mode, for example, as described in the
Background Technology section.
[0029] A feature region recognition calculation unit 25 recognizes
a feature region from the image data. If the recognition is
successful, coordinates indicating the position and the size of the
recognized feature region are output to the control unit 5. Once the
coordinates indicating the position and the size of the feature
region are input, the control unit 5 creates, based on them, an
image in which a frame indicating the size of the feature region (a
feature region mark) is superimposed on the image used for
displaying the through image, and this image is displayed on the
display unit 17. The calculation for recognizing the feature region
can also be performed by the control unit 5.
[0030] The digital camera 1 of the first embodiment of this
invention recognizes the feature region from the image shot by the
imaging element 8, continuously detects the size of the feature
region specified by the user, and calculates the movement of the
object from a change in size of the feature region. Then, the
digital camera 1, based on the result, predicts the distance to the
object at the time of imaging, and controls the drive position of
the focus lens 2a so that the object is brought into focus.
[0031] A method for recognizing a feature region is described
hereinafter. For example, when the object is a person, the face of
the person is recognized as a feature region. The feature region
recognition calculation unit 25 detects whether the face of a person
exists in the through image displayed on the display unit 17.
Methods for detecting a face of a person include, for example,
detecting flesh color from an image (Japanese Published Patent
Application 2004-037733) and extracting a candidate region
corresponding to a face shape and determining the face region from
within that region (Japanese Published Patent Application 8-063597).
Furthermore, methods for recognizing a person include, for example,
identifying the person by comparing an image, from which feature
points such as the eyes, nose, and mouth are extracted, to a
dictionary image of each person registered in advance (Japanese
Published Patent Application 9-251534). If the recognition of the
face of the person by such known methods is successful, coordinates
indicating the position and the size of the recognized face region
are output to the control unit 5.
[0032] When a plurality of persons are recognized, as described
hereinafter, the person who is desired to be brought into focus is
specified from among the plurality of persons. The control unit 5
controls the display unit 17 according to the coordinates input from
the feature region recognition calculation unit 25 and displays
frames indicating the face regions (face region marks) superimposed
on the through image as illustrated in FIG. 2. If only one face is
detected by the feature region recognition calculation unit 25, a
single feature region mark is displayed on the face region. If a
plurality of faces (three faces in FIG. 2) are detected by the
feature region recognition calculation unit 25 as illustrated in
FIG. 2, the respective feature region marks M1 to M3 are displayed
corresponding to the plurality of face regions.
[0033] A registration method for a feature region and a prediction
method for an object distance are described hereinafter.
[0034] FIG. 3 is a flowchart showing a shooting procedure in a
predictive AF mode. Processing shown in FIG. 3 is performed by the
control unit 5. In this embodiment, a case is explained in which a
plurality of persons exist within a through image and the person
who is desired to be continuously brought into focus is selected
from among them and is shot.
[0035] If the AF mode selection switch 24 is switched ON with the
power source button 19 of the digital camera 1 being switched ON, a
predictive AF program is executed, which performs an operation
shown in FIG. 3.
[0036] First, steps S101 to S105 are steps relating to recognition
of a feature region.
[0037] In step S101, when the AF mode selection switch 24 is
switched ON, a through image is displayed on the display unit 17.
The image that is repeatedly shot by the imaging element 8 is
consecutively updated and displayed on the display unit 17 as a
through image.
[0038] In step S102, when the menu button 21 is pressed down in a
state in which the through image is displayed on the display unit
17, the control unit 5 sends an instruction to the display unit 17,
which superimposes a screen for selecting the type of object to be
recognized over the through image and displays it. As object types,
things that move on their own, such as a person, a soccer ball, and
a car, are displayed on the selection screen. The user selects the
type of the object on the selection screen by operating the arrow
key 22 and confirms it by pressing down the enter button 23. If the
enter button 23 is not ON, the determination of step S102 is
repeated until the enter button 23 is turned ON. If the enter button
23 is turned ON, the operation proceeds to step S103. Since the
object of this embodiment is a person, a case is described in which
the person is selected as the type of the object.
[0039] When the type of the object to be recognized is selected, in
step S103, the control unit 5 sends the feature region recognition
calculation unit 25 an instruction for initiating the feature
region recognition processing with respect to the through image.
Here, since a person was selected as the type of the object in step
S102, face region recognition processing is initiated in which the
face of a person is recognized as a feature region.
[0040] In step S104, the control unit 5 determines whether the face
region has been recognized, based on the recognition result of the
face region received at that time from the feature region
recognition calculation unit 25. If the face region is not
recognized for some reason (for example, a face region does not
exist in the through image, or the face region exists in the through
image but is too small), the operation returns to step S103 and
performs the face region recognition processing again. If the face
region is recognized, the operation proceeds to step S105, the
control unit 5 sends an instruction to the display unit 17, and the
face region marks M1 to M3 are superimposed on the through image and
displayed on the display unit 17 as illustrated in FIG. 2.
[0041] At this time, a cross-shaped mark M4 for selecting a face
region is displayed on the display unit 17, as hereinafter
described in detail. The cross-shaped mark M4 is displayed only
within the face region mark M1 that is the closest to the center of
the through image to be displayed on the display unit 17. In other
words, if only one face region mark exists, the cross-shaped mark
is displayed within the face region mark. If a plurality of face
region marks exist, the cross-shaped mark is displayed only within
the face region mark that is the closest to the center of the
through image to be displayed on the display unit 17.
[0042] The following steps S106 to S109 are steps relating to
registration of a face region.
[0043] In step S106, a face region of an object to be shot is
selected. In a state in which a plurality of face region marks M1 to
M3 are displayed on the through image as illustrated in FIG. 2, the
user operates the arrow key 22 and selects the face region mark that
the user desires to register from among the face region marks M1 to
M3. The cross-shaped mark M4 indicates the face region mark that is
currently selected (FIG. 2 shows the face region mark M2 selected).
If the user operates the arrow key 22 in the vertical and horizontal
directions, the cross-shaped mark M4 jumps from the face region mark
where it is displayed to another face region mark. For example, in a
state in which the face region mark M2 is selected as illustrated in
FIG. 2, if the user presses down the left portion of the arrow key
22, the cross-shaped mark M4 jumps from the face region mark M2 to
the face region mark M1.
[0044] In a state in which the user has matched the cross-shaped
mark M4 with the face region mark of the person to be shot, the
selection is confirmed by pressing down the enter button 23. Once
the face region mark is selected, the feature region recognition
calculation unit 25 extracts feature points, such as the eyes, nose,
and mouth, from the selected face region. An adjacent region
including these feature points (a feature point adjacent region
including an eye region, a nose region, a mouth region, etc.) is
registered in the memory 11 as a template. Once the face region mark
is selected, the through image is displayed on the display unit 17.
Then, the feature region recognition calculation unit 25 extracts
the feature point adjacent region from each face region recognized
within the through image. The control unit 5 compares the feature
point adjacent region extracted from the through image to the
template registered in the memory 11, that is, calculates their
similarity.
[0045] Based on the similarity calculation result, the control unit
5 instructs the display unit 17 to display the face region mark on a
feature point adjacent region whose similarity to the template is
determined to be high, and instructs it to cancel display of the
face region mark on a feature point adjacent region whose similarity
to the template is determined to be low.
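As a rough illustration of this similarity calculation, the following sketch scores a feature point adjacent region against the registered template and decides whether the face region mark should remain displayed. The sum-of-absolute-differences measure and the threshold value are assumptions for this sketch; the patent does not disclose the actual similarity metric.

    def region_similarity(region, template):
        """Similarity between a feature point adjacent region and the
        registered template: 1.0 for identical patches, lower otherwise.
        Both arguments are equal-sized 2-D lists of intensities (0-255).
        """
        total = 0
        count = 0
        for region_row, template_row in zip(region, template):
            for a, b in zip(region_row, template_row):
                total += abs(a - b)
                count += 1
        return 1.0 - total / (255.0 * count)

    SIMILARITY_THRESHOLD = 0.8  # illustrative value

    def should_display_mark(region, template):
        """Keep the face region mark only for regions similar to the template."""
        return region_similarity(region, template) >= SIMILARITY_THRESHOLD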
[0046] Therefore, after the face region mark is selected, among the
face regions within the through image, a face region mark is
displayed only on the face region that matches the selected face
region, and no face region mark is displayed on the other face
regions. The method for selecting a feature region is not limited to
this. For example, if the display unit 17 has a touch panel on its
surface, the face region mark can be selected by pressing the
desired mark with a finger, etc., instead of the user operating the
arrow key 22. If the face region is not selected in step S106, the
operation returns to step S105.
[0047] If the face region is selected, the operation proceeds to
step S107. In step S107, when the ON signal of the halfway-press
switch SW1 is input to the control unit 5 by halfway-pressing the
release button 20, AF processing is performed with respect to the
face region selected in step S106. This AF processing is
conventional contrast AF as described in the Background Technology
section. Once the face region is brought into focus, in step S108 it
is determined whether the enter button 23 is pressed down in a state
in which the face region is in focus. If the enter button 23 is not
pressed, the operation returns to step S107 and performs the AF
processing again. If the enter button 23 is pressed, the operation
proceeds to step S109.
[0048] In step S109, the control unit 5 registers in the memory 11
information related to the object determined in step S108. The
information related to the object refers to the position information
of the lens 2 at the time of determining the face region in step
S108, the distance (object distance) to the object (face region)
calculated based on that position information, and the size of the
face region mark. The position information of the lens 2 refers to
the positions of the focus lens 2a and the zoom lens 2b on the
optical axis and is obtained by the focus lens position detection
unit 6 and the zoom lens position detection unit 7, whose detection
signals are output to the control unit 5. Once the detection signals
are input, the control unit 5 calculates the object distance based
on them. The size of the face region is the length of the vertical
side or the horizontal side of the rectangular face region mark, or
the combination of the two. Registration thus fixes the relationship
between the object distance and the size of the face region mark at
that moment. Upon completion of the registration of the information
related to the object, a through image is displayed on the display
unit 17.
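The registered information can be pictured as a small record like the following sketch; the field names and example values are illustrative assumptions, chosen only to mirror the items listed above.

    from dataclasses import dataclass

    @dataclass
    class RegisteredObjectInfo:
        """Information related to the object, as registered in step S109."""
        focus_lens_position: float   # from focus lens position detection unit 6
        zoom_lens_position: float    # from zoom lens position detection unit 7
        object_distance_m: float     # calculated from the lens position information
        mark_height_px: int          # vertical side of the face region mark
        mark_width_px: int           # horizontal side of the face region mark

    reference = RegisteredObjectInfo(120.0, 35.0, 5.0, 80, 64)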
[0049] The following steps S110 to S120 are steps relating to
shooting.
[0050] In step S110, it is determined whether the halfway-press
switch SW1 of the release button 20 is ON. If the halfway-press
switch SW1 of the release button 20 is OFF, the determination of
step S110 is repeated until the halfway-press switch SW1 is turned
ON. If the halfway-press switch SW1 of the release button 20 is ON,
the operation proceeds to step S111.
[0051] When the halfway-press switch SW1 of the release button 20
is ON, in step S111, a stop value is set so that an undepicted stop
becomes the smallest or close to the smallest; in other words, pan
focus is set. This is done because the face region of the moving
object must continue to be recognized during the object distance
calculation processing of the later-described step S116; by
deepening the depth of field, the moving object, particularly the
face region, can be recognized over a wide range in the optical axis
direction. Here, the focus lens 2a is driven to the position
corresponding to the hyperfocal distance. The hyperfocal distance is
the shortest object distance among the object distances included in
the depth of field at the time of pan focus shooting. The depth of
field may also be set according to the kind of object the user
intends to shoot, and the focus lens 2a may be driven accordingly.
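For reference, the hyperfocal distance mentioned above can be computed with the standard photographic formula H = f^2/(N*c) + f, as in this sketch; the formula is general photographic practice rather than text from the patent, and the numeric values are illustrative.

    def hyperfocal_distance_mm(focal_length_mm, f_number, coc_mm):
        """Hyperfocal distance H = f^2 / (N * c) + f."""
        f = focal_length_mm
        return f * f / (f_number * coc_mm) + f

    # Illustrative: a 10 mm lens stopped down to f/8, with a 0.005 mm
    # circle of confusion, gives H = 2510 mm; focusing there keeps
    # everything from about H/2 (~1.26 m) to infinity acceptably sharp.
    print(hyperfocal_distance_mm(10.0, 8.0, 0.005))  # 2510.0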
[0052] In step S112, it is determined whether the face region
registered in step S109 exists in the through image. If the
registered face region does not exist within the through image, the
operation proceeds to step S113. In step S113, the control unit 5
releases the pan focus setting by resetting the stop value that was
set in step S111, and sets the stop value so as to obtain an
appropriate exposure for the object existing within the through
image. In step S114, the predictive AF mode is switched to a normal
AF mode, for example, the conventional contrast AF described in the
Background Technology section. For example, if a landscape such as a
mountain is being displayed as the through image, the focus lens 2a
is driven so that focus is set at infinity. In step S115, it is
determined whether the fully-press switch SW2 of the release button
20 is ON.
[0053] If the fully-press switch SW2 is OFF, the operation returns
to step S111, and the stop value is set so that an undepicted stop
becomes the smallest or close to the smallest. If the fully-press
switch SW2 is ON, the operation proceeds to step S120.
[0054] Meanwhile, if in step S112 the registered face region exists
within the through image, the face region mark is displayed on the
registered face region, the operation proceeds to step S116, and the
object distance calculation processing begins. The object distance
at this time is calculated by substituting the size of the face
region mark and the focal length of the lens 2 as parameters into a
predetermined arithmetic expression. Alternatively, a table
correlating the size of the face region mark and the focal length of
the lens 2 to the object distance may be created in advance and
stored in the memory 11, and the object distance calculated by
referring to this table.
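As an illustration of such an arithmetic expression, the sketch below assumes a simple pinhole-projection model in which the mark size scales with focal length divided by distance; the patent does not disclose its actual expression or table, so this proportional model and its values are assumptions.

    def object_distance_m(mark_height_px, focal_length_mm,
                          ref_distance_m, ref_height_px, ref_focal_mm):
        """Distance from the face region mark size and the focal length,
        relative to the values stored at registration:
            distance = ref_distance * (ref_height / height) * (f / ref_f)
        """
        return (ref_distance_m * (ref_height_px / mark_height_px)
                               * (focal_length_mm / ref_focal_mm))

    # Registered at 5.0 m with an 80 px mark at a 35 mm focal length; a
    # 100 px mark at the same focal length means the object came closer.
    print(object_distance_m(100, 35.0, 5.0, 80, 35.0))  # 4.0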
[0055] As long as the face region is detected, the face region mark
is displayed so as to track the face region and remain superimposed
on it even if the face region moves. Here, since the focus lens 2a
was driven in step S111 to the position corresponding to the
hyperfocal distance, the focus lens 2a is not driven even while the
halfway-press switch SW1 of the release button 20 is ON. The focus
lens 2a may, however, be driven so as to keep focus in response to
the movement of the object; this processing is described in the
fourth embodiment. After the face region is determined to exist in
step S112, the pan focus setting continues until it is canceled in
step S118.
[0056] In step S116, for each image time-sequentially shot by the
imaging element 8 (for example, at 30 frames per second),
information on the size of the face region mark and the focal length
of the lens 2 is obtained, and the object distance is calculated. If
the focal length of the lens 2 is the same as when the face region
was registered in step S109 and the size of the face region mark
displayed on the through image is smaller than at registration, the
object distance is recognized to be longer than the object distance
at registration.
[0057] Meanwhile, if the size of the face region mark displayed on
the through image is larger than the size at the time of registering
the face region, the object distance is recognized to be shorter
than the object distance at the time of registration. The obtained
object distance is recorded in the memory 11. Object distances for a
plurality of frames are held in the memory 11, and they are
sequentially updated every time an image is shot by the imaging
element 8.
[0058] Additionally, a moving speed of the object is calculated
from the time change of the object distances of the plurality of
frames recorded in the memory 11, and the subsequent object distance
is predicted. FIG. 4 illustrates this with person A as the object.
Suppose the vertical length of the face region mark is "a" and the
calculated object distance is 5 meters at time t = 0 seconds. The
object then moves at a certain speed, and at time t = 5/30 seconds
the vertical length of the face region mark is "b", which is longer
than "a", and the calculated object distance is 4.83 meters. If the
focal length of the lens 2 has not changed during this period, the
moving speed of the object is about 1 meter per second. Thus, the
object can be predicted to be at an object distance of 4.80 meters
at t = 6/30 seconds. The calculation of the object distance is
repeated until the release button 20 is pressed down fully.
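The speed calculation and the linear prediction just described can be sketched as follows, reusing the FIG. 4 numbers for person A; the function names are illustrative.

    def moving_speed(d0, t0, d1, t1):
        """Moving speed from the time change of the object distance;
        negative values mean the object is approaching the camera."""
        return (d1 - d0) / (t1 - t0)

    def predict_distance(d, t, speed, t_future):
        """Linear prediction of the object distance at a future time."""
        return d + speed * (t_future - t)

    # Person A in FIG. 4: 5.0 m at t = 0, 4.83 m at t = 5/30 s.
    speed = moving_speed(5.0, 0.0, 4.83, 5 / 30)          # about -1 m/s
    print(predict_distance(4.83, 5 / 30, speed, 6 / 30))  # about 4.80 m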
[0059] In step S117, it is determined whether the fully-press
switch SW2 of the release button 20 is ON. If the fully-press
switch SW2 of the release button 20 is OFF, the operation returns
to step S112 and it is again determined whether the registered face
region exists within the through image. If the fully-press switch
SW2 of the release button 20 is ON, the operation proceeds to step
S118.
[0060] In step S118, the pan focus setting is released by resetting
the stop value that has been set in step S111, and the stop value
is set so as to have an appropriate exposure with respect to the
object.
[0061] In step S119, the predictive AF processing is performed with
respect to the object. In a camera with an AF function, the time
difference between the release button 20 being fully pressed down
and actual shooting being performed (hereafter referred to as the
release time lag) may become a problem. The focus of a shot image
can be shifted because the focus position with respect to the object
changes during the release time lag, especially when the object
moves. Here, the focus position with respect to the object after the
release time lag is predicted from the moving speed of the object
according to the result of the object distance calculation
processing of step S116. The focus lens 2a is moved so as to bring
the predicted focus position into focus, so the object is optimally
in focus at the moment of shooting. The release time lag is 0.01
second in this embodiment. The position of the object 0.01 second
after the release button 20 is pressed down fully is predicted, and
shooting is performed after that position is brought into focus.
[0062] FIGS. 5(a) and 5(b) are diagrams showing examples of the
predictive AF processing of step S119. Here, the movement of an
object, a person B, is shown.
[0063] FIG. 5(a) is a diagram showing a time change in a size of a
face region mark of a person B. Here, the size of the face region
mark is a vertical side length of the face region mark. The
horizontal axis represents numbers for each image time-sequentially
shot by the imaging element 8 (through image I1 to through image
I7). These images are taken at 30 frames per second. That is, one
scale on the horizontal axis indicates 1/30 seconds. The vertical
axis indicates the size of the face region mark.
[0064] FIG. 5(b) is a diagram showing a time change of an object
distance of a person B. In the same manner as in FIG. 5(a), the
horizontal axis represents numbers for each image time-sequentially
shot by the imaging element 8, and one scale on the horizontal axis
indicates 1/30 seconds. The vertical axis indicates an object
distance of the person B. As described above, the object distance
is calculated according to the size of the face region mark and the
focal length of the lens 2.
[0065] The movement of the person B is as follows. The size of the
face region mark is "a" at the time of the through image I1 (FIG.
5(a)), and the object distance of the person B is 5.0 meters (FIG.
5(b)). Likewise, in the through image I2 and the through image I3,
the size of the face region mark remains "a" (FIG. 5(a)), so the
object distance of the person B remains 5.0 meters (FIG. 5(b)). In
the through image I4, the size of the face region mark changes to
"b", which is larger than "a" (FIG. 5(a)), so the object distance of
the person B becomes shorter, 4.9 meters (FIG. 5(b)). In the through
image I5, the through image I6, and the through image I7, the size
of the face region mark increases to c, d, and e, respectively, as
time elapses (FIG. 5(a)), and the object distance of the person B is
4.6 meters at the time of the through image I7. Consequently, the
person B is determined to be moving closer to the camera at a speed
of 3.0 meters per second.
[0066] Suppose that the release button 20 is pressed down fully at
the time of the through image I7. In response to the fully-press
signal, the control unit 5 calculates the position of the person B
at the time of imaging, that is, after the release time lag of 0.01
second, based on the object distance of the person B and the moving
speed. The position of the person B at the time of imaging can
therefore be predicted to be 4.6 m + (-3.0 m/second) × 0.01 second =
4.57 meters. Based on this calculation result, the control unit 5
sends the focus lens drive unit 3 an instruction for driving the
focus lens 2a so that the position at an object distance of 4.57
meters is brought into focus. The focus lens drive unit 3, having
received the instruction from the control unit 5, drives the focus
lens 2a.
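The release time lag compensation of this worked example reduces to one line of arithmetic, sketched below with the FIG. 5 numbers; the negative speed encodes that person B is approaching the camera.

    RELEASE_TIME_LAG_S = 0.01  # release time lag of this embodiment

    def lag_compensated_distance(distance_m, speed_m_per_s):
        """Object distance predicted for the moment of actual exposure."""
        return distance_m + speed_m_per_s * RELEASE_TIME_LAG_S

    # Person B in FIG. 5: 4.6 m away and approaching at 3.0 m/s.
    print(lag_compensated_distance(4.6, -3.0))  # 4.57, as in the text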
[0067] In step S120, shooting is performed by the imaging element
8. Here, the exposure conditions of the digital camera 1 may also be
modified according to the movement of the object; for example, if
the moving speed of the object is fast, the shutter speed may be
made faster, or the ISO sensitivity may be increased, as sketched
below.
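A minimal sketch of such an exposure adjustment follows; the 2 m/s threshold and the one-stop changes are illustrative assumptions, since the patent leaves the concrete rule open.

    def adjust_exposure(speed_m_per_s, shutter_s, iso):
        """Shorten the shutter time for a fast object and raise the ISO
        sensitivity one stop to keep the exposure unchanged."""
        if abs(speed_m_per_s) > 2.0:  # illustrative threshold
            shutter_s /= 2
            iso *= 2
        return shutter_s, iso

    print(adjust_exposure(-3.0, 1 / 125, 200))  # (0.004, 400), i.e. 1/250 s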
[0068] According to the above-described embodiment, the following
operational effects can be obtained.
[0069] For each image time-sequentially shot by the imaging element
8, the size of the feature region is calculated and, from it and the
focal length of the lens 2, the distance to the object is
calculated; the distance to the object at the time of imaging is
then predicted, and the focus lens 2a is driven so as to bring the
object into focus. Because of this, the object can be more
accurately brought into focus, and shooting can be performed.
[0070] The feature region selected by the user from among one or
more feature regions recognized from the image is registered. This
causes the predictive AF to be performed with respect to the object
having the registered feature region even if a plurality of objects
exist at the time of imaging. Therefore, the object having the
registered feature region can be constantly brought into focus
without bringing other, unregistered objects into focus.
[0071] When the through image is displayed at the time of imaging,
the stop value is set so that the diameter of an undepicted stop
becomes the smallest or close to the smallest, and the focus lens 2a
is driven to the position corresponding to the hyperfocal distance.
This deepens the depth of field, so that in-focus image data can be
obtained over a wide range even for a moving object (feature
region). Additionally, since the lens 2 does not need to be driven,
the power consumption of the digital camera 1 can be decreased.
[0072] The focus lens 2a is fixed after being driven to the
position corresponding to the hyperfocal distance, and is driven to
the in-focus position for the object only at the time of imaging.
This enables the lens 2 to be driven efficiently to the focus
position, and the speed of the AF processing can be increased.
[0073] The exposure conditions of the digital camera 1 are modified
according to the movement of the object at the time of imaging.
Because of this, shooting can be performed under appropriate
exposure conditions with respect to the object.
[0074] The embodiment can also be modified as follows.
[0075] In step S102 of FIG. 3, an example in which a person is
selected as the type of the object was explained. Here, the example
of a soccer ball is described, along with a method for recognizing
and determining a soccer ball as a feature region. The other parts
are as described above in the embodiment.
[0076] Methods for recognizing a soccer ball include extracting a
round-shaped candidate region corresponding to the shape of the
soccer ball from the image data and determining the soccer ball from
within that region, detecting color from the image data, etc. The
soccer ball may also be recognized by combining these methods.
[0077] A method for recognizing a soccer ball by detecting color
from image data is described here. Supposing that the soccer ball is
formed of two colors, black and white, the soccer ball is recognized
by extracting a region formed of those two colors from the image
data. Additionally, the area ratio of the black and white regions
forming the soccer ball does not differ significantly regardless of
the angle from which it is seen. Therefore, the black-to-white
region ratio is registered in advance, and a region matching the
pre-registered ratio is extracted.
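A minimal sketch of this region ratio test follows; the intensity thresholds, the registered ratio, and the tolerance are illustrative assumptions, since the patent does not give concrete values.

    def black_white_ratio(pixels, dark=60, bright=200):
        """Ratio of black to white pixels in a candidate region.
        pixels is a flat list of grayscale intensities (0-255)."""
        black = sum(1 for p in pixels if p <= dark)
        white = sum(1 for p in pixels if p >= bright)
        return black / white if white else float("inf")

    REGISTERED_RATIO = 0.35  # pre-registered black:white ratio
    TOLERANCE = 0.10         # acceptable deviation

    def looks_like_soccer_ball(pixels):
        """Accept a region whose ratio matches the registered ratio."""
        return abs(black_white_ratio(pixels) - REGISTERED_RATIO) <= TOLERANCE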
[0078] Furthermore, the user may set the shape of the feature
region. In step S102 of FIG. 3, once a soccer ball is selected as
the type of object, a selection tool corresponding to the shape of
the soccer ball, for example, a round-shaped frame, is displayed
overlapping the through image. The size of the selection tool can be
adjusted by operating the arrow key 22. The user adjusts the size of
the selection tool to be substantially the same as the size of the
soccer ball displayed on the through image. After the size of the
selection tool is adjusted, pressing down the enter button 23 causes
the size to be fixed. The selection tool whose size is fixed can be
moved in the vertical and horizontal directions by operating the
arrow key 22, and the user superimposes it on the soccer ball
displayed on the through image. Once the position of the selection
tool is adjusted, pressing down the enter button 23 causes the
soccer ball to be registered as a feature region.
[0079] According to the above-mentioned modified example, the
following operational effects can be obtained.
[0080] The method for recognizing a soccer ball from an image
includes registering the color region ratio specific to the feature
region in advance and detecting the feature region corresponding to
that ratio, in addition to extracting the round-shaped region and
detecting particular colors from the image data. This improves the
accuracy of recognizing the feature region from the image data.
[0081] The selection tool according to the type of the selected
object is displayed, and the user can adjust the selection tool
according to the size and the position of the feature region. This
enables the feature region to be designated even for an object whose
feature region is difficult to recognize from the image data.
Second Embodiment
[0082] The second embodiment of the present invention is described
hereinafter.
[0083] In the first embodiment, the feature region desired to be
brought into focus is designated from among the feature regions
recognized within the through image. In this embodiment, the feature
region desired to be brought into focus can be set in advance from
the image data saved in the memory card 15, etc.
[0084] A basic configuration of the digital camera of the second
embodiment is the same as that of the first embodiment. Portions
different from the first embodiment are described hereinafter. FIG.
6 is a flowchart showing a processing procedure for setting a
feature region based on image data saved in the memory card 15,
etc. The processing shown in FIG. 6 is executed by the control unit
5, etc.
[0085] First, while the power source button 19 of the digital
camera 1 is ON, if a setup mode is selected by operating an
undepicted mode dial, a setup menu screen is displayed on the
display unit 17. On this setup menu screen, various menu items
related to shooting and reproduction are displayed, among them an
item "predictive AF" for performing the various settings of the
predictive AF mode. Operating the arrow key 22 causes the item
"predictive AF" to be selected, and pressing down the enter key 23
confirms it. Then, a predictive AF menu screen is displayed on the
display unit 17.
[0086] On this predictive AF menu screen, various menu items
related to the predictive AF mode are displayed, among them an item
"feature region setting" for designating a feature region when
shooting is performed in the predictive AF mode. Operating the arrow
key 22 causes the item "feature region setting" to be selected, and
pressing down the enter key 23 confirms it. Then, the operation
proceeds to step S201, and a feature region setting screen is
displayed on the display unit 17.
[0087] A list of the images saved in the memory card 15 is
displayed on the feature region setting screen. As a method of
displaying a list, thumbnail images of the images saved in the
memory card 15 may be displayed. If the memory card 15 is not used,
thumbnail images of the images saved in the built-in memory within
the memory 11 may be displayed.
[0088] In step S202, it is determined whether a thumbnail image
including the feature region which is desired to be brought into
focus has been determined from among the thumbnail images displayed
on the display unit 17. Operating the arrow key 22 causes a
thumbnail image to be selected, and pressing down the enter key 23
determines it. If no thumbnail image is determined, the
determination of step S202 is repeated until one is. If a thumbnail
image is determined, the operation proceeds to step S203.
[0089] Once the thumbnail image is determined, in step S203, the
image corresponding to the thumbnail image is reproduced and
displayed on the display unit 17. At this time, the screen for
selecting the type of the object to be recognized is superimposed on
the displayed image. Once the type of the object is selected, the
selection tool corresponding to the shape of the object is
superimposed on the displayed image. For example, if a person is
selected as the type of the object, a vertically long elliptical
selection tool is displayed. Then, operating the arrow key 22 and
the enter key 23 adjusts the size and the position of the selection
tool for setting the feature region. Since the details of the method
of setting the selection tool are the same as in the modified
example of the first embodiment, description is omitted here.
[0090] In step S204, it is determined whether an instruction for
closing the feature region setting screen exists. If the feature
region setting screen is not closed, the operation returns to step
S201, and the feature region setting screen is again displayed on
the display unit 17. If, for example, the user operates the
operation unit 18 and selects closing of the feature region setting
screen, the screen is closed and the setting of the feature region
is completed. If an object is shot after the feature region has been
set, the operation proceeds to the processing shown in FIG. 7.
[0091] Furthermore, in this embodiment, only one piece of image
data is used when the feature region is set. However, the feature
region can also be set by using a plurality of pieces of image data
of the same object. For example, if the type of the object is a
person, the face becomes the feature region, and a feature region
can also be set from image data including a face facing at an angle,
such as a side view of the face. Furthermore, if the feature region
is set by using a plurality of pieces of image data of the same
object, the plurality of pieces of image data are configured to
become related image data. As a method for relating image data, for
example, there is a method in which the user inputs and saves the
same keyword into each piece of image data.
[0092] Additionally, the number of objects for which a feature
region is set is not limited to one; feature regions of a plurality
of different objects can be set.
[0093] FIG. 7 is a flowchart showing a shooting procedure in the
predictive AF mode when a feature region is already set from image
data.
[0094] If the AF mode selection switch 24 is switched ON with the
power source button 19 of the digital camera 1 being switched ON, in
step S205, it is determined whether only one feature region has been
set from the image data. If so, the operation proceeds to step S208.
If a plurality of feature regions have been set from the image data,
the operation proceeds to step S206, and a list of the set feature
regions is displayed. As a method of displaying the list, the
thumbnail images including the set feature regions may be displayed,
or the keywords registered in the images including the set feature
regions may be displayed.
[0095] In step S207, it is determined whether one feature region is
selected from among the list of the set feature regions. If a
feature region is not selected, determination of step S207 is
repeated until a feature region is selected. If a feature region is
selected, the operation proceeds to step S208.
[0096] In step S208, information related to the set object is
registered in the memory 11. The information related to the object
refers to the position information of the lens 2 at the time of
shooting the image, the distance (object distance) to the object
calculated according to that position information, and the size of
the feature region. Such information is recorded in the images in
Exif format. The size of the feature region is the size of the
selection tool as determined in step S203.
[0097] For example, if the selection tool is elliptical, the size
of the feature region is the length of the major axis (the segment
whose endpoints are the intersections of the ellipse with the
straight line passing through its two foci), the length of the minor
axis (the segment whose endpoints are the intersections of the
ellipse with the straight line perpendicular to the major axis at
the center of the ellipse), or the combination of the two. The
control unit 5 reads the information related to the object from the
image and saves the information in the built-in memory within the
memory 11.
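As a small geometric aside, the axis lengths described above follow directly from the semi-axes of the elliptical selection tool, as in this sketch; the function and values are illustrative.

    def ellipse_axis_lengths(semi_major, semi_minor):
        """Major and minor axis lengths of an elliptical selection tool.
        The major axis lies on the line through the two foci; the minor
        axis is perpendicular to it at the center of the ellipse."""
        a = max(semi_major, semi_minor)
        b = min(semi_major, semi_minor)
        return 2 * a, 2 * b

    # A vertically long tool for a person: semi-axes 50 px and 30 px.
    major, minor = ellipse_axis_lengths(50, 30)
    print(major, minor)  # 100 60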
[0098] Once the information about the object is registered in step
S208, the operation proceeds to step S110 of FIG. 3. Since
subsequent steps are the same as those of the first embodiment,
description is omitted herein.
[0099] According to the above-described embodiment, the following
operational effects can be obtained.
[0100] The feature region which is desired to be brought into focus
can be set from the image data saved in the memory card 15, etc. By
doing so, the user can set the feature region to be brought into
focus in advance, before shooting, and shooting can be performed as
soon as the digital camera 1 is activated.
[0101] The feature region can be set by using a plurality of pieces
of image data of the same object. This improves the accuracy of
recognizing the object.
Third Embodiment
[0102] The third embodiment of the present invention is described
hereinafter.
[0103] In the first embodiment, in response to the operation of the
enter key 23 by the user, the feature region is set which is
desired to be brought into focus, and the object information is
registered. In this embodiment, setting of the feature region and
registration of object information are automatically performed.
[0104] The basic configuration of the digital camera of the third
embodiment is the same as that of the first embodiment. Portions
different from the first embodiment are described hereinafter. A
shooting procedure in the predictive AF mode of the third
embodiment is described with reference to a flowchart of FIG. 8.
The processing shown in the flowchart of FIG. 8 is executed by the
control unit 5, etc.
[0105] In step S301, if the AF mode selection switch 24 is switched
ON, the through image is displayed on the display unit 17. In step
S302, the type of the object is selected. A case is described
hereinafter in which a person is selected as an object type.
[0106] Once the object type to be recognized is selected, in step
S303, the control unit 5 sends the feature region recognition
calculation unit 25 an instruction for initiating the feature
region recognition processing with respect to the through image.
Here, since a person is selected as the object type in step S302,
the face region recognition processing is initiated in which a face
of a person is recognized as a feature region.
[0107] In step S304, the control unit 5 determines whether the face
region is recognized, based on the recognition result of the face
region received at that time from the feature region recognition
calculation unit 25. If the face region is not recognized, the
operation returns to step S303 and again performs the face region
recognition processing. If the face region is recognized, the
operation proceeds to step S305. Here, if a plurality of face
regions are recognized, the largest face region is automatically
selected from among them before the operation proceeds to step S305.
Alternatively, the face region located closest to the center of the
screen may be automatically selected from among the plurality of
face regions.
[0108] In step S305, the face region mark indicating the recognized
face region is superimposed on the through image and displayed on the
display unit 17. The face region mark indicates the face region whose
object information is to be registered. The face region mark is, for
example, a rectangular frame as shown in FIG. 2, and is displayed in
white, for example.
[0109] When the face region to be registered has been set, the
operation proceeds to step S307. In step S307, in response to the ON
signal of the halfway-press switch SW1, the contrast AF processing is
performed on the designated face region. In the following step S308,
it is determined whether the designated face region is in focus. If
so, the operation proceeds to step S309. In step S309, the face
recognition processing is performed again while the object is
accurately in focus, and the object information in this state is
registered in the memory 11. When the designated face region is
brought into focus, the display color of the face region mark is
changed to indicate this; for example, the face region mark displayed
in white changes to green once focus is achieved. Alternatively, the
face region mark may flash after focus is achieved.
[0110] If it is determined in step S308 that the face region is not
in focus, the operation proceeds to step S321, and the user is
informed that the object information cannot be registered. This
warning may be given, for example, by displaying a message on the
display unit 17 or by turning on a warning light.
[0111] Thus, after the recognition of the face region and the
registration of the object information have been performed
automatically, the operation proceeds to step S311. Since the
processing of steps S311 to S320 is the same as that of steps S111 to
S120 of the first embodiment, the description is omitted.
[0112] In the above-mentioned third embodiment, setting of the
feature region and registration of the object information can be
easily performed.
Fourth Embodiment
[0113] The fourth embodiment of the present invention is described
hereinafter.
[0114] In the above-mentioned first embodiment, the pan focus setting
is performed while the through image is displayed, in response to the
operation of the halfway-press switch SW1. In the fourth embodiment,
if the selected face region exists within the through image, the
predictive AF processing is performed so that the movement of the
face region is predicted and the face region is kept in focus.
[0115] The basic configuration of the digital camera of the fourth
embodiment is the same as that of the first embodiment. Portions
different from the first embodiment are described hereinafter. A
shooting procedure of the predictive AF mode of the fourth
embodiment is described with reference to a flowchart of FIG. 9.
The processing shown in the flowchart in FIG. 9 is executed by the
control unit 5, etc.
[0116] Processing of steps S401 to S409 is the same as that of
steps S101 to S109 of the above-mentioned first embodiment, so
description is omitted here.
[0117] In step S410, it is determined whether the halfway-press
switch SW1 of the release button 20 is ON. If the halfway-press
switch SW1 of the release button 20 is OFF, the determination of
step S410 is repeated until the halfway-press switch SW1 is turned
ON. If the halfway-press switch SW1 of the release button 20 is ON,
the operation proceeds to step S412.
[0118] In step S412, it is determined whether the face region
registered in step S409 exists within the through image. If the
registered face region does not exist within the through image, the
operation proceeds to step S414, in which conventional contrast AF
processing is performed (see the background sketch after this
paragraph). In step S415, it is determined
whether the fully-press switch SW2 of the release button 20 is ON.
If the fully-press switch SW2 is OFF, the operation returns to step
S412. If the fully-press switch SW2 is ON, the operation proceeds
to step S420.
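For background only, contrast AF is commonly implemented as a search
over focus lens positions for the position that maximizes an image
sharpness (contrast) score. This generic sketch uses a hypothetical
callback and is not the camera's specific routine.

    def contrast_af(positions, sharpness_at):
        # Minimal contrast-AF sketch: evaluate each candidate focus lens
        # position and return the one with the highest sharpness score.
        # `sharpness_at` is a hypothetical callback that moves the lens
        # to a position and returns an image contrast measure there.
        return max(positions, key=sharpness_at)

    # Example with a toy sharpness curve peaking at position 12:
    print(contrast_af(range(0, 25), lambda p: -(p - 12) ** 2))  # -> 12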
[0119] If, in step S412, the registered face region exists within the
through image, the face region mark is displayed on the registered
face region, and the operation proceeds to step S416, where the
object distance calculation processing is performed. The control unit
5 calculates the current object distance from the size of the face
region mark and the focal length of the lens 2. Furthermore, in the
same manner as in the above-mentioned first embodiment, the object
distance after a predetermined time is predicted, roughly as in the
sketch below.
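For illustration, the calculation in step S416 can be modeled with a
pinhole-camera relation, in which the object distance is proportional
to the focal length and inversely proportional to the face size on
the sensor. The assumed real face size and the linear extrapolation
below are hedged assumptions, not the control unit 5's exact
computation.

    def object_distance_mm(focal_length_mm, face_size_on_sensor_mm,
                           real_face_size_mm=200.0):
        # Pinhole model: distance / real size = focal length / image size.
        # real_face_size_mm = 200 mm is an assumed typical face height.
        return focal_length_mm * real_face_size_mm / face_size_on_sensor_mm

    def predict_distance_mm(d_prev_mm, d_now_mm, dt_s, lead_time_s):
        # Linear extrapolation from two successive measurements, as a
        # stand-in for the prediction after the predetermined time.
        speed = (d_now_mm - d_prev_mm) / dt_s  # mm per second
        return d_now_mm + speed * lead_time_s

    # Example: a face that shrank on the sensor has moved away; predict
    # the object distance 0.1 s ahead from two frames 1/30 s apart.
    d1 = object_distance_mm(50.0, 4.0)   # 2500 mm
    d2 = object_distance_mm(50.0, 3.8)   # ~2632 mm
    print(predict_distance_mm(d1, d2, dt_s=1 / 30, lead_time_s=0.1))  # ~3026 mm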
[0120] In the following step S416A, the previous object distance
calculated in a previous cycle and recorded in the memory 11 is
compared with the object distance after the predetermined time
calculated in step S416. For the predetermined time, an appropriate
value is set in advance in consideration of control delay in the
control unit 5. For example, the predetermined time may be set to the
same value as the above-mentioned release time lag.
[0121] In step S416B, if the difference between the previous object
distance and the predicted object distance (="previous object
distance"-"predicted object distance") is determined to be equal to
or greater than a threshold value, the operation proceeds to step
S416C. If the difference is determined to be less than the threshold
value, the operation proceeds to step S417. The threshold value is
set appropriately in advance to a value at which, even if the object
distance changes, the face of the object corresponding to the
registered face region does not blur on the through image. A rate of
change of the object distance may also be used as the threshold. When
the object distance is short, the threshold value may be set smaller
than when the object distance is long. A hedged sketch of this
decision follows.
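A minimal sketch of the step S416B decision, with illustrative
threshold values (the document gives no concrete numbers); the
absolute difference is used here so that movement in either direction
can trigger refocusing.

    def needs_refocus(prev_mm, predicted_mm):
        # Step S416B (sketch): refocus when the predicted change in the
        # object distance is large enough that the registered face would
        # blur on the through image. A smaller threshold is used at short
        # object distances, where the same change produces more blur.
        threshold_mm = 50.0 if prev_mm < 2000.0 else 200.0
        return abs(prev_mm - predicted_mm) >= threshold_mm

    # Example: decide whether to proceed to step S416C or to step S417.
    print(needs_refocus(2500.0, 3026.0))  # True -> step S416C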
[0122] In step S416C, the predictive AF processing is performed on
the object. Here, the focus position of the object after the
predetermined time is predicted based on the object distance after
the predetermined time calculated in step S416, and the focus lens 2a
is moved so as to focus on that predicted position (an illustrative
model is sketched below). In step S416D, the object distance
calculated in step S416 in this cycle is recorded in the memory
11.
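As a purely illustrative model (the actual drive is performed by the
focus lens drive unit 3 under the control unit 5), the thin-lens
equation relates the predicted object distance to the image-side
distance the focus lens 2a must realize:

    def image_distance_mm(focal_length_mm, object_distance_mm):
        # Thin-lens equation: 1/f = 1/do + 1/di  =>  di = f*do / (do - f).
        return (focal_length_mm * object_distance_mm
                / (object_distance_mm - focal_length_mm))

    # Example: with a 50 mm focal length and a predicted object distance
    # of 3026 mm, the image distance the focus lens must realize:
    print(image_distance_mm(50.0, 3026.0))  # ~50.84 mm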
[0123] In step S417, it is determined whether the fully-press
switch SW2 of the release button 20 is ON. If the fully-press
switch SW2 of the release button 20 is OFF, the operation returns
to the step S412, and it is again determined whether the registered
face region exists within the through image. If the fully-press
switch SW2 of the release button 20 is ON, the operation proceeds to
step S418.
[0124] In step S418, the object distance calculation processing is
performed with respect to the registered face region. In the
following step S419, the predictive AF processing is performed with
respect to the object. After that, in step S420, shooting is
performed by the imaging element 8.
[0125] According to the above-mentioned fourth embodiment, the
previous object distance and the predicted object distance are
compared with each other while the through image is displayed. If the
image of the face of the object corresponding to the face region set
on the through image is predicted to blur, the predictive AF
processing is performed on the set face region. Thus, a through image
in which the moving object is accurately in focus can be displayed as
needed.
[0126] The above-mentioned second embodiment may also be combined
with the third or fourth embodiment. Alternatively, the third and
fourth embodiments may also be combined.
BRIEF DESCRIPTION OF THE DRAWINGS
[0127] FIG. 1 is a block diagram showing an electrical
configuration of a digital camera 1 in a first embodiment of the
present invention.
[0128] FIG. 2 is a diagram displaying face region marks with
respect to faces of persons who are objects in the first embodiment
of the present invention.
[0129] FIG. 3 is a flowchart showing a shooting procedure in a
predictive AF mode in the first embodiment of the present
invention.
[0130] FIG. 4 is a diagram showing an example of the relationship
between a time change of an object distance of a plurality of
frames and a subsequent object distance in the first embodiment of
the present invention.
[0131] FIG. 5 is a set of diagrams showing an example of the
relationship between a through image, the size of a face region mark,
and an object distance in the first embodiment of the present
invention.
[0132] FIG. 6 is a flowchart showing a procedure of setting a
feature region from image data in a second embodiment of the
present invention.
[0133] FIG. 7 is a flowchart showing a shooting procedure in a
predictive AF mode if a feature region is set from image data in
the second embodiment of the present invention.
[0134] FIG. 8 is a flowchart showing a shooting procedure in a
predictive AF mode in a third embodiment of the present
invention.
[0135] FIG. 9 is a flowchart showing a shooting procedure in a
predictive AF mode in a fourth embodiment of the present
invention.
EXPLANATION OF THE SYMBOLS
[0136] 1 Digital camera
[0137] 2 Lens
[0138] 2a Focus lens
[0139] 2b Zoom lens
[0140] 3 Focus lens drive unit
[0141] 4 Zoom lens drive unit
[0142] 5 Control unit
[0143] 6 Focus lens position detection unit
[0144] 7 Zoom lens position detection unit
[0145] 8 Imaging element
[0146] 9 Analog signal processing unit
[0147] 10 Analog-digital converter
[0148] 11 Memory
[0149] 12 Bus
[0150] 13 Digital signal processing unit
[0151] 14 Compression/expansion unit
[0152] 15 Memory card
[0153] 16 Digital-analog converter
[0154] 17 Display unit
[0155] 18 Operation unit
[0156] 19 Power source button
[0157] 20 Release button
[0158] 21 Menu button
[0159] 22 Arrow key
[0160] 23 Enter button
[0161] 24 AF mode selection switch
[0162] 25 Feature region recognition calculation unit
* * * * *