U.S. patent application number 17/683634 was filed with the patent office on 2022-03-01 and published on 2022-06-09 as publication number 20220180657 for a collation system. This patent application is currently assigned to NEC Corporation. The applicant listed for this patent is NEC Corporation. The invention is credited to Taketo Kochi and Kenji Saito.
United States Patent Application 20220180657 (Kind Code A1)
Kochi; Taketo; et al.
Published: June 9, 2022
Application Number: 17/683634
Family ID: 1000006168067
Filed: March 1, 2022
COLLATION SYSTEM
Abstract
A collation system of the present invention includes imaging
means for acquiring a captured image of a pre-passage side area
with respect to each of gates arranged in parallel with each other,
and collation means for performing a collation process on the
captured image of the pre-passage side area for each of the gates,
between a previously registered target and a target included in the
captured image. The collation means performs the collation process
on the basis of a target in the captured image corresponding to one
of the gates and a target in the captured image corresponding to
another one of the gates.
Inventors: Kochi, Taketo (Tokyo, JP); Saito, Kenji (Tokyo, JP)
Applicant: NEC Corporation, Tokyo, JP
Assignee: NEC Corporation, Tokyo, JP
Family ID: 1000006168067
Appl. No.: 17/683634
Filed: March 1, 2022
Related U.S. Patent Documents

Application 16645821, filed Mar 10, 2020 (now Patent Number 11295116)
Application PCT/JP2018/029807, filed Aug 8, 2018
Application 17683634 (the present application)
Current U.S. Class: 1/1
Current CPC Class: G06V 40/165 (20220101); G06V 40/171 (20220101); G07C 9/10 (20200101); G06V 2201/07 (20220101)
International Class: G06V 40/16 (20060101) G06V040/16; G07C 9/10 (20060101) G07C009/10
Foreign Application Data

Sep 19, 2017 (JP) 2017-179064
Claims
1. A collation method comprising: acquiring a first captured image
that is a captured image of a pre-passage side area with respect to
a gate; acquiring a second captured image that is a captured image
of an area including at least a part of the pre-passage side area;
and using the second captured image in a collation process between
a previously registered target and a target included in the first
captured image.
2. The collation method according to claim 1, further comprising
determining a collation target on which the collation process is
performed, on a basis of a target located in a predetermined area
of the first captured image and a target in the second captured
image.
3. The collation method according to claim 2, further comprising
determining the collation target on a basis of a target located in
a first area of the first captured image and a target located in a
second area of the second captured image.
4. The collation method according to claim 3, further comprising
when the target located in the first area of the first captured
image is also located in the second area of the second captured
image at a timing when the target is located in the first area,
excluding the target located in the first area of the first
captured image from the collation target.
5. The collation method according to claim 4, further comprising
when determining that the target located in the first area of the
first captured image is also located in the second area of the
second captured image on a basis of a feature amount of the target
located in the first area of the first captured image and a feature
amount of the target located in the second area of the second
captured image, excluding the target located in the first area of
the first captured image from the collation target.
6. The collation method according to claim 5, further comprising
when determining that the target located in the first area of the
first captured image is also located in the second area of the
second captured image on a basis of an inter-eye distance of a
person who is the target located in the first area of the first
captured image and an inter-eye distance of a person who is the
target located in the second area of the second captured image,
excluding the target located in the first area of the first
captured image from the collation target.
7. The collation method according to claim 3, further comprising
setting the first area of the first captured image and the second
area of the second captured image, on a basis of the target in the
first captured image and the target in the second captured image
acquired at a timing when the target in the first captured image is
located in the first captured image.
8. The collation method according to claim 7, further comprising
when the target in the first captured image and the target in the
second captured image acquired at a timing when the target in the
first captured image is located in the first captured image are
identical, on a basis of a position of the target in the first
captured image and a position of the target in the second captured
image, setting a periphery of the position of the target in the
first captured image as the first area and setting a periphery of
the position of the target in the second captured image as the
second area.
9. An information processing device comprising: at least one memory
configured to store instructions; and at least one processor
configured to execute instructions to: acquire a first captured
image that is a captured image of a pre-passage side area with
respect to a gate; acquire a second captured image that is a
captured image of an area including at least a part of the
pre-passage side area; and use the second captured image in a
collation process between a previously registered target and a
target included in the first captured image.
10. A non-transitory computer-readable storage medium storing
thereon a program comprising instructions for causing an
information processing device to execute processing to: acquire a
first captured image that is a captured image of a pre-passage side
area with respect to a gate; acquire a second captured image that
is a captured image of an area including at least a part of the
pre-passage side area; and use the second captured image in a
collation process between a previously registered target and a
target included in the first captured image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a continuation application of
U.S. patent application Ser. No. 16/645,821 filed on Mar. 10, 2020,
which is a National Stage Entry of international application
PCT/JP2018/029807, filed on Aug. 8, 2018, which claims the benefit
of priority from Japanese Patent Application 2017-179064 filed on
Sep. 19, 2017, the disclosures of all of which are incorporated in
their entirety by reference herein.
TECHNICAL FIELD
[0002] The present invention relates to a collation system, and in
particular, to a collation system for performing collation on a
target that is about to pass through a gate.
BACKGROUND ART
[0003] As a means for limiting or managing persons who enter or
leave a specific location such as an office or an event site, a
collation system is used to perform collation on a person who is
about to pass through to check whether or not the person is a
previously registered person. In particular, since human face
authentication techniques have been developed recently, walkthrough
face authentication systems have been used to perform face
authentication from a face image of a person captured by a camera
installed at a gate.
[0004] Patent Literature 1: JP 2015-1790 A
SUMMARY
[0005] In a walkthrough face authentication system, a plurality of
gates may be installed adjacently, with the lanes on which persons
move toward the respective gates also provided adjacently. In that
case, an image capturing persons moving on the lane of one gate may
also show a person moving on the adjacent lane toward the adjacent
gate. The person moving on the adjacent lane may therefore be
erroneously recognized as a person moving on the own lane, which
may hinder collation of a person passing through the own gate
provided ahead of the own lane. Therefore, in a situation where a
plurality of gates are installed, erroneous recognition of a person
moving toward each gate should be suppressed.
[0006] Further, as art related to the walkthrough face
authentication system, art described in Patent Literature 1 has
been known. Patent Literature 1 addresses the problem that
authentication is completed while a person is still located away
from the gate, so that the gate is opened before the person reaches
it. To solve this problem, in Patent Literature 1, collation is
performed between the captured face image of a person and a
registered face image, and the person's approach to the gate is
detected based on the size of the collated person in the input
image. In addition, Patent Literature 1 attempts to prevent
erroneous recognition of a person moving toward an adjacent gate by
setting different determination reference values for the size of a
face area, for the respective areas of a captured image.
[0007] However, the art described in Patent Literature 1 uses only
a captured image from a camera installed at the gate that a person
is about to pass through, and detects a person's approach to the
own gate with use of a determination reference value for the
person's size set for each area of the captured image. This still
causes a problem that, when the area in which a person is shown in
a captured image is not appropriate or the determination reference
value is not appropriate, a person moving toward another gate is
erroneously recognized.
[0008] Further, the problem of erroneously recognizing targets
moving toward the gates may arise whenever collation is performed
on any target, not only when the target that is about to pass
through the gate is a person.
[0009] Therefore, an object of the present invention is to provide
a collation system capable of solving the problem described above,
that is, a problem of erroneous recognition of a target that is
about to pass through a gate.
[0010] A collation system, according to one aspect of the present
invention, includes
[0011] an imaging means for acquiring a captured image of a
pre-passage side area with respect to each of gates arranged in
parallel with each other, and
[0012] a collation means for performing a collation process on the
captured image of the pre-passage side area for each of the gates,
between a previously registered target and a target included in the
captured image.
[0013] The collation means is configured to perform the collation
process on the basis of a target in the captured image
corresponding to one of the gates and a target in the captured
image corresponding to another one of the gates.
[0014] Further, an information processing apparatus, according to
one aspect of the present invention, includes
[0015] a collation means for performing a collation process on a
captured image of a pre-passage side area with respect to each of
gates installed in parallel with each other, between a previously
registered target and a target in the captured image.
[0016] The collation means is configured to perform the collation
process on the basis of a target in the captured image
corresponding to one of the gates and a target in the captured
image corresponding to another one of the gates.
[0017] Further, a program, according to one aspect of the present
invention, is a program for causing an information processing
apparatus to realize
[0018] a collation means for performing a collation process on a
captured image of a pre-passage side area with respect to each of
gates installed in parallel with each other, between a previously
registered target and a target in the captured image.
[0019] The collation means is configured to perform the collation
process on the basis of a target in the captured image
corresponding to one of the gates and a target in the captured
image corresponding to another one of the gates.
[0020] Further, a collation method, according to one aspect of the
present invention, is a method including performing a collation
process on a captured image of a pre-passage side area with respect
to each of gates installed in parallel with each other, between a
previously registered target and a target in the captured
image.
[0021] The collation process is performed on the basis of a target
in the captured image corresponding to one of the gates and a
target in the captured image corresponding to another one of the
gates.
[0022] As the present invention is configured as described above,
erroneous recognition of a target that is about to pass through a
gate can be suppressed.
BRIEF DESCRIPTION OF DRAWINGS
[0023] FIG. 1 illustrates a used state of a face authentication
system according to a first exemplary embodiment of the present
invention.
[0024] FIG. 2 is a block diagram illustrating a configuration of
the face authentication system according to the first exemplary
embodiment of the present invention.
[0025] FIG. 3 illustrates an imaging state by the face
authentication system disclosed in FIG. 1.
[0026] FIG. 4A illustrates a captured image captured in the imaging
state of FIG. 3 by the face authentication system disclosed in FIG.
1.
[0027] FIG. 4B illustrates a captured image captured in the imaging
state of FIG. 3 by the face authentication system disclosed in FIG.
1.
[0028] FIG. 5 illustrates an imaging state by the face
authentication system disclosed in FIG. 1.
[0029] FIG. 6A illustrates a captured image captured in the imaging
state of FIG. 5 by the face authentication system disclosed in FIG.
1.
[0030] FIG. 6B illustrates a captured image captured in the imaging
state of FIG. 5 by the face authentication system disclosed in FIG.
1.
[0031] FIG. 7 is a flowchart illustrating a processing operation by
the face authentication system disclosed in FIG. 1.
[0032] FIG. 8 illustrates an imaging state by a face authentication
system according to a second exemplary embodiment of the present
invention.
[0033] FIG. 9A illustrates a captured image captured in the imaging
state of FIG. 8 by the face authentication system according to the
second exemplary embodiment of the present invention.
[0034] FIG. 9B illustrates a captured image captured in the imaging
state of FIG. 8 by the face authentication system according to the
second exemplary embodiment of the present invention.
[0035] FIG. 10 illustrates an imaging state by the face
authentication system according to the second exemplary embodiment
of the present invention.
[0036] FIG. 11A illustrates a captured image captured in the
imaging state of FIG. 10 by the face authentication system
according to the second exemplary embodiment of the present
invention.
[0037] FIG. 11B illustrates a captured image captured in the
imaging state of FIG. 10 by the face authentication system
according to the second exemplary embodiment of the present
invention.
[0038] FIG. 12 is a flowchart illustrating a processing operation
by the face authentication system according to the second exemplary
embodiment of the present invention.
[0039] FIG. 13 is a block diagram illustrating a configuration of a
face authentication system according to a fourth exemplary
embodiment of the present invention.
[0040] FIG. 14 illustrates an imaging state by the face
authentication system according to the fourth exemplary embodiment
of the present invention.
[0041] FIG. 15A illustrates a captured image captured in the
imaging state of FIG. 14 by the face authentication system
according to the fourth exemplary embodiment of the present
invention.
[0042] FIG. 15B illustrates a captured image captured in the
imaging state of FIG. 14 by the face authentication system
according to the fourth exemplary embodiment of the present
invention.
[0043] FIG. 16A illustrates a state of image processing by the face
authentication system according to the fourth exemplary embodiment
of the present invention.
[0044] FIG. 16B illustrates a state of image processing by the face
authentication system according to the fourth exemplary embodiment
of the present invention.
[0045] FIG. 17 is a block diagram illustrating a configuration of a
collation system according to a fifth exemplary embodiment of the
present invention.
[0046] FIG. 18 is a block diagram illustrating a configuration of
an information processing apparatus according to the fifth
exemplary embodiment of the present invention.
EXEMPLARY EMBODIMENTS
First Exemplary Embodiment
[0047] A first exemplary embodiment of the present invention will
be described with reference to FIGS. 1 to 7. FIG. 1 illustrates a
used state of a face authentication system. FIG. 2 illustrates a
configuration of a face authentication system. FIGS. 3 to 7 are
diagrams for explaining processing operation of the face
authentication system.
[Overall Configuration]
[0048] A face authentication system 10 (collation system) of the
present invention is a system to be used for limiting and managing
entrance/exit of persons (targets) at a specific location such as
an office or an event site. For example, an imaging device
constituting the face authentication system 10 is installed for
each gate that is opened and closed when a person enters or leaves,
in the vicinity of the installed location of the gate.
[0049] In the example illustrated in FIG. 1, three gates G1, G2,
and G3 are adjacently arranged in parallel with each other, and are
configured such that persons go through in a direction shown by
arrows from the right side in FIG. 1 toward the respective gates
G1, G2, and G3. Therefore, the right side area in FIG. 1 with
respect to each of the gates G1, G2, and G3 is an area before a
person passes through the gate (pre-passage side area). In the
pre-passage side areas of the respective gates G1, G2, and G3,
lanes R1, R2, and R3, on which persons who are about to pass
through the gates G1, G2, and G3 move in lines, are located in
parallel with each other corresponding to the gates G1, G2, and G3,
respectively. Note that the respective lanes R1, R2, and R3 may or
may not be partitioned with some members.
[0050] In the state illustrated in FIG. 1, imaging devices C1, C2,
and C3 constituting the face authentication system 10 in the
present embodiment are installed in the vicinity of the
corresponding gates G1, G2, and G3, respectively, on the right side
thereof as viewed from the persons moving toward the respective
gates G1, G2, and G3. However, the installation positions of the
imaging devices are not limited to the positions as illustrated in
FIG. 1. They may be installed at any positions such as a left side
as viewed toward the gates or above the gates. Note that the face
authentication system 10 also has display devices in the vicinity
of the imaging devices C1, C2, and C3.
[0051] In the face authentication system 10, at the gate G1, for
example, an image of a person moving toward the gate G1 is captured
by the installed imaging device C1. Then, from a face image of the
person shown in the captured image, collation is performed to check
whether or not the person is a previously registered person. When
the collation succeeds, a process of opening the gate G1 is
performed so as to allow the person to pass through. Note that face
authentication systems with the imaging devices C2 and C3 are also
installed at the other gates G2 and G3, respectively, and
collation is performed on persons moving toward the respective
gates G2 and G3. The configuration of the face authentication
system 10 will be described in detail below. Note that while
description will be mainly given below on the face authentication
system 10 provided corresponding to the gate G1, a face
authentication system provided corresponding to another gate G2 or
G3 has the same configuration.
[Configuration of Face Authentication System]
[0052] The face authentication system 10 of the present embodiment
is an information processing apparatus including an arithmetic unit
and a storage unit integrally formed with the imaging device C1
(camera) and a display device D1 (display). Put another way, an
information processing apparatus having an arithmetic unit that
performs face authentication processing and a storage unit, and the
display device D1 are mounted on the imaging device C1. However,
the face authentication system 10 is not necessarily limited to
that integrally formed with the imaging device C1 and the display
device D1. For example, the imaging device C1, the display device
D1, and the information processing apparatus that processes
captured images may be different devices and installed at different
locations.
[0053] Specifically, as illustrated in FIG. 2, the face
authentication system 10 includes the imaging device C1, the
display device D1, and a communication unit 16, and also includes a
target extraction unit 11, a target determination unit 12, a
collation unit 13, and a gate control unit 14, each constructed by
execution of a program by the arithmetic unit. The face
authentication system 10 also includes a collation data storage
unit 17 that is constructed in the storage unit.
[0054] The imaging device C1 (imaging means) is provided with a
camera for acquiring a captured image of a pre-passage side area
with respect to the gate G1, that is, an area in front of the gate
G1 of the corresponding lane R1, at a predetermined frame rate, and
a camera control unit. As illustrated in FIG. 3, the capturing area
of the imaging device C1 is a range between lines C1a, for example.
Here, the capturing area of the imaging device C1 is set such that
a person P10, moving on the lane R1 (own lane) corresponding to the
gate G1 at which the imaging device C1 is installed, is located in
a center area in the horizontal direction between lines C1b of the
capturing area. Note that the captured image is set to be roughly
focused in a range of a preset distance in the approaching
direction with respect to the imaging device C1, that is, a
distance from a line L1 to a line L2 illustrated in FIG. 3, for
example.
[0055] Since the capturing area of the imaging device C1 is set as
described above, in an end area in the horizontal direction
relative to the center area, that is, in an area between the line
C1a and the line C1b of the capturing area, a person P20 moving on
the lane R2 corresponding to the adjacent gate G2 may be shown, in
the state illustrated in FIG. 3. Further, in the state illustrated
in FIG. 5, a person P12 located on the own lane R1, corresponding
to the own gate G1 at which the imaging device C1 is installed, may
be shown in an end area. Even if such a state is caused, the face
authentication system 10 of the present invention is configured to
exclude the person P20 who is about to pass through the adjacent
gate G2 from the collation process target, and appropriately
recognize the person P12 who is about to pass through the own gate
G1, to thereby enable determination of propriety of gate
passage.
[0056] The target extraction unit 11 acquires a captured image from
the imaging device C1, and extracts a person who is a processing
target from the captured image. Extraction of a person is performed
by, for example, extracting a moving object, or performing
determination from the position of a characteristic shape part or
the position of a color with respect to the overall shape or
overall image. At that time, the target extraction unit 11 also
identifies an area in the captured image where the extracted person
is located. In particular, in the present embodiment, it is
identified whether the extracted person is located in a center area
or left and right end areas in the horizontal direction of the
captured image. For example, in the state of FIG. 3, in a captured
image by the imaging device C1 corresponding to the gate G1, a
person P10 located in the center area as illustrated in FIG. 4A is
extracted, and a person P20 located in the right end area is
detected.
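As a concrete illustration of this area identification, the following Python sketch classifies a detected person's horizontal position into the center area or an end area. The patent does not specify an implementation; the class, the boundary fractions standing in for the lines C1b, and the example values are all assumptions.

```python
# Minimal sketch of area identification; the boundary fractions and all
# names are illustrative assumptions, not taken from the patent text.
from dataclasses import dataclass

@dataclass
class Detection:
    person_id: str
    x_center: float  # horizontal center of the person's bounding box (pixels)

def identify_area(det: Detection, image_width: int,
                  center_band: tuple = (0.3, 0.7)) -> str:
    """Return 'center' if the person lies between the lines C1b, else 'end'."""
    left = image_width * center_band[0]
    right = image_width * center_band[1]
    return "center" if left <= det.x_center <= right else "end"

# Example: P10 near the middle, P20 near the right edge of a 1920-px frame.
print(identify_area(Detection("P10", 950.0), 1920))   # -> center
print(identify_area(Detection("P20", 1800.0), 1920))  # -> end
```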
[0057] The target determination unit 12 (collation means)
determines whether or not the persons P10 and P20 extracted by the
target extraction unit 11 are persons moving toward the own gate
G1. At this time, when the person P20 is located in an end area
(first area), the target determination unit 12 determines whether
or not the person P20 is moving toward the own gate G1, on the
basis of a captured image captured by the imaging device C2 of the
face authentication system 20 corresponding to another gate G2. In
this example, since the person P20 is shown in an end area on the
right side in the captured image corresponding to the own gate G1
illustrated in FIG. 4A, a captured image captured by the imaging
device C2 of the gate G2, adjacent on the right side to the gate
G1, is acquired. Specifically, the target determination unit 12
communicates with the face authentication system 20 of the adjacent
gate G2 via the communication unit 16, and acquires a captured
image captured at the gate G2 at a timing that is the same as the
timing when the captured image is captured at the gate G1 (for
example, the same time). Note that the positional relationship
between the face authentication systems (imaging devices C1, C2,
and C3) corresponding to the gates G1, G2, and G3, respectively, is
assumed to be registered in advance in the respective systems.
[0058] Then, the target determination unit 12 checks whether or not
the person P20 located in an end area of the captured image of the
own gate G1 is shown in the center area (second area) of the
captured image of the adjacent gate G2. In the present embodiment,
when any person is shown in the center area of the captured image
of the adjacent gate G2, such a person is determined to be the
person P20 located in the end area of the captured image of the own
gate G1. In that case, in the face authentication system 10 of the
own gate G1, it is determined to exclude the person P20 located in
the end area of the captured image from the collation target. Note
that the end area (first area) of the captured image of the own
gate G1 and the center area (second area) of the captured image of
the adjacent gate G2 are areas different from each other on the
captured images, but correspond to an overlapping space in the real space.
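A minimal sketch of this exclusion rule, assuming detections have already been reduced to (person id, area) pairs from frames captured at the same timing; the function name and data shapes are illustrative, not the patent's implementation:

```python
# Illustrative sketch of the exclusion rule of paragraph [0058].
def select_collation_targets(own_detections, adjacent_detections):
    """own_detections / adjacent_detections: lists of (person_id, area)
    tuples from frames captured at the same timing at the own gate and
    the adjacent gate. Returns the ids to pass on to the collation unit."""
    adjacent_center_occupied = any(area == "center"
                                   for _, area in adjacent_detections)
    targets = []
    for person_id, area in own_detections:
        # End-area persons are excluded when someone is shown in the
        # adjacent gate's center area (taken to be the same person).
        if area == "end" and adjacent_center_occupied:
            continue
        targets.append(person_id)
    return targets

# FIG. 3 situation: P10 in the center, P20 in the end area, and the
# adjacent gate's camera sees a person in its center area -> only P10.
print(select_collation_targets([("P10", "center"), ("P20", "end")],
                               [("P20", "center")]))  # -> ['P10']
```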
[0059] For example, in the state of FIG. 3, the person P20 located
in the end area of the captured image of the own gate G1
illustrated in FIG. 4A is located in the center area of the
captured image of the adjacent gate G2 illustrated in FIG. 4B.
Therefore, the person P20 is excluded from the collation target of
the face authentication system 10 of the own gate G1. This means
that in the face authentication system 10 of the own gate G1, only
the person P10 located in the center area of the captured image is
determined to be a collation target. Further, in the state of FIG.
5, the person P12 located in the end area of the captured image of
the own gate G1 illustrated in FIG. 6A is not located in the center
area of the captured image of the adjacent gate G2 illustrated in
FIG. 6B. Therefore, the person P12 is handled as a collation target
of the face authentication system 10 of the own gate G1. This means
that in the face authentication system 10 of the own gate G1, the
person P10 located in the center area of the captured image and the
person P12 located in the end area are determined to be collation
targets.
[0060] Note that the target determination unit 12 may determine
whether or not a person is a collation target of the own gate by a
method other than that described above. As an example, when a
person is located in the end area (first area) of the captured
image, the target determination unit 12 may inquire of the face
authentication system 20 corresponding to another gate G2 whether
or not the person is a person of the own gate. Specifically, in the
example of FIGS. 3, 4A, and 4B, the person P20 is shown in the
right end area of the captured image corresponding to the own gate
G1 illustrated in FIG. 4A. Therefore, the target determination unit
12 inquires of the face authentication system 20 of the gate G2
adjacent on the right side to the gate G1 whether or not the person
P20 is shown in the center area of the adjacent gate G2. Similarly,
in the example of FIGS. 5, 6A, and 6B, the person P12 is shown in
the right end area of the captured image corresponding to the own
gate G1 illustrated in FIG. 6A. Therefore, the target determination
unit 12 inquires of the face authentication system 20 of the gate
G2 adjacent on the right side to the gate G1 whether or not the
person P12 is shown in the center area of the adjacent gate G2. At
that time, the target determination unit 12 communicates with the
face authentication system 20 of the adjacent gate G2 via the
communication unit 16, and inquires whether or not a person is
shown in the center area of a captured image captured at the gate
G2 at a timing that is the same as the timing when the captured
image is captured at the gate G1 (for example, the same time).
[0061] Then, upon receipt of the inquiry, the face authentication
system 20 of the adjacent gate G2 checks whether or not any person
is shown in the center area (second area) of the own captured
image, and notifies the target determination unit 12 of the face
authentication system 10 of the gate G1, from which the inquiry is
made, of the result. Specifically, the face authentication system
20 of the adjacent gate G2 has a function similar to that of the
target extraction unit 11 described above. The face authentication
system 20 acquires a captured image from the imaging device C2,
extracts a person who is a processing target from the captured
image, and identifies an area in the captured image where the
extracted person is located. Then, the face authentication system
20 of the adjacent gate G2 notifies the face authentication system
10 of the gate G1 whether or not a person is shown in the center
area, as a response to the inquiry.
[0062] Upon receipt of a notification of the response to the
inquiry from the face authentication system 20 of the adjacent gate
G2, the target determination unit 12 of the face authentication
system 10 of the gate G1 determines as described below, depending
on the response. Upon receipt of a notification indicating that a
person is shown in the center area of the captured image of the
adjacent gate G2 (example of FIGS. 3, 4A, and 4B), the target
determination unit 12 determines that such a person is the person
P20 located in the end area of the captured image of the own gate
G1. In that case, in the face authentication system 10 of the own
gate G1, it is determined to exclude the person P20 located in the
end area of the captured image from the collation target. On the
other hand, upon receipt of a notification indicating that no
person is shown in the center area of the captured image of the
adjacent gate G2 (example of FIGS. 5, 6A and 6B), the target
determination unit 12 determines to handle the person P12 located
in the end area of the captured image of the own gate G1 as a
collation target.
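The inquiry variant above can be sketched as follows; only a boolean answer crosses between the systems, not the captured image itself. The class and method names are hypothetical stand-ins for the face authentication systems 10 and 20, not a protocol defined by the patent.

```python
# Sketch of the inquiry/response exchange of paragraphs [0060]-[0062].
class GateAuthSystem:
    def __init__(self, gate_id: str):
        self.gate_id = gate_id
        self.neighbors = {}          # direction ('left'/'right') -> system
        self.latest_detections = []  # (person_id, area) for the latest frame

    def center_area_occupied(self) -> bool:
        """Answer an inquiry: is any person shown in our center area?"""
        return any(area == "center" for _, area in self.latest_detections)

    def is_collation_target(self, person_id: str, area: str, side: str) -> bool:
        """Decide locally, asking the neighbor on the given side if needed."""
        if area != "end":
            return True
        neighbor = self.neighbors.get(side)
        # If the neighboring system reports a person in its center area,
        # the end-area person is excluded from collation here.
        return not (neighbor and neighbor.center_area_occupied())

g1, g2 = GateAuthSystem("G1"), GateAuthSystem("G2")
g1.neighbors["right"] = g2
g2.latest_detections = [("P20", "center")]
print(g1.is_collation_target("P20", "end", "right"))  # -> False (excluded)
```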
[0063] By determining whether or not a person is a collation target
by means of the method described above, a person extraction process
is performed in each of the face authentication systems 10 and 20.
Therefore, processing is not concentrated on one face
authentication system, and also, image transfer is not performed.
Accordingly, a processing load on one face authentication system 10
can be suppressed, and a communication amount between the face
authentication systems 10 and 20 can also be suppressed. Thereby,
quick determination can be made.
[0064] The collation unit 13 (collation means) performs a collation
process on the person determined to be a collation target in the
captured image of the own gate G1 by the target determination unit
12. In other words, the collation unit 13 does not perform a
collation process on the person excluded from the collation target
by the target determination unit 12 even though the person is shown
in the captured image of the own gate G1. Here, a collation process
is performed as described below, for example. First, a face area of
a person who is a target of a collation process is identified, and
a feature amount required for collation is generated from the face
area. Then, a collation score such as similarity between the
generated feature amount and the feature amount of the person
having been registered in the collation data storage unit 17 is
calculated, and it is determined whether or not the collation score
is higher than a threshold. When the collation score is higher than
the threshold, it is determined that the collation has succeeded
and that the person who is about to pass through the gate G1 is the
person having been registered. At this time, the feature amount of
the person to be detected for collation may be a feature amount
used in an existing face collation technique, and may be a feature
amount calculated by any method. Also, as the collation method, any
method may be used.
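As one plausible reading of this collation process, the sketch below scores a detected feature amount against registered ones with cosine similarity and applies a threshold; the patent leaves the feature amount and collation method open, so the metric, the threshold value, and the gallery structure are assumptions.

```python
# Illustrative collation score: cosine similarity between face feature
# vectors, compared against a threshold as in paragraph [0064].
import numpy as np

def collation_score(feature: np.ndarray, registered: np.ndarray) -> float:
    """Cosine similarity between a detected and a registered feature amount."""
    return float(np.dot(feature, registered) /
                 (np.linalg.norm(feature) * np.linalg.norm(registered)))

def collate(feature: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return the id of the best-matching registered person, or None."""
    best_id, best_score = None, threshold
    for person_id, registered in gallery.items():
        score = collation_score(feature, registered)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

gallery = {"alice": np.array([0.9, 0.1, 0.2]),
           "bob": np.array([0.1, 0.95, 0.3])}
probe = np.array([0.88, 0.12, 0.21])
print(collate(probe, gallery))  # -> 'alice'
```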
[0065] The gate control unit 14 first determines propriety of
passage of the person with respect to the gate G1, based on the
collation result by the collation unit 13. Specifically, it is
determined that the person whose collation by the collation unit 13
has succeeded is allowed to pass through. The gate control
unit 14 also has a function of displaying the collation result,
that is, success or failure of collation, on the display device D1.
Moreover, the gate control unit 14 also has a gate control function
to open and close the gate G1, and performs control to open the
gate G1 for the person determined to be allowed to pass
through.
[0066] Note that the display device D1 is installed such that the
display surface faces the pre-passage side area of the gate G1 so
as to be viewable by a person who is about to pass through the gate
G1. However, the display device D1 is not necessarily provided.
[Operation]
[0067] Next, operation of the face authentication system 10 as
described above will be described with reference to the flowchart
of FIG. 7. Here, operation of the face authentication system 10
corresponding to the gate G1 will be described. Description will be
given on the case where the capturing state by the imaging device
C1 is as illustrated in FIGS. 3 to 6B, as an example.
[0068] The imaging device C1 corresponding to the gate G1
continuously captures images of the pre-passage side area of the
gate G1. Then, the face authentication system 10 regularly performs
processing, as described below, on the captured images.
[0069] First, the target extraction unit 11 extracts a person
(target) to be processed from a captured image (step S1). Then,
when the person P20 is located in an end area of the captured
image, the target determination unit 12 acquires a captured image
captured by the imaging device C2 of the face authentication system
20 corresponding to the adjacent gate G2 (step S2). At this time, a
captured image of the adjacent gate G2 that is captured at the same
timing as the captured image of the own gate G1 is acquired.
[0070] Then, the target determination unit 12 checks whether or not
the person P20 located in the end area of the captured image of the
own gate G1 is shown in the center area of the captured image of
the adjacent gate G2 (step S3). When any person is shown in the
center area of the captured image of the adjacent gate G2, the
target determination unit 12 determines that such a person is the
person P20 located in the end area of the captured image of the own
gate G1. In that case, in the face authentication system 10 of the
own gate G1, it is determined to exclude the person P20 located in
the end area from the collation target.
[0071] For example, in the state of FIG. 3, the person P20 located
in an end area of the captured image of the own gate G1 illustrated
in FIG. 4A is located in the center area of the captured image of
the adjacent gate G2 illustrated in FIG. 4B. Therefore, the person
P20 is excluded from the collation target of the face
authentication system 10 of the own gate G1 (Yes at step S3).
Further, in the state of FIG. 5, the person P12 located in an end
area of the captured image of the own gate G1 illustrated in FIG.
6A is not located in the center area of the captured image of the
adjacent gate G2 illustrated in FIG. 6B. Therefore, the person P12
is not excluded from the collation target by the face
authentication system 10 of the own gate G1, and is handled as a
collation target (No at step S3).
[0072] Then, the collation unit 13 performs a collation process on
the person determined to be a collation target in the captured
image of the own gate G1 by the target determination unit 12 (step
S4). In the example of FIGS. 3, 4A, and 4B, the collation unit 13
initiates the collation process only for the person P10, and in the
example of FIGS. 5, 6A, and 6B, the collation unit 13 initiates the
collation process for the person P10 and the person P12. In the
collation process, first, a feature amount necessary for collation
is detected from the face area of a person who is a target of the
collation process, and the feature amount is collated with the
feature amount of the person having been registered in the
collation data storage unit 17, whereby a collation score is
acquired. Then, it is determined whether or not the collation score
exceeds a threshold.
[0073] As a result of the collation process by the collation unit 13,
when collation of the person who is about to pass through has
succeeded (Yes at step S5), the gate control unit 14 allows passage
of the person with respect to the gate G1 and performs control to
open the gate G1 (step S6).
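Tying the flowchart together, here is a self-contained sketch of steps S1 to S6; every name is a hypothetical stand-in for the units described above, and the collation step is reduced to a trivial lookup only to keep the example runnable.

```python
# Sketch of the FIG. 7 processing loop (steps S1-S6); illustrative only.
from dataclasses import dataclass

@dataclass
class Person:
    person_id: str
    area: str       # 'center' or 'end', from the target extraction unit
    feature: tuple  # feature amount used for collation

def center_area_occupied(detections):
    # True when any person is shown in the frame's center area.
    return any(p.area == "center" for p in detections)

def collation_succeeds(person, gallery):
    # Stand-in for the collation unit: an exact-match lookup instead of
    # a real similarity score, only to keep the sketch self-contained.
    return person.feature in gallery

def process_frame(own_detections, adjacent_detections, gallery, open_gate):
    for person in own_detections:                      # S1: extracted targets
        if (person.area == "end"
                and center_area_occupied(adjacent_detections)):
            continue                                   # S2-S3: excluded
        if collation_succeeds(person, gallery):        # S4-S5: collation
            open_gate(person.person_id)                # S6: allow passage

gallery = {(1.0, 2.0)}
own = [Person("P10", "center", (1.0, 2.0)), Person("P20", "end", (3.0, 4.0))]
adjacent = [Person("P20", "center", (3.0, 4.0))]
process_frame(own, adjacent, gallery,
              lambda pid: print(f"gate opened for {pid}"))
# -> gate opened for P10
```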
[0074] As described above, according to the face authentication
system 10 of the present embodiment, it is possible to prevent a
collation process with respect to the person P20 who is about to
pass through the adjacent gate G2, and to perform a collation
process appropriately on the persons P10 and P12 who are about to
pass through the own gate G1. For example, in the example of FIGS.
3, 4A, and 4B, a collation process is performed on the person P10
who is about to pass through the own gate G1 but is not performed
on the person P20 who is about to pass through the adjacent gate
G2. Further, in the example of FIGS. 5, 6A, and 6B, even for the
person P12 shown in the end area of the captured image, since the
person P12 is about to pass through the own gate G1, a collation
process is performed appropriately. As a result, erroneous
recognition of a person who is about to pass through another gate
can be suppressed.
[0075] Note that while a captured image of the own gate G1 is
compared with a captured image of the adjacent gate G2 in the above
description, it is also possible to compare it with a captured
image of another gate (for example, gate G3), without being limited
to a captured image of the adjacent gate G2. Moreover, while an end
area of a captured image of the own gate G1 is compared with the
center area of the adjacent gate G2, the areas to be compared with
each other are not limited to those described above. Any areas between
captured images of different gates may be compared.
[0076] Furthermore, while the face authentication system 10
corresponding to the own gate G1 acquires a captured image of the
adjacent gate G2 and determines whether or not a person is a
collation target in the above description, such a determination
process may be performed by the face authentication system of the
adjacent gate G2. That is, the face authentication system 10 of the
own gate G1 may provide the face authentication system 20 of the
adjacent gate G2 with a captured image, and the face authentication
system 20 of the adjacent gate G2 may return the determination
result to the face authentication system 10 of the own gate G1.
Further, such a determination process may be performed by another
information processing apparatus.
[0077] Moreover, the face authentication system 10 may inquire of
the face authentication system 20 of the adjacent gate G2 whether
or not a person is shown in the center area of the captured image
of the adjacent gate G2 as described above, and determine whether
or not the person is a collation target, depending on the response.
In that case, when a given notification indicates that a person is
shown in the center area of the captured image of the adjacent gate
G2, it is determined to exclude the person located in the end area
of the captured image of the own gate G1 from the collation target.
On the other hand, when a given notification indicates that no
person is shown in the center area of the captured image of the
adjacent gate G2, it is determined to handle the person located in
the end area of the captured image of the own gate G1 as a
collation target.
[0078] Further, while description has been given on the case where
a target that is about to pass through the gate G1 is a person as
an example, it is not limited to a person but may be any object.
For example, an object such as baggage is also acceptable.
Second Exemplary Embodiment
[0079] A second exemplary embodiment of the present invention will
be described with reference to FIGS. 8 to 12. FIGS. 8 to 12 are
diagrams for explaining a processing operation of the face
authentication system. In particular, FIG. 12 is a flowchart
illustrating an operation of the face authentication system.
[0080] A face authentication system 10 of the present embodiment
has a configuration similar to that of the face authentication
system of the first exemplary embodiment described above. However,
a process of determining whether or not to handle a person shown in
a captured image of the own gate G1 as a collation target is
different. A configuration different from that of the first
exemplary embodiment will be mainly described in detail below.
[0081] In the present embodiment, when the target extraction unit
11 extracts the person P20 from an end area of a captured image of
the own gate G1 (step S11), the target determination unit 12
determines whether or not a person same as the person P20 is
located in the center area of a captured image of another gate G2.
In the example of FIGS. 8 and 10, since the person P20 is shown in
the right end area in the captured image corresponding to the own
gate G1, the target determination unit 12 acquires a captured image
captured by the imaging device C2 of the gate G2, adjacent on the
right side to the gate G1 (step S12). Specifically, the target
determination unit 12 communicates with the face authentication
system 20 of the adjacent gate G2 via the communication unit 16,
and acquires a captured image captured at the gate G2 at the same
timing as the captured image captured at the gate G1 (for example,
the same time). Note that the positional relationship between the
face authentication systems (imaging devices C1, C2, and C3)
corresponding to the gates G1, G2, and G3, respectively, is assumed
to be registered in advance in the respective systems.
[0082] Then, the target determination unit 12 determines the
sameness between the person P20 located in the end area of the
captured image of the own gate G1 and the person P20 located in the
center area (second area) of the captured image of the adjacent
gate G2. In the present embodiment, first, a face area of each
person is specified, and a feature amount required for collation is
extracted from the face area (step S13). Then, from the feature
amounts of the both persons, it is determined whether or not the
both persons are the same person (step S14). Note that the process
of determining whether or not the persons match each other may be a
process having relatively low accuracy compared with the collation
process described below. For example, it is possible to detect only
a feature amount of a part of a person, or detect gender or age
from the feature amount to thereby determine the sameness from such
information. However, any process may be used as a process of
determining the sameness of the persons.
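A sketch of this sameness determination, under the stated allowance that it may be coarser than the full collation process; the Euclidean-distance test and the tolerance value are illustrative choices, not the patent's method:

```python
# Coarse same-person check for steps S13-S14; illustrative only.
import numpy as np

def same_person(feature_a: np.ndarray, feature_b: np.ndarray,
                tolerance: float = 0.5) -> bool:
    """Loosely decide whether two face feature amounts belong to the same
    person. The text notes this determination may have relatively low
    accuracy compared with the collation process itself."""
    return float(np.linalg.norm(feature_a - feature_b)) < tolerance

end_area_feature = np.array([0.42, 0.88, 0.15])     # own gate's end area
center_area_feature = np.array([0.45, 0.85, 0.17])  # adjacent center area
if same_person(end_area_feature, center_area_feature):
    print("exclude from the own gate's collation targets")
```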
[0083] Then, when it is determined that the same person as the
person P20 located in an end area of the captured image of the own
gate G1 is located in the center area of the captured image of the
adjacent gate G2, the target determination unit 12 determines that
the person P20 is a person moving toward the adjacent gate G2 (Yes
at step S15). In that case, in the face authentication system 10 of
the own gate G1, it is determined to exclude the person P20 located
in the end area from the collation target. On the other hand, when
it is determined that the same person as the person P12 located in
an end area of the captured image of the own gate G1 is not located
in the center area of the captured image of the adjacent gate G2,
the target determination unit 12 determines that the person P12 is
a person moving toward the own gate G1 (No at step S15). In that
case, in the face authentication system 10 of the own gate G1, it
is determined that the person P12 located in the end area is a
collation target. Note that the end area (first area) of the
captured image of the own gate G1 and the center area (second area)
of the captured image of the adjacent gate G2 are areas different
from each other when the captured images are compared, but are
areas overlapping each other in the real space.
[0084] For example, in the state of FIG. 8, the person P20 located
in an end area of the captured image of the own gate G1 illustrated
in FIG. 9A is located in the center area of the captured image of
the adjacent gate G2 illustrated in FIG. 9B. Therefore, the person
P20 is excluded from the collation target of the face
authentication system 10 of the own gate G1. This means that in the
face authentication system 10 of the own gate G1, only the person
P10 located in the center area of the captured image is determined
to be a collation target. Further, in the state of FIG. 10, a
person who is the same as the person P12 located in the end area of
the captured image of the own gate G1 illustrated in FIG. 11A is
not located in the center area of the captured image of the
adjacent gate G2 illustrated in FIG. 11B. Therefore, the person P12
is handled as a collation target of the face authentication system
10 of the own gate G1. This means that in the face authentication
system 10 of the own gate G1, the person P10 located in the center
area and the person P12 located in the end area of the captured
image are determined to be collation targets.
[0085] The collation unit 13 (collation means) performs a collation
process on the person determined to be a collation target in the
captured image of the own gate G1 by the target determination unit
12 (step S16). In other words, the collation unit 13 does not
perform a collation process on the person excluded from the
collation target by the target determination unit 12 even though
the person is shown in the captured image of the own gate G1. Here,
a collation process is performed as described below, for example.
First, the face area of a person who is a target of the collation
process is identified, and a feature amount required for collation
is generated from the face area. Then, a collation score such as
similarity between the generated feature amount and the feature
amount of the person having been registered in the collation data
storage unit 17 is calculated, and it is determined whether or not
the collation score is higher than a threshold. When the collation
score is higher than the threshold, it is determined that the
collation has succeeded and that the person who is about to pass
through the gate G1 is the person having been registered. Note that
any method may be used as the collation method.
[0086] The gate control unit 14 determines propriety of passage of
the person with respect to the gate G1, based on the collation
result by the collation unit 13. Specifically, the gate control
unit 14 determines that the person whose collation by the collation
unit 13 has succeeded is allowed to pass through (step S17), and
performs control to open the gate G1 (step S18).
[0087] As described above, in the face authentication system of the
present embodiment, by extracting and comparing feature amounts of
persons in captured images of the gates, it is possible to
determine whether or not the same target is shown in captured
images of other gates more reliably. Then, it is possible to
exclude a target shown in an overlapping manner in the captured
images of a given gate and another gate from the collation process,
and to suppress
erroneous recognition of a target who is about to pass through a
gate. Moreover, it is possible to use the feature amounts of the
targets extracted for determining overlapping of the targets, for
the collation process. This enhances the efficiency of the
process.
[0088] Note that while description has been given on the case where
a target that is about to pass through the gate G1 is a person as
an example, it is not limited to a person but may be any object.
For example, an object such as baggage is also acceptable.
Third Exemplary Embodiment
[0089] Next, a third exemplary embodiment of the present invention
will be described. In the face authentication system 10 of the
present embodiment, basically, it is determined whether or not a
person shown in an end area of a captured image is a person moving
toward the own gate G1, similar to the first exemplary embodiment
described above. That is, it is determined whether or not a person
shown in an end area of a captured image of the own gate G1 is a
person moving toward the own gate G1, depending on whether or not
the person is shown in the center area of a captured image captured
by the face authentication system 20 corresponding to another gate
G2. However, in the present embodiment, the determination process
differs as described below. The configuration different from that
of the first exemplary embodiment will be mainly described in
detail below.
[0090] First, the target extraction unit 11 of the present
embodiment acquires a captured image from the imaging device C1,
and extracts a person who is a processing target from the captured
image. At that time, the target extraction unit 11 also identifies
an area in the captured image where the extracted person is
located. Then, the target extraction unit 11 of the present
embodiment determines whether or not the feature that is
determination information of the person shown in the end area
exceeds a reference value, and depending on the determination
result, performs a determination process on whether or not the
person is a person moving toward the own gate G1.
[0091] As an example, the target extraction unit 11 first extracts
the face area of a person located in an end area of a captured
image. Extraction of the face area of a person is performed by
determining the position, color, and the like with respect to the
entire image of a moving person. Then, the target extraction unit
11 detects the feature of the person that is determination
information, from the face area. In the present embodiment, the
feature of a person as determination information is a distance
between the eyes of a person (inter-eye distance).
[0092] Then, the target determination unit 12 of the present
embodiment checks whether or not the inter-eye distance that is
determination information detected by the target extraction unit
11 exceeds a reference value. Here, in the present embodiment, the
reference value is set to a value enabling determination on whether
or not a person shown in an end area of a captured image of the own
gate G1 is located on the lane R1 of the own gate G1. For example,
in the state of FIG. 10 described above, it is considered that the
inter-eye distance of the person P12 shown in an end area of the
captured image of the own gate G1 and moving toward the own gate G1
is larger than the inter-eye distance of the person P20 of the
adjacent gate G2. In consideration of this, the reference value is
set to a value between the inter-eye distance of the person P12
shown in an end area of the captured image of the own gate G1 and
moving toward the own gate G1 and the inter-eye distance of the
person P20 of the adjacent gate G2.
[0093] Then, when the inter-eye distance of the person shown in the
end area of the captured image of the gate G1 detected by the
target extraction unit 11 exceeds the reference value, the target
determination unit 12 determines that the person is a person moving
toward the own gate G1, and handles the person as a target of a
collation process. That is, when the inter-eye distance of a person
exceeds the reference value, comparison with a captured image of
the adjacent gate G2 is not performed, and a collation process is
directly performed on the person as a person moving toward the own
gate G1.
[0094] On the other hand, when the inter-eye distance of the person
shown in the end area of the captured image of the gate G1 detected
by the target extraction unit 11 is equal to or smaller than the
reference value, since there is a possibility that the person is a
person moving toward the adjacent gate G2, the target determination
unit 12 performs comparison with a captured image of the adjacent
gate G2. That is, as described in the first exemplary embodiment,
it is checked whether or not any person is shown in the center area
of the captured image of the adjacent gate G2. When a person is
shown in the center area of the captured image of the adjacent gate
G2, it is determined that the person shown in the end area of the
captured image of the own gate G1 is a person moving toward the
adjacent gate G2, and the person is excluded from the target of a
collation process. When no person is shown in the center area of
the captured image of the adjacent gate G2, it is determined that
the person shown in the end area of the captured image of the own
gate G1 is a person moving toward the own gate G1, and the person
is handled as a target of a collation process.
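The third embodiment's decision logic might be sketched as below; the pixel values for the inter-eye distance and the reference value are illustrative assumptions, not values from the patent.

```python
# Sketch of the inter-eye-distance decision of paragraphs [0092]-[0094].
def handle_end_area_person(inter_eye_distance_px: float,
                           adjacent_center_occupied: bool,
                           reference_px: float = 60.0) -> str:
    if inter_eye_distance_px > reference_px:
        # Large face -> near the own gate's camera -> own-lane person;
        # no comparison with the adjacent gate's image is needed.
        return "collate"
    # Small face: may belong to the adjacent lane, so fall back to the
    # cross-gate check of the first exemplary embodiment.
    return "exclude" if adjacent_center_occupied else "collate"

print(handle_end_area_person(75.0, True))   # P12-like case -> collate
print(handle_end_area_person(40.0, True))   # P20-like case -> exclude
```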
[0095] Through the process described above, in the example of FIGS.
10, 11A and 11B, for example, the person P12 is shown in an end
area of the captured image of the own gate G1, but since the person
P12 is close to the imaging device C1 of the gate G1, the inter-eye
distance that is determination information showing the size of the
person P12 is larger than the reference value. Therefore, it is
determined that the person P12 is a person moving toward the own
gate G1, and the person P12 is handled as a target of a collation
process. That is, even if the person P20 moving toward the adjacent
gate G2 exists, the person P12 of the own gate G1 can be handled as
a target of a collation process.
[0096] On the other hand, in the example of FIGS. 8, 9A and 9B, for
example, the person P20 is shown in an end area of the captured
image of the own gate G1, but since the person P20 is far away from
the imaging device C1 of the gate G1, the inter-eye distance that is
determination information showing the size of the person P20 is
equal to or smaller than the reference value. Therefore, it is
checked whether or not the person P20 is shown in the center area
of the captured image of the adjacent gate G2. In that case, since
a person is shown in the center area of the captured image of the
adjacent gate G2, it is determined that the person shown in the end
area of the captured image of the own gate G1 is a person moving
toward the adjacent gate G2, and the person is excluded from the
target of a collation process.
[0097] Note that while description has been given on the case where
the determination information of a person shown in the end area of
the captured image of the own gate G1 is an inter-eye distance, the
feature of the person as the determination information may be a
distance between other parts of the face of a person, or a value
representing the size of another part of a person. However, the
feature of a person that is determination information is not
necessarily limited to a value representing the size of a part of a
person.
Fourth Exemplary Embodiment
[0098] Next, a fourth exemplary embodiment of the present invention
will be described with reference to FIGS. 13 to 16B. FIG. 13 is a
block diagram showing the configuration of a face authentication
system of the present embodiment, and FIGS. 14 to 16B are diagrams
for explaining a processing operation of the face authentication
system.
[0099] A face authentication system 10 of the present embodiment
has a configuration similar to that of the first exemplary
embodiment described above. However, as described in the first and
second exemplary embodiments, the face authentication system 10
also has a function of setting overlapping areas in which the
captured images of adjacent gates overlap each other and which are
compared with each other to check whether or not the same person is
located. The configuration different from that of the first and
second exemplary embodiments will be mainly described in detail
below.
[0100] As illustrated in FIG. 13, a face authentication system 10
of the present embodiment includes an area setting unit 15 (area
setting means) constructed by execution of a program by the
arithmetic unit. The area setting unit 15 first acquires a captured
image from the imaging device C1 of the own gate G1, and also
acquires a captured image from the imaging device C2 of the
adjacent gate G2. At this time, the area setting unit 15
communicates with the face authentication system 20 of the adjacent
gate G2 via the communication unit 16, and acquires a captured
image captured at the gate G2 at a timing that is the same as the
timing (for example, the same time) when the captured image is
captured at the gate G1.
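A rough sketch of this time-aligned acquisition, in Python, is shown
below; the interfaces capture_with_timestamp and frame_nearest are
hypothetical stand-ins for the imaging device C1 and for the query
sent to the adjacent system via the communication unit 16, and the
tolerance value is assumed.

    def acquire_synchronized_pair(own_camera, adjacent_client, tolerance_s=0.05):
        """Return captured images of the own and adjacent gates taken at
        (approximately) the same timing, or None if none is available."""
        own_frame, own_ts = own_camera.capture_with_timestamp()
        # Ask the adjacent gate's face authentication system for the frame
        # captured nearest to our timestamp.
        adj_frame, adj_ts = adjacent_client.frame_nearest(own_ts)
        if abs(own_ts - adj_ts) > tolerance_s:
            return None  # no sufficiently simultaneous pair exists
        return own_frame, adj_frame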
[0101] Then, the area setting unit 15 extracts persons from the
acquired captured image of the gate G1 and the captured image of
the gate G2. A person is extracted by, for example, detecting a
moving object, or by making a determination from the position of a
characteristically shaped part, or of a color, relative to the
overall shape or the overall image. Then, the
area setting unit 15 checks whether or not the same person is shown
in both captured images. At this time, the area setting unit 15
specifies the face areas of the respective persons shown in the
captured images, and detects feature amounts required for collation
from the face areas. Then, from the feature amounts of the persons,
it is determined whether or not the persons are the same person.
Note that any process may be used as a process of determining the
sameness of the persons.
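As one possible sketch of this same-person determination, assuming
hypothetical detect_faces and embed functions for face detection and
feature-amount extraction, and an assumed similarity threshold:

    import numpy as np

    SIMILARITY_THRESHOLD = 0.6  # assumed decision threshold

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def same_person_pairs(image_g1, image_g2, detect_faces, embed):
        """Yield pairs of face regions judged to show the same person in
        the captured images of the gates G1 and G2."""
        faces_g2 = [(f2, embed(f2)) for f2 in detect_faces(image_g2)]
        for f1 in detect_faces(image_g1):
            e1 = embed(f1)
            for f2, e2 in faces_g2:
                if cosine_similarity(e1, e2) >= SIMILARITY_THRESHOLD:
                    yield f1, f2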
[0102] Then, when the area setting unit 15 determines that the same
person is shown in the captured images, the area setting unit 15
sets areas surrounding the portions where the same person is shown
in the respective captured images as overlapping areas that overlap
each other in the respective captured images. At this time, the set
overlapping area is an area obtained by expanding the part in which
the same person is detected up to the upper and lower ends of the
captured image and further expanding the area by a predetermined
range in the horizontal direction of the captured image. In
particular, in the captured image of the own gate G1, the part where
the same person is extracted is expected to be positioned near one
end in the horizontal direction of the captured image. Accordingly,
a range slightly expanded in the horizontal direction with respect
to the part where the person is extracted is set as an overlapping
area (first area), and an area of the same size near the other end
is also set as an overlapping area (first area). On the other hand,
in the captured image of the adjacent gate G2, the part where the
same person is extracted is expected to be positioned near the
center in the horizontal direction of the captured image.
Accordingly, a range expanded in the horizontal direction so as to
be larger than the end overlapping area, with respect to the part
where the person is extracted, is set as an overlapping area (second
area). That is, in the present embodiment, the center area is set to
be larger than the end area. However, the method of setting the
sizes of the end area and the center area is not limited to the
method described above.
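Under the assumptions that the extracted part is available as a
bounding box and that the horizontal margins take the assumed pixel
values below, the rule just described might be sketched as:

    END_MARGIN = 30     # assumed horizontal padding for the end area (first area)
    CENTER_MARGIN = 80  # assumed, larger padding for the center area (second area)

    def expand_to_overlap(box, image_size, margin):
        """Stretch the extracted part to the full image height and pad it
        horizontally; box = (x_left, y_top, x_right, y_bottom)."""
        width, height = image_size
        x_left, _, x_right, _ = box
        return (max(0, x_left - margin), 0, min(width, x_right + margin), height)

    def set_overlapping_areas(box_g1, size_g1, box_g2, size_g2):
        first_area = expand_to_overlap(box_g1, size_g1, END_MARGIN)
        # An area of the same size is mirrored at the opposite horizontal
        # end of the own gate's image.
        w, h = size_g1
        mirrored = (w - first_area[2], 0, w - first_area[0], h)
        second_area = expand_to_overlap(box_g2, size_g2, CENTER_MARGIN)
        return first_area, mirrored, second_area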
[0103] As an example, in the state shown in FIG. 14, first, a
captured image of the capturing area between lines C1a is acquired
from the imaging device C1 of the own gate G1, and a captured image
of the capturing area between lines C2a is acquired from the imaging
device C2 of the adjacent gate G2. It is assumed that, by extracting
the same person from the captured images of FIGS. 15A and 15B, the
same person P20 is detected in an end area of the captured image of
the own gate G1 and in the center area of the captured image of the
adjacent gate G2. In that case, as for the captured image of the own
gate G1, the hatched area, obtained by expanding the part where the
person P20 is extracted in the up-and-down direction, is further
expanded slightly in the horizontal direction, and the resulting
area defined by the lines C1a and C1b is set as an overlapping area,
as illustrated in FIG. 16A. Further, in the end area on the opposite
side in the horizontal direction, an overlapping area defined by the
lines C1a and C1b is likewise set. On the other hand, as for the
captured image of the adjacent gate G2, the hatched area, obtained
by expanding the part where the person P20 is extracted in the
up-and-down direction, is further expanded in the horizontal
direction, and the resulting area defined by the lines C2b is set as
an overlapping area, as illustrated in FIG. 16B.
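Using the sketch above, the situation of FIGS. 16A and 16B might be
reproduced as follows, with all coordinates and image sizes assumed
for illustration:

    # Person P20 detected near one end of G1's image and near the center
    # of G2's image (coordinates and image sizes are assumed).
    first, mirrored, second = set_overlapping_areas(
        box_g1=(560, 120, 620, 400), size_g1=(640, 480),
        box_g2=(290, 100, 360, 420), size_g2=(640, 480))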
[0104] As described above, in the present embodiment, the area
setting unit 15 of the face authentication system 10 corresponding
to the gate G1 automatically sets an end area of a captured image of
the own gate G1 and the center area of a captured image of the
adjacent gate G2 as overlapping areas. Then, as described in the
first and second exemplary embodiments, the face authentication
system 10 performs processing of extracting a person from the end
area of a captured image of the own gate G1 and the center area of a
captured image of the adjacent gate G2 that have been set, and of
determining whether or not the same person exists in both
areas.
[0105] According to the face authentication system 10 of the fourth
exemplary embodiment, on the basis of the person in a captured
image corresponding to a given gate and a person in a captured
image corresponding to another gate, it is possible to
automatically set overlapping areas between the captured images
corresponding to the gates. As a result, it is possible to set
overlapping areas appropriately between the imaging devices C1 and
C2 of the gates G1 and G2 adjacent to each other, to realize easy
use of the collation system, and to suppress erroneous recognition
with higher accuracy.
[0106] Note that while an example of setting overlapping areas by
using images of persons who are about to pass through the gates G1
and G2 has been described, it is possible to set overlapping areas
by using any objects shown in the captured images, without being
limited to persons. For example, it is possible to determine the
sameness of articles or landscapes shown in the captured images,
and on the basis of the determination result, to set overlapping
ranges of the respective captured images.
Fifth Exemplary Embodiment
[0107] Next, a fifth exemplary embodiment of the present invention
will be described with reference to FIGS. 17 and 18. FIG. 17 is a
block diagram illustrating a configuration of a collation system
according to the fifth exemplary embodiment. FIG. 18 is a block
diagram illustrating a configuration of an information processing
apparatus according to the fifth exemplary embodiment. Note that
the present embodiment shows the outline of the configuration of
the face authentication system described in the first to fourth
exemplary embodiments.
[0108] As illustrated in FIG. 17, a collation system 100 of the
present embodiment includes
[0109] an imaging means 110 for acquiring a captured image of a
pre-passage side area with respect to each of the gates arranged in
parallel with each other, and
[0110] a collation means 120 for performing, for each of the gates,
a collation process on the captured image of the pre-passage side
area between a previously registered target and a target in the
captured image.
[0111] Then, the collation means 120 performs a collation process
on the basis of a target in a captured image corresponding to a
given gate and a target in a captured image corresponding to
another gate.
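A condensed Python skeleton of this configuration, in which detect,
appears_at_other_gate, and match are hypothetical placeholders for
target extraction, the cross-gate check, and the collation against
previously registered targets, might be:

    class CollationSystem:
        """Sketch of collation system 100: one imaging means per gate and
        a collation means that consults the other gates' captured images."""

        def __init__(self, cameras, registered_targets):
            self.cameras = cameras                      # imaging means 110
            self.registered_targets = registered_targets

        def collate(self, gate_index, detect, appears_at_other_gate, match):
            image = self.cameras[gate_index].capture()
            for target in detect(image):
                # A target also shown in a captured image corresponding to
                # another gate is excluded from the collation process.
                if appears_at_other_gate(target, gate_index):
                    continue
                yield target, match(target, self.registered_targets)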
[0112] Further, in the present embodiment, the imaging means 110
may be removed from the collation system 100 illustrated in FIG.
17.
[0113] That is, an information processing apparatus 200 of the
present embodiment includes
[0114] a collation means 210 for performing a collation process on
a captured image of a pre-passage side area with respect to each of
gates installed in parallel with each other.
[0115] Then, the collation means 210 performs a collation process
on the basis of a target in a captured image of a given gate and a
target in a captured image corresponding to another gate.
[0116] Note that each of the collation means 120 and 210 may be
constructed by execution of a program by an arithmetic unit, or may
be constructed by an electronic circuit.
[0117] According to the collation system 100 or the information
processing apparatus 200 having the configurations described
above,
[0118] a collation method of performing a collation process on a
captured image of a pre-passage side area of each of gates
installed in parallel with each other, between a previously
registered target and a target in the captured image, is
provided.
[0119] In such a collation method, the collation process is
performed on the basis of a target in a captured image of a given
gate and a target in a captured image corresponding to another
gate.
[0120] According to the collation system 100 or the information
processing apparatus 200 described above, a target collation
process is performed on the basis of a target in a captured image
corresponding to a given gate and a target in a captured image
corresponding to another gate. Therefore, even in the case where a
target who is about to pass through another gate is shown in a
captured image corresponding to the given gate, it is possible to
determine that such a target is a target who is about to pass
through the other gate. Thereby, such a target can be excluded from
the collation process of the given gate. As a result, erroneous
recognition of a target who is about to pass through a gate can be
suppressed.
SUPPLEMENTARY NOTES
[0121] The whole or part of the exemplary embodiments disclosed
above can be described as, but not limited to, the following
supplementary notes. Hereinafter, outlines of the configurations of
a collation system, an information processing apparatus, a program,
and a collation method, according to the present invention, will be
described. However, the present invention is not limited to the
configurations described below.
Supplementary Note 1
[0122] A collation system comprising:
[0123] imaging means for acquiring a captured image of a
pre-passage side area with respect to each of gates arranged in
parallel with each other; and
[0124] collation means for performing a collation process on the
captured image of the pre-passage side area for each of the gates,
between a previously registered target and a target included in the
captured image, wherein
[0125] the collation means performs the collation process on a
basis of a target in the captured image corresponding to one of the
gates and a target in the captured image corresponding to another
one of the gates.
[0126] With the configuration described above, the collation system
performs a collation process on a target, on the basis of a target
in a captured image corresponding to a given gate and a target in a
captured image corresponding to another gate. Therefore, even in
the case where a target who is about to pass through another gate
is shown in a captured image corresponding to a given gate, it is
possible to determine that such a target is a target who is about
to pass through the other gate. Thereby, such a target can be
excluded from the collation process of the given gate. As a result,
erroneous recognition of a target who is about to pass through a
gate can be suppressed.
Supplementary Note 2
[0127] The collation system according to supplementary note 1,
wherein
[0128] the collation means performs the collation process on a
basis of a target located in a predetermined area in the captured
image corresponding to the one of the gates and the target in the
captured image corresponding to the other one of the gates.
Supplementary Note 3
[0129] The collation system according to supplementary note 2,
wherein
[0130] the collation means performs the collation process on a
basis of a target located in a first area of the captured image
corresponding to the one of the gates and a target located in a
second area of the captured image corresponding to the other one of
the gates.
Supplementary Note 4
[0131] The collation system according to supplementary note 3,
wherein
[0132] when the target located in the first area of the captured
image corresponding to the one of the gates is also located in the
second area of the captured image corresponding to the other one of
the gates, the collation means excludes the target located in the
first area of the captured image corresponding to the one of the
gates from the collation process.
Supplementary Note 5
[0133] The collation system according to supplementary note 4,
wherein
[0134] when a target located in an end area in a horizontal
direction that is the first area of the captured image
corresponding to the one of the gates is also located in a center
area in a horizontal direction that is the second area of the
captured image corresponding to the other one of the gates, the
collation means excludes the target located in the end area in the
horizontal direction of the captured image corresponding to the one
of the gates from the collation process.
Supplementary Note 6
[0135] The collation system according to supplementary note 4 or 5,
wherein
[0136] when the target located in the first area of the captured
image corresponding to the one of the gates is also located in the
second area of the captured image corresponding to the other one of
the gates at a timing when the target is located in the first area,
the collation means excludes the target located in the first area
of the captured image corresponding to the one of the gates from
the collation process.
[0137] According to the configuration described above, the
collation system excludes a target shown in an overlapping manner
by comparing a target shown in a predetermined area of the captured
image corresponding to the given gate with a target shown in the
captured image corresponding to the other gate, and performs the
collation process only on the target shown individually. In
particular, when a target located in an end area (first area) in
the horizontal direction of the captured image corresponding to the
given gate is also shown in the center area (second area) in the
horizontal direction of the captured image corresponding to the
other gate, the collation process is not performed on such a
target. On the other hand, when a target located in an end area
(first area) in the horizontal direction of the captured image
corresponding to the given gate is not shown in the center area
(second area) in the horizontal direction of the captured image
corresponding to the other gate, the collation process is performed
on such a target. As a result, when a target is shown in the
captured image corresponding to the given gate and in the captured
image corresponding to the other gate in an overlapping manner,
such a target can be excluded from the collation process.
Therefore, erroneous recognition of a target who is about to pass
through a gate can be suppressed.
Supplementary Note 7
[0138] The collation system according to supplementary note 4 or 5,
wherein
[0139] when the collation means determines that the target located
in the first area of the captured image corresponding to the one of
the gates is also located in the second area of the captured image
corresponding to the other one of the gates on a basis of a feature
amount of the target located in the first area of the captured
image corresponding to the one of the gates and a feature amount of
the target located in the second area of the captured image
corresponding to the other one of the gates, the collation means
excludes the target located in the first area of the captured image
corresponding to the one of the gates from the collation
process.
Supplementary Note 8
[0140] The collation system according to supplementary note 7,
wherein
[0141] when the collation means determines that the target located
in the first area of the captured image corresponding to the one of
the gates is also located in the second area of the captured image
corresponding to the other one of the gates on a basis of a feature
amount of a part of a face of a person who is the target located in
the first area of the captured image corresponding to the one of
the gates and a feature amount of a part of a face of a person who
is the target located in the second area of the captured image
corresponding to the other one of the gates, the collation means
excludes the target located in the first area of the captured image
corresponding to the one of the gates from the collation
process.
[0142] According to the configuration described above, the
collation system can determine whether or not the same target is
shown in the captured images of the gates in an overlapping manner,
by extracting and comparing the feature amounts of the targets in
the captured images corresponding to the gates. Then, a target
shown in the captured images of the given gate and the other gate
in an overlapping manner can be excluded from the collation
process, and erroneous recognition of a target who is about to pass
through a gate can be suppressed. Moreover, the feature amounts of
the targets extracted for determining overlapping of the target can
be used for the collation process. This enhances the efficiency of
the process.
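A small sketch of this reuse, with a hypothetical embed function and
an assumed cache keyed by a per-target identifier:

    feature_cache = {}

    def feature_for(target_id, face_image, embed):
        """Extract the feature amount once; both the overlap determination
        and the collation process then reuse the cached value."""
        if target_id not in feature_cache:
            feature_cache[target_id] = embed(face_image)
        return feature_cache[target_id]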
Supplementary Note 9
[0143] The collation system according to any of supplementary notes
3 to 8, further comprising area setting means for setting the first
area of the captured image corresponding to the one of the gates
and the second area of the captured image corresponding to the
other one of the gates, on a basis of the target in the captured
image corresponding to the one of the gates and the target in the
captured image corresponding to the other one of the gates acquired
at a timing when the target is located in the captured image
corresponding to the one of the gates.
Supplementary Note 10
[0144] The collation system according to supplementary note 9,
wherein
[0145] when the target in the captured image corresponding to the
one of the gates and the target in the captured image corresponding
to the other one of the gates acquired at a timing when the target
is located in the captured image corresponding to the one of the
gates are identical, the area setting means sets a periphery of a
position of the target in the captured image corresponding to the
one of the gates as the first area and sets a periphery of a
position of the target in the captured image corresponding to the
other one of the gates as the second area, on a basis of the
position of the target in the captured image corresponding to the
one of the gates and the position of the target in the captured
image corresponding to the other one of the gates.
[0146] According to the configuration described above, the
collation system can automatically set overlapping areas between
the captured images corresponding to the gates, on the basis of the
target in the captured image corresponding to the given gate and
the target in the captured image corresponding to the other gate.
As a result, it is possible to realize easy use of the collation
system, and to suppress erroneous recognition with high
accuracy.
Supplementary Note 11
[0147] An information processing apparatus comprising:
[0148] collation means for performing a collation process on a
captured image of a pre-passage side area with respect to each of
gates installed in parallel with each other, between a previously
registered target and a target in the captured image, wherein
[0149] the collation means performs the collation process on a
basis of a target in the captured image corresponding to one of the
gates and a target in the captured image corresponding to another
one of the gates.
Supplementary Note 12
[0150] The information processing apparatus according to
supplementary note 11, wherein
[0151] the collation means performs the collation process on a
basis of a target located in a predetermined area of the captured
image corresponding to the one of the gates and the target in the
captured image corresponding to the other one of the gates.
Supplementary Note 13
[0152] A program for causing an information processing apparatus to
realize
[0153] collation means for performing a collation process on a
captured image of a pre-passage side area with respect to each of
gates installed in parallel with each other, between a previously
registered target and a target in the captured image, wherein
[0154] the collation means performs the collation process on a
basis of a target in the captured image corresponding to one of the
gates and a target in the captured image corresponding to another
one of the gates.
Supplementary Note 14
[0155] The program according to supplementary note 13, wherein
[0156] the collation means performs the collation process on a
basis of a target located in a predetermined area of the captured
image corresponding to the one of the gates and the target in the
captured image corresponding to the other one of the gates.
Supplementary Note 15
[0157] A collation method comprising
[0158] performing a collation process on a captured image of a
pre-passage side area with respect to each of gates installed in
parallel with each other, between a previously registered target
and a target in the captured image, wherein
[0159] the collation process is performed on a basis of a target in
the captured image corresponding to one of the gates and a target
in the captured image corresponding to another one of the
gates.
Supplementary Note 16
[0160] The collation method according to supplementary note 15,
wherein
[0161] the collation process is performed on a basis of a target
located in a predetermined area of the captured image corresponding
to the one of the gates and the target in the captured image
corresponding to the other one of the gates.
Supplementary Note 17
[0162] The collation method according to supplementary note 16, wherein
[0163] the collation process is performed on a basis of a target
located in a first area of the captured image corresponding to the
one of the gates and a target located in a second area of the
captured image corresponding to the other one of the gates.
Supplementary Note 18
[0164] The collation method according to supplementary note 17, further
comprising
[0165] setting the first area of the captured image corresponding
to the one of the gates and the second area of the captured image
corresponding to the other one of the gates, on a basis of the
target in the captured image corresponding to the one of the gates
and the target in the captured image corresponding to the other one
of the gates acquired at a timing when the target is located in the
captured image corresponding to the one of the gates, wherein
[0166] the collation process is performed on a basis of the target
located in the first area of the captured image corresponding to
the one of the gates and the target located in the second area of
the captured image corresponding to the other one of the gates.
[0167] It should be noted that the program described above may be
stored in a storage device or stored on a computer-readable storage
medium. The storage medium is a portable medium such as a flexible
disk, an optical disk, a magneto-optical disk, or a semiconductor
memory, for example.
[0168] While the present invention has been described with
reference to the exemplary embodiments described above, the present
invention is not limited to the above-described embodiments. The
form and details of the present invention can be changed within the
scope of the present invention in various manners that can be
understood by those skilled in the art.
[0169] The present invention is based upon and claims the benefit
of priority from Japanese patent application No. 2017-179064, filed
on Sep. 19, 2017, the disclosure of which is incorporated herein in
its entirety by reference.
REFERENCE SIGNS LIST
[0170] 10 face authentication system
[0171] 11 target extraction unit
[0172] 12 target determination unit
[0173] 13 collation unit
[0174] 14 gate control unit
[0175] 15 area setting unit
[0176] 16 communication unit
[0177] 17 collation data storage unit
[0178] 20 face authentication system
[0179] 100 collation system
[0180] 110 imaging means
[0181] 120 collation means
[0182] 200 information processing apparatus
[0183] 210 collation means
[0184] C1, C2, C3 imaging device
[0185] D1 display device
[0186] G1, G2, G3 gate
* * * * *