U.S. patent application number 15/039956 was filed with the patent office on 2016-12-29 for method and system for face image recognition.
This patent application is currently assigned to BEIJING SENSE TIME TECHNOLOGY DEVELOPMENT CO., LTD. The applicant listed for this patent is BEIJING SENSE TIME TECHNOLOGY DEVELOPMENT CO., LTD. Invention is credited to Yi Sun, Xiaoou Tang, Xiaogang Wang.
Application Number: 20160379044 / 15/039956
Document ID: /
Family ID: 53198257
Filed Date: 2016-12-29

United States Patent Application 20160379044
Kind Code: A1
Tang; Xiaoou; et al.
December 29, 2016
METHOD AND SYSTEM FOR FACE IMAGE RECOGNITION
Abstract
A method for face image recognition is disclosed. The method
comprises generating one or more face region pairs of face images
to be compared and recognized; forming a plurality of feature modes
by exchanging the two face regions of each face region pair and
horizontally flipping each face region of each face region pair;
receiving, by one or more convolutional neural networks, the
plurality of feature modes, each of which forms a plurality of
input maps in the convolutional neural network; extracting, by the
one or more convolutional neural networks, relational features from
the input maps, which reflect identity similarities of the face
images; and recognizing whether the compared face images belong to
the same identity based on the extracted relational features of the
face images. In addition, a system for face image recognition is
also disclosed.
Inventors: Tang; Xiaoou (Shatin, CN); Sun; Yi (Shatin, CN); Wang; Xiaogang (Shatin, CN)
Applicant: BEIJING SENSE TIME TECHNOLOGY DEVELOPMENT CO., LTD., Haidian District, Beijing, CN
Assignee: BEIJING SENSE TIME TECHNOLOGY DEVELOPMENT CO., LTD., Haidian District, Beijing, CN
Family ID: 53198257
Appl. No.: 15/039956
Filed: November 30, 2013
PCT Filed: November 30, 2013
PCT No.: PCT/CN2013/088254
371 Date: May 27, 2016
Current U.S. Class: 382/118
Current CPC Class: G06K 9/00288 20130101; G06K 9/4628 20130101; G06K 9/00281 20130101; G06K 9/4652 20130101; G06K 9/4604 20130101; G06K 9/6257 20130101; G06K 9/66 20130101
International Class: G06K 9/00 20060101 G06K009/00; G06K 9/46 20060101 G06K009/46; G06K 9/62 20060101 G06K009/62; G06K 9/66 20060101 G06K009/66
Claims
1. A method for face image recognition, comprising: generating one
or more face region pairs of face images to be compared and
recognized; forming a plurality of feature modes by exchanging two
face regions of each face region pair and horizontally flipping
each face region of each face region pair; receiving, by one or
more convolutional neural networks, the plurality of feature modes,
each of which forms a plurality of input maps; extracting, by the
one or more convolutional neural networks, one or more identity
relational features from the input maps to form a plurality of
output maps which reflect identity relations of the compared face
images; and recognizing whether the face images belong to the same
identity based on the identity relational features of the face
images.
2. The method according to claim 1, wherein the step of generating
further comprises: detecting a plurality of facial feature points
of the face images to be recognized; aligning the face images to be
recognized according to the detected facial feature
points; and selecting a plurality of regions on the same position
of the aligned face images to generate one or more face region
pairs, respectively.
3. The method according to claim 1, wherein the convolutional
neural network comprises a plurality of convolutional layers, the
relational features comprise local low-level relational features
and global high-level relational features, and the step of
extracting comprises: extracting local low-level relational
features from the input maps in lower convolutional layers of the
convolutional neural network; and extracting global high-level
relational features based on the extracted local low-level
relational features in subsequent layers of the convolutional neural
network which reflect identity similarities of the compared face
images.
4. The method according to claim 1, wherein after the step of
extracting and before the step of recognizing, the method further
comprises: pooling the extracted relational features to obtain stable
and compact relational features.
5. The method according to claim 1, wherein the step of extracting
comprises: extracting, by the same convolutional neural network,
the relational features from the input maps formed by different
input feature modes; or extracting, by different convolutional
neural networks, the relational features from the same region of
different face region pairs.
6. The method according to claim 1, wherein each face region of the
face region pairs comprises a plurality of color channels, each
color channel in each face region forms an input map in the
convolutional neural network.
7. A system for face image recognition, comprising: a memory that
stores executable units, and a processor electronically coupled to
the memory to execute the executable units to perform operations of
the system, wherein, the executable units comprise: a generating
unit configured to generate one or more face region pairs of face
images to be compared and recognized; a forming unit configured to
form a plurality of feature modes by exchanging the two face
regions of each face region pair and horizontally flipping each
face region of each face region pair; one or more convolutional
neural networks configured to receive the plurality of feature
modes, each of which forms a plurality of input maps, and the
convolutional neural networks is further configured to extract
identity relational features hierarchically from the input maps,
which reflect identity similarities of the compared face images;
and a recognizing unit configured to recognize whether the face
images belong to the same identity based on the identity relational
features of the compared face images.
8. The system according to claim 7, wherein the generating unit
comprises: a detecting module configured to detect a plurality of
facial feature points of face images to be recognized; an aligning
module configured to align the face images to be recognized
according to the detected facial feature points; and a selecting
module configured to select one or more regions on the same
position of the aligned face images to be recognized to generate
one or more face region pairs, respectively.
9. The system according to claim 7, wherein the convolutional
neural network comprises a plurality of convolutional layers, the
identity relational features comprise local low-level relational
features and global high-level relational features, and each of the
convolutional layers is further configured to: extract local
low-level relational features from the input maps in lower
convolutional layers of the convolutional neural network; and
extract global high-level relational features based on the
extracted local low-level relational features in the subsequent
layers of the convolutional neural network, which reflect identity
similarities of the face images.
10. The system according to claim 7, further comprising a pooling
unit configured to pool the extracted relational features to obtain
stable and compact relational features.
11. The system according to claim 7, wherein each of the
convolutional layers is further configured to: extract, by the same
convolutional neural network, the relational features from the
input maps formed by different input feature modes; or extract, by
different convolutional neural networks, the relational features
from the same region of different face region pairs.
12. The system according to claim 7, wherein each face region of
the face region pairs comprises a plurality of color channels, each
color channel in each face region forms an input map in the
convolutional neural network.
13. A plurality of convolutional neural networks for extracting
identity relational features for a face image recognition system,
wherein each convolutional neural network comprises a plurality of
convolutional layers, the identity relational features comprise
local low-level relational features and global high-level
relational features, each convolutional neural network is
configured to: receive one particular feature mode of one
particular face region pair from the face image recognition system
to form a plurality of input maps; extract local low-level
relational features from the input maps in lower convolutional
layers of the convolutional neural network; and extract global
high-level relational features based on the extracted local
low-level relational features in subsequent layers of the
convolutional neural network, the extracted global high-level
relational features reflect identity similarities of the compared
face images.
Description
TECHNICAL FIELD
[0001] The present application generally relates to a field of
image processing, in particular, to a method and a system for face
image recognition.
BACKGROUND
[0002] The fundamental task of face image recognition is to verify
whether two compared faces belong to the same identity based on
biological features. Compared with other traditional means of
recognition, such as fingerprint recognition, face image recognition
is accurate, easy to use, difficult to counterfeit, cost-effective
and non-intrusive, and is therefore widely used in safety
applications. Face image recognition has been extensively
studied in recent decades. Existing methods for face image
recognition generally comprise two steps: feature extraction and
recognition. In the feature extraction stage, a variety of
hand-crafted features are used. More importantly, the existing
methods extract features from each face image separately and
compare them later at the face recognition stage. However, some
important correlations between the two compared face images may
have been lost at the feature extraction stage.
[0003] At the recognition stage, classifiers are used to classify
two face images as having the same identity or not, or other models
are employed to compute the similarities of two face images. The
purpose of these models is to separate inter-personal variations
and intra-personal variations. However, all of these models have
shallow structures. To handle large-scale data with complex
distributions, a large number of over-complete features may need to
be extracted from the faces. Moreover, since the feature extraction
stage and the recognition stage are separate, they cannot be
jointly optimized. Once useful information is lost in the feature
extraction stage, it cannot be recovered in the recognition
stage.
SUMMARY
[0004] The present application proposes a solution to directly and
jointly extract relational features from face region pairs derived
from the compared face images under the supervision of face
identities. Both feature extraction and recognition stages are
unified under a single deep network architecture and all the
components may be jointly optimized for face recognition.
[0005] In an aspect of the present application, a method for face
image recognition is disclosed. The method may comprise: generating
one or more face region pairs of face images to be compared and
recognized; forming a plurality of feature modes by exchanging the
two face regions of each face region pair and horizontally flipping
each face region of each face region pair; receiving, by one or
more convolutional neural networks, the plurality of feature modes,
each of which forms a plurality of input maps in each of the
convolutional neural networks; extracting hierarchically, by the
one or more convolutional neural networks, identity relational
features from the input maps, where the extracted global and
high-level identity relational features reflect identity
similarities of the compared face images; and recognizing whether
the face images belong to same identity based on the identity
relational features of the compared face images.
[0006] In another aspect of the present application, a system for
face image recognition is disclosed. The system may comprise a
generating unit, a forming unit, one or more convolutional neural
networks, a pooling unit and a recognizing unit. The generating
unit may be configured to generate one or more face region pairs of
face images to be compared and recognized. The forming unit may be
configured to form a plurality of feature modes by exchanging the
two face regions of each face region pair and horizontally flipping
each face region of each face region pair. The one or more
convolutional neural networks may be configured to receive the
plurality of feature modes, each of which forms a plurality of
input maps, and to extract identity relational features
hierarchically from the plurality of input maps, where the global
and high-level identity relational features reflect identity
similarities of the face images. The pooling unit may be configured
to pool the extracted relational features to derive stable and
compact relational features. The recognizing unit may be configured
to recognize whether the face images belong to the same identity based
on the identity relational features of the face images.
[0007] In another aspect of the present application, a plurality of
convolutional neural networks for extracting identity relational
features for a face image recognition system is disclosed. Each of
the convolutional neural networks may comprise a plurality of
convolutional layers; the relational features may comprise local
low-level relational features and global high-level relational
features. Each of the convolutional neural networks may be
configured to receive a plurality of feature modes from the face
image recognition system. Each feature mode forms a plurality of
input maps in the convolutional neural network. The convolutional
neural networks extract local low-level relational features from
the input maps in lower convolutional layers and extract global
high-level relational features based on the extracted local
low-level relational features in subsequent feature extracting
layers, which reflect identity similarities of the compared face
images.
BRIEF DESCRIPTION OF THE DRAWING
[0008] Exemplary non-limiting embodiments of the invention are
described below with reference to the attached drawings. The
drawings are illustrative and generally not to an exact scale.
[0009] FIG. 1 is a schematic diagram illustrating a system for face
image recognition consistent with some disclosed embodiments.
[0010] FIG. 2 is a schematic diagram illustrating a generating unit
consistent with some disclosed embodiments.
[0011] FIG. 3 is a schematic diagram illustrating architecture of
the system for face image recognition consistent with some
disclosed embodiments.
[0012] FIG. 4 is a schematic diagram illustrating architecture of
the convolutional neural networks consistent with some disclosed
embodiments.
[0013] FIG. 5 is a flowchart illustrating a method for face image
recognition consistent with some disclosed embodiments.
DETAILED DESCRIPTION
[0014] Reference will now be made in detail to exemplary
embodiments, examples of which are illustrated in the accompanying
drawings. When appropriate, the same reference numbers are used
throughout the drawings to refer to the same or like parts.
[0015] FIG. 1 is a schematic diagram illustrating a system 1000 for
face image recognition consistent with some disclosed embodiments.
The system 1000 may include one or more general-purpose computers,
computer clusters, mainstream computers, computing devices dedicated
to providing online contents, or computer networks comprising a group
of computers operating in a centralized or distributed fashion.
[0016] As shown in FIG. 1, the system 1000 according to one
embodiment of the present application may include a generating unit
110, a forming unit 120, one or more convolutional neural networks
130, a pooling unit 140, and a recognizing unit 150.
[0017] The generating unit 110 may be configured to generate one or
more face region pairs of face images to be recognized. In an
embodiment of the present application, the generating unit 110 may
include a detecting module 111, an aligning module 112 and a
selecting module 113, as shown in FIG. 2. The detecting module 111
may detect a plurality of facial feature points of face images to
be recognized. For example, the facial feature points may be the
two eyes centers and the mouth center. The aligning module 112 may
align the face images to be recognized according to the detected
facial feature points. In an embodiment of the present application,
the face images may be aligned by similarity transformation
according to the facial feature points. Furthermore, the selecting
module 113 may select one or more regions on the same position of
the aligned face images to be recognized to generate one or more
face region pairs, respectively. The positions of the selected face
regions can be varied in order to form a plurality of different
face region pairs.
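As an illustration of the alignment step, the following sketch estimates a 2-D similarity transform (scale, rotation, translation) mapping two detected feature points, e.g. the two eye centers, onto canonical positions. The specific coordinates and the two-point formulation are assumptions made for this example, not details taken from this application.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate the 2x3 similarity-transform matrix mapping the two
    points in `src` exactly onto the two points in `dst`. A 2-D
    similarity transform has four degrees of freedom, so two point
    correspondences determine it exactly."""
    (x1, y1), (x2, y2) = src
    (u1, v1), (u2, v2) = dst
    # Model: u = a*x - b*y + tx,  v = b*x + a*y + ty.
    # Four equations in the four unknowns (a, b, tx, ty).
    A = np.array([[x1, -y1, 1, 0],
                  [y1,  x1, 0, 1],
                  [x2, -y2, 1, 0],
                  [y2,  x2, 0, 1]], dtype=float)
    a, b, tx, ty = np.linalg.solve(A, np.array([u1, v1, u2, v2], float))
    return np.array([[a, -b, tx], [b, a, ty]])

# Hypothetical eye centers mapped to hypothetical canonical positions.
M = similarity_transform([(30, 40), (70, 40)], [(25, 35), (75, 35)])
print(M @ np.array([30, 40, 1.0]))  # [25. 35.]
```

The resulting matrix can be fed to any image-warping routine to produce the aligned face image.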
[0018] The forming unit 120 may be configured to form a plurality
of feature modes by exchanging the two face regions of each face
region pair and horizontally flipping each face region of each face
region pair. For example, eight modes may be formed by exchanging
the two face regions and horizontally flipping each face region in
an embodiment.
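The eight feature modes described above (two orderings of the pair, times a horizontal flip or no flip for each region) can be sketched as follows. The region contents are placeholder arrays and the ordering of the modes is an arbitrary choice for illustration.

```python
import numpy as np

def feature_modes(region_a, region_b):
    """Form the eight feature modes of a face region pair by
    exchanging the two regions and horizontally flipping each region.
    Regions are assumed to be (H, W) or (H, W, C) numpy arrays."""
    modes = []
    for first, second in [(region_a, region_b), (region_b, region_a)]:
        for fa in (first, np.flip(first, axis=1)):        # flip first or not
            for fb in (second, np.flip(second, axis=1)):  # flip second or not
                modes.append((fa, fb))
    return modes

a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)
modes = feature_modes(a, b)
print(len(modes))  # 8
```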
[0019] The one or more convolutional neural networks 130 may be
configured to receive the plurality of feature modes. Each feature
mode may form a plurality of input maps. The convolutional neural
networks extract identity relational features hierarchically from
the plurality of input maps. The extracted global and high-level
relational features in higher convolutional layers of the
convolutional neural networks reflect identity similarities of the
compared face images. As shown in FIG. 4, in an embodiment of the
present application, the convolutional neural network 130 may
comprise a plurality of convolutional layers, such as 4
convolutional layers in this embodiment. The convolutional layers
may extract identity relational features hierarchically. In
addition, the relational features may comprise local low-level
relational features and global high-level relational features. Each
of the convolutional neural networks is further configured to
extract local low-level relational features from the input maps in
lower convolutional layers, and to extract global high-level
relational features based on the extracted local low-level
relational features in subsequent feature extracting layers, which
reflect identity similarities of the compared face images.
[0020] Additionally, in an embodiment of the present application,
the convolutional neural networks may be divided into a plurality
of groups, such as 12 groups, with a plurality of convolutional
neural networks, such as five convolutional neural networks, in
each group. Each convolutional neural network takes a pair of
aligned face regions as input. Its convolutional layers extract the
identity relational features hierarchically. Finally, the extracted
relational features pass a fully connected layer and are fully
connected to an output layer, such as the softmax layer, which
indicates whether the two regions belong to the same identity, as
shown in FIG. 4. The input region pairs for convolutional neural
networks in different groups differ in terms of region ranges and
color channels to make their predictions complementary. When the
size of the input regions changes in different groups, the input
map sizes in the following layers of the convolutional neural
network will change accordingly. Although convolutional neural
networks in the same group take the same kind of region pair as
input, they are different in that they are trained with different
bootstraps of the training data. The purpose of constructing
multiple groups of convolutional neural networks is to achieve
robustness of predictions.
[0021] In addition, in an embodiment, a pair of gray regions forms
two input maps of a convolutional neural network, while a pair of
color regions forms six input maps, replacing each gray map with
three maps from RGB channels. The input regions are stacked into
multiple maps instead of being concatenated to form one map, which
enables the convolutional neural network to model the relations
between the two regions from the first convolutional stage.
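A minimal sketch of this stacking, assuming regions arrive as 2-D gray arrays or height-by-width-by-3 RGB arrays (the 31x31 size below is an arbitrary placeholder):

```python
import numpy as np

def stack_input_maps(region_a, region_b):
    """Stack a face region pair into CNN input maps rather than
    concatenating them spatially: a gray pair forms 2 maps, an RGB
    pair forms 6 maps (one map per color channel per region)."""
    def to_maps(region):
        if region.ndim == 2:               # gray region: a single map
            return region[np.newaxis, ...]
        return np.moveaxis(region, -1, 0)  # color: one map per channel
    return np.concatenate([to_maps(region_a), to_maps(region_b)], axis=0)

gray = stack_input_maps(np.zeros((31, 31)), np.zeros((31, 31)))
rgb = stack_input_maps(np.zeros((31, 31, 3)), np.zeros((31, 31, 3)))
print(gray.shape[0], rgb.shape[0])  # 2 6
```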
[0022] According to an embodiment, the operations in each
convolutional layer of the convolutional neural network can be
expressed as

y_j^r = max(0, b_j^r + SUM_i k_ij^r * x_i^r),    Equation (1)

where * denotes convolution, x_i and y_j are the i-th input map and
the j-th output map respectively, k_ij is the convolution kernel
(filter) connecting the i-th input map and the j-th output map, and
b_j is the bias for the j-th output map. max(0, .) is the non-linear
activation function and is operated element-wise. Neurons with such
non-linearity are called rectified linear units. Moreover, the
weights of neurons (including convolution kernels and biases) in the
same map in higher convolutional layers are locally shared. The
superscript r indicates a local region where the weights are shared.
Since faces are structured objects, locally sharing weights in higher
layers allows the network to learn different high-level features at
different locations.
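Equation (1) can be sketched with a naive numpy implementation. For brevity the per-region weight sharing indexed by r is omitted (weights here are shared globally), the convolution is written in cross-correlation form, and the loops are deliberately unoptimized:

```python
import numpy as np

def conv_layer(x, k, b):
    """Sketch of Equation (1): y_j = max(0, b_j + sum_i x_i * k_ij),
    where * is a 'valid' 2-D convolution (cross-correlation form).
    x: (I, H, W) input maps, k: (I, J, kh, kw) kernels, b: (J,) biases."""
    I, H, W = x.shape
    _, J, kh, kw = k.shape
    out = np.zeros((J, H - kh + 1, W - kw + 1))
    for j in range(J):
        acc = np.full(out.shape[1:], b[j])      # start from the bias b_j
        for i in range(I):                      # sum over input maps
            for p in range(out.shape[1]):
                for q in range(out.shape[2]):
                    acc[p, q] += np.sum(x[i, p:p + kh, q:q + kw] * k[i, j])
        out[j] = np.maximum(0.0, acc)           # rectified linear unit
    return out

# Toy check: 2 input maps of ones, one 3x3 kernel pair of ones.
x = np.ones((2, 4, 4))
k = np.ones((2, 1, 3, 3))
y = conv_layer(x, k, np.array([-17.5]))
print(y.shape, y[0, 0, 0])  # (1, 2, 2) 0.5
```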
[0023] According to an embodiment, for example, the first
convolutional layer contains 20 filter pairs. Each filter pair
convolves with the two face regions in comparison, respectively,
and the results are added. For filter pairs in which one filter
varies greatly while the other remains near uniform, features are
extracted from the two input regions separately. For filter pairs
in which both filters vary greatly, some kind of relations between
the two input regions are extracted. Among the latter, some pairs
extract simple relations such as addition or subtraction, while
others extract more complex relations. Interestingly, filters in
some filter pairs are nearly the same as those in some others,
except that the order of the two filters are inversed. This makes
sense since face similarities should be invariant with the order of
the two face regions in comparison.
[0024] According to the embodiment, the output of the convolutional
neural network 130 is represented by a two-way softmax,

y_i = exp(x_i) / (exp(x_1) + exp(x_2)),    Equation (2)

for i = 1, 2, where x_i is the total input to output neuron i, and
y_i is the output of output neuron i. y_i represents a probability
distribution over the two classes, i.e., being the same identity or
not. Such a probability distribution makes it valid to directly
average multiple convolutional neural network outputs without
scaling. The convolutional neural networks are trained by minimizing
-log y_t, where t in {1, 2} denotes the target class. The loss is
minimized by stochastic gradient descent, where the gradient is
calculated by back-propagation.
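A small sketch of the two-way softmax of Equation (2) and the -log y_t training loss. Subtracting the maximum input before exponentiating is a standard numerical-stability trick added here, not a detail from the text; classes are indexed 0 and 1 rather than 1 and 2.

```python
import numpy as np

def two_way_softmax(x):
    """Equation (2): y_i = exp(x_i) / (exp(x_1) + exp(x_2)),
    a probability distribution over same/different identity."""
    e = np.exp(x - np.max(x))  # shift by the max for numerical stability
    return e / e.sum()

def nll_loss(x, t):
    """Training loss -log y_t for target class t in {0, 1}."""
    return -np.log(two_way_softmax(x)[t])

y = two_way_softmax(np.array([0.0, 0.0]))
print(y)  # [0.5 0.5]
```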
[0025] In an embodiment of the present application, each
convolutional neural network of the convolutional neural networks
130 may extract the relational features from a plurality of input
feature modes. In addition, there may be multiple groups of
convolutional neural networks, with multiple convolutional neural
networks in each group, where convolutional neural networks in the
same group extract the identity relational features from the same
face region pair, while convolutional neural networks in different
groups extract features from different face region pairs.
[0026] As shown in FIG. 1, the system 1000 may include the pooling
unit 140, which pools the extracted identity relational features to reduce
the variance of individual features and improve their accuracy for
identity relation predictions. For example, in an embodiment of the
present application, two levels of average pooling over the
identity relational features from the convolutional neural network
outputs are used. As shown in FIG. 3, a layer L1 is formed by
averaging the eight identity relation predictions of the same
convolutional neural network from eight different input feature
modes. A layer L2 is formed by averaging the five pooled
predictions in L1 associated with the five convolutional neural
networks in the same group.
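The two pooling levels can be sketched as plain averages over a hypothetical prediction tensor; the 12 x 5 x 8 shape follows the group, network, and feature-mode counts given in the embodiment, and the prediction values are random placeholders.

```python
import numpy as np

# Hypothetical predictions: 12 groups x 5 networks x 8 feature modes,
# each entry a network's same-identity probability for one mode.
preds = np.random.rand(12, 5, 8)

l1 = preds.mean(axis=2)  # layer L1: average over the 8 feature modes
l2 = l1.mean(axis=1)     # layer L2: average over the 5 networks per group
print(l1.shape, l2.shape)  # (12, 5) (12,)
```

The 12 group-level values in l2 are then passed to the recognizing unit.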
[0027] The recognizing unit 150 may be configured to recognize
whether the face images belong to the same identity based on the
relational features extracted by the convolutional neural network
unit 130, or based on the pooled relational features derived from
the pooling unit 140. The recognizing unit 150 may include a
classifier, such as a Bayesian classifier, a Support Vector Machine,
or a neural network classifier, and the classifier may be configured
to classify the extracted relational features into two classes, i.e.,
belonging to the same identity or not. In the embodiment in FIG. 3,
the recognizing unit 150 is a classification restricted Boltzmann
machine, which takes the output of the multiple groups of
convolutional neural networks, after hierarchical pooling, as input,
and outputs the probability distribution over the two classes, i.e.,
belonging to the same identity or not.
[0028] For example, the classification restricted Boltzmann machine
models the joint distribution between its output neurons y (one out
of C classes), input neurons x (binary), and hidden neurons h
(binary) as P(y, x, h) proportional to e^(-E(y,x,h)), where
E(y, x, h) = -h^T Wx - h^T Uy - b^T x - c^T h - d^T y. Given the
input x, the conditional probability of its output y can be
explicitly expressed as

p(y_c | x) = e^(d_c) PROD_j (1 + e^(c_j + U_jc + SUM_k W_jk x_k)) /
SUM_i e^(d_i) PROD_j (1 + e^(c_j + U_ji + SUM_k W_jk x_k)),    Equation (3)

where c indicates the c-th class.
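Equation (3) can be sketched as follows. Evaluating it in log space is an added numerical-stability choice, and all shapes and parameter values below are illustrative placeholders.

```python
import numpy as np

def class_rbm_predict(x, W, U, c, d):
    """Sketch of Equation (3): the closed-form conditional p(y | x)
    of a classification restricted Boltzmann machine.
    x: (K,) binary input, W: (J, K), U: (J, C),
    c: (J,) hidden biases, d: (C,) class biases."""
    C = d.shape[0]
    # Log of the unnormalized class probabilities; log1p(exp(.))
    # computes log(1 + e^(...)) for each hidden unit.
    log_p = np.array([
        d[i] + np.sum(np.log1p(np.exp(c + U[:, i] + W @ x)))
        for i in range(C)
    ])
    log_p -= log_p.max()        # shift before exponentiating
    p = np.exp(log_p)
    return p / p.sum()          # normalize over the C classes

rng = np.random.default_rng(0)
p = class_rbm_predict(rng.integers(0, 2, 6).astype(float),
                      rng.normal(size=(4, 6)), rng.normal(size=(4, 2)),
                      rng.normal(size=4), rng.normal(size=2))
print(p.sum())  # 1.0
```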
[0029] The large number of convolutional neural networks means that
the system 1000 has a high capacity. Directly optimizing the whole
system would lead to severe over-fitting. Therefore, each
convolutional neural network in the system can first be trained
separately. Then, by fixing all the convolutional neural networks,
the model in the recognizing unit is trained. All the convolutional
neural networks and the model in the recognizing unit may be
trained under supervision with the aim of predicting whether two
faces in comparison belong to the same identity. These two steps
initialize the system 1000 to be near a good local minimum.
Finally, the whole system is fine-tuned by back-propagating errors
from the model in the recognizing unit to all the convolutional
neural networks.
[0030] In one embodiment of the present application, the system
1000 may include one or more processors (not shown). Processors may
include a central processing unit ("CPU"), a graphic processing
unit ("GPU"), or other suitable information processing devices.
Depending on the type of hardware being used, processors can
include one or more printed circuit boards, and/or one or more
microprocessor chips. In addition, the processors are configured to
carry out the computer program instructions stored in a memory so
as to implement the process 5000 as shown in FIG. 5.
[0031] At step S201, the system 1000 may generate one or more face
region pairs of face images to be recognized. In an embodiment of
the present application, the system 1000 may first detect one or
more facial feature points of face images to be recognized. Then,
the system 1000 may align the face images to be recognized
according to the detected one or more facial feature points. Next,
the system 1000 may select one or more regions on the same position
of the aligned face images to be recognized to generate one or more
face region pairs, respectively.
[0032] At step S202, the system 1000 may form a plurality of
feature modes by exchanging two face regions of each face region
pair and horizontally flipping each face region of each face region
pair.
[0033] At step S203, the one or more convolutional neural networks
130 may receive the plurality of feature modes to form a plurality
of input maps in the convolutional neural network and extract one
or more relational features from the input maps to form a plurality
of output maps, which reflect identity relations, i.e., belonging
to the same person or not, of the compared face images.
[0034] At step S204, the system 1000 may pool the extracted identity
relational features, for example by average pooling, to reduce the
variance of the individual features. This step is optional.
[0035] At step S205, the system 1000 may recognize whether the face
images belong to the same identity based on the identity relational
features of the face images.
[0036] The embodiments of the present invention may be implemented
using certain hardware, software, or a combination thereof. In
addition, the embodiments of the present invention may be adapted
to a computer program product embodied on one or more computer
readable storage media (comprising but not limited to disk storage,
CD-ROM, optical memory and the like) containing computer program
codes.
[0037] In the foregoing descriptions, various aspects, steps, or
components are grouped together in a single embodiment for purposes
of illustrations. The disclosure is not to be interpreted as
requiring all of the disclosed variations for the claimed subject
matter. The following claims are incorporated into this Description
of the Exemplary Embodiments, with each claim standing on its own
as a separate embodiment of the disclosure.
[0038] Moreover, it will be apparent to those skilled in the art
from consideration of the specification and practice of the present
disclosure that various modifications and variations can be made to
the disclosed systems and methods without departing from the scope
of the disclosure, as claimed. Thus, it is intended that the
specification and examples be considered as exemplary only, with a
true scope of the present disclosure being indicated by the
following claims and their equivalents.
* * * * *