U.S. patent application number 17/462364 was filed with the patent office on 2021-08-31 and published on 2021-12-23 as publication number 20210400208 for an image processing system, image processing method and program storage medium for protecting privacy. This patent application is currently assigned to NEC Corporation. The applicant listed for this patent is NEC Corporation. The invention is credited to Ryo KAWAI.
United States Patent Application 20210400208
Kind Code: A1
Application Number: 17/462364
Family ID: 1000005810923
Inventor: KAWAI, Ryo
Publication Date: December 23, 2021
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD AND PROGRAM
STORAGE MEDIUM FOR PROTECTING PRIVACY
Abstract
An image processing system includes: a receiving unit configured
to receive an input of a plurality of image frames constituting a
video from an imaging apparatus; a detection unit configured to
detect a feature point included in an image frame to be processed
in the plurality of image frames; and an output unit configured to
output an output image obtained by superimposing an image frame to
be processed of an area detected as a feature point on a background
image generated from at least some of a plurality of image
frames.
Inventors: KAWAI, Ryo (Tokyo, JP)
Applicant: NEC Corporation, Tokyo, JP
Assignee: NEC Corporation, Tokyo, JP
Family ID: 1000005810923
Appl. No.: 17/462364
Filed: August 31, 2021
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Continued by
16/808,582 | Mar 4, 2020 | 11,159,745 | 17/462,364 (present application)
16/295,125 | Mar 7, 2019 | 10,623,664 | 16/808,582
15/322,220 | Dec 27, 2016 | 10,432,877 | 16/295,125
PCT/JP2015/003031 | Jun 17, 2015 | — | 15/322,220 (national stage entry)
Current U.S. Class: 1/1

Current CPC Class: G06T 7/13 20170101; G06T 2207/20224 20130101; G06K 9/00744 20130101; G06T 7/194 20170101; G06T 2207/30196 20130101; G06T 7/11 20170101; G06K 9/4604 20130101; H04N 5/265 20130101; G06T 2207/20221 20130101; H04N 5/907 20130101; G06K 9/38 20130101; G06T 2207/10016 20130101; H04N 5/272 20130101; G06T 7/20 20130101

International Class: H04N 5/265 20060101 H04N005/265; G06K 9/00 20060101 G06K009/00; G06K 9/38 20060101 G06K009/38; H04N 5/272 20060101 H04N005/272; G06T 7/11 20060101 G06T007/11; G06T 7/194 20060101 G06T007/194; G06T 7/20 20060101 G06T007/20; G06T 7/13 20060101 G06T007/13; G06K 9/46 20060101 G06K009/46; H04N 5/907 20060101 H04N005/907
Foreign Application Priority Data

Date | Code | Application Number
Jun 30, 2014 | JP | 2014-134961
Claims
1. An image processing system comprising: a memory; and at least one processor coupled to the memory, the at least one processor performing operations to: receive an input of an image frame; detect a feature point from a part of or the whole of the image frame, the feature point being an edge; and output an output image with a first pixel value at a first position of a background image set as a second pixel value, the first position of the background image corresponding to a position of the feature point detected in the image frame, the second pixel value being a value at which the first pixel value is changed to one of at least two different types of values based on a predetermined rule.
2. The image processing system according to claim 1, wherein the at least one processor further performs an operation to: sequentially output the output image.
3. The image processing system according to claim 1, wherein the at least one processor further performs an operation to: detect the feature point of a moving object from a part of or the whole of the image frame.
4. The image processing system according to claim 1, wherein the at least one processor further performs an operation to: perform an analysis of a behavior of a moving object.
5. The image processing system according to claim 4, wherein the
moving object is a person.
6. The image processing system according to claim 4, wherein the
edge includes a contour line of the moving object.
7. An image processing method comprising: receiving an input of an image frame; detecting a feature point from a part of or the whole of the image frame, the feature point being an edge; and outputting an output image with a first pixel value at a first position of a background image set as a second pixel value, the first position of the background image corresponding to a position of the feature point detected in the image frame, the second pixel value being a value at which the first pixel value is changed to one of at least two different types of values based on a predetermined rule.
8. The image processing method according to claim 7, further comprising: sequentially outputting the output image.
9. The image processing method according to claim 7, further comprising: detecting the feature point of a moving object from a part of or the whole of the image frame.
10. The image processing method according to claim 7, further
comprising: performing an analysis of a behavior of a moving
object.
11. The image processing method according to claim 10, wherein the
moving object is a person.
12. The image processing method according to claim 10, wherein the
edge includes a contour line of the moving object.
13. A non-transitory computer-readable storage medium storing a program that, if executed, causes a computer to perform operations comprising: receiving an input of an image frame; detecting a feature point from a part of or the whole of the image frame, the feature point being an edge; and outputting an output image with a first pixel value at a first position of a background image set as a second pixel value, the first position of the background image corresponding to a position of the feature point detected in the image frame, the second pixel value being a value at which the first pixel value is changed to one of at least two different types of values based on a predetermined rule.
14. The storage medium according to claim 13, wherein the program causes the computer to perform operations comprising: sequentially outputting the output image.
15. The storage medium according to claim 13, wherein the program causes the computer to perform operations comprising: detecting the feature point of a moving object from a part of or the whole of the image frame.
16. The storage medium according to claim 13, wherein the program
causes the computer to perform operations comprising: performing an
analysis of a behavior of a moving object.
17. The storage medium according to claim 16, wherein the moving
object is a person.
18. The storage medium according to claim 16, wherein the edge
includes a contour line of the moving object.
Description
[0001] The present application is a continuation application of
Ser. No. 16/808,582 filed on Mar. 4, 2020, which is a continuation
application of Ser. No. 16/295,125 filed on Mar. 7, 2019, U.S. Pat.
No. 10,623,664 issued on Apr. 14, 2020, which is a continuation
application of Ser. No. 15/322,220 filed on Dec. 27, 2016, U.S.
Pat. No. 10,432,877 issued on Oct. 1, 2019, which is a National
Stage Entry of PCT/JP2015/003031 filed on Jun. 17, 2015, which
claims priority from Japanese Patent Application 2014-134961 filed
on Jun. 30, 2014, the contents of all of which are incorporated
herein by reference, in their entirety.
TECHNICAL FIELD
[0002] Some aspects of the present invention relate to an image
processing system, an image processing method, and a program
storage medium.
BACKGROUND ART
[0003] Videos captured by an imaging apparatus such as a monitoring camera are not only highly useful for crime prevention and criminal investigations but can also be used for a variety of applications, such as marketing analyses of customer traffic lines. However, when using a video, it is desirable to protect private information and portrait rights by abstracting the persons appearing in the video.
[0004] On the other hand, in order to analyze the behavior of a person appearing in a video for marketing or the like, it is also desirable to understand what actions the person performs and how the person interacts with background objects.
[0005] To meet both of these needs, it is demanded that a person's concrete actions and the background be visible while the person appearing in the video cannot be identified.
[0006] To meet at least part of this demand, Patent Literature 1 discloses a technique in which a face is detected and that portion is pixelated. Patent Literature 2 discloses a technique in which portions differing from the background are pixelated or painted black. Patent Literature 3 discloses a blurring process as a means for protecting private information. Patent Literature 4 discloses a technique in which a foreground image is synthesized into a background image, or mask processing or the like is performed on the foreground image, depending on a user's authority over the foreground image. Patent Literature 5 discloses a technique in which a person area is specified in an image and changed to another image. Further, Non Patent Literature 1 discloses techniques such as see-through expression and expression of a contour in a specific color.
CITATION LIST
Patent Literature
[0007] PTL 1: Japanese Patent No. 4036051
[0008] PTL 2: Japanese Unexamined Patent Application Publication
No. 2000-000216
[0009] PTL 3: Japanese Patent No. 4701356
[0010] PTL 4: Japanese Unexamined Patent Application Publication
No. 2009-225398
[0011] PTL 5: Japanese Unexamined Patent Application Publication
No. 2013-186838
Non Patent Literature
[0012] NPL 1: Noboru BABAGUCHI, "Privacy Protected Video Surveillance", IPSJ Magazine, Vol. 48, No. 1, pp. 30-36, January 2007
SUMMARY OF INVENTION
Technical Problem
[0013] However, with the method described in PTL 1, pixelating only the face leaves information other than the face, such as clothes and belongings, so private information cannot be completely protected. Although PTL 2 and PTL 3 disclose techniques for mosaicking, blurring, or painting out the whole person, most information about the person's behavior, such as picking up an object, is lost because information in and around the person area is largely removed. With the see-through expression of NPL 1, an individual is easily identified. The contour expression method requires highly precise extraction of the person, and precisely extracting a person's contour is technically difficult.
[0014] Some aspects of the present invention have been made in view
of the above-described problems, and an object of the present
invention is to provide an image processing system, an image
processing method, and a program storage medium whereby an image
for which private information has been suitably protected can be
generated.
Solution to Problem
[0015] An image processing system according to one aspect of the
present invention includes: means for receiving an input of a
plurality of image frames constituting a video from an imaging
apparatus; detection means for detecting a feature point included
in an image frame to be processed in the plurality of image frames;
and means for outputting an output image obtained by superimposing
an image frame to be processed of an area detected as a feature
point on a background image generated from at least some of a
plurality of image frames.
[0016] An image processing method according to one aspect of the
present invention includes: receiving an input of a plurality of
image frames constituting a video from an imaging apparatus;
detecting a feature point included in an image frame to be
processed in the plurality of image frames; and outputting an
output image obtained by superimposing an image frame to be
processed of an area detected as a feature point on a background
image generated from at least some of a plurality of image
frames.
[0017] A storage medium according to one aspect of the present
invention for storing a program causing a computer to execute: a
processing of receiving an input of a plurality of image frames
constituting a video from an imaging apparatus; a processing of
detecting a feature point included in an image frame to be
processed in the plurality of image frames; and a processing of
outputting an output image obtained by superimposing an image frame
to be processed of an area detected as a feature point on a
background image generated from at least some of a plurality of
image frames.
[0018] In the present invention, a "unit", "means", "apparatus", or "system" does not simply mean a physical means; it also includes software realizing the function of the "unit", "means", "apparatus", or "system". A function of one "unit", "means", "apparatus", or "system" may be realized by two or more physical means or apparatuses, and two or more functions of "units", "means", "apparatuses", or "systems" may be realized by one physical means or apparatus.
Advantageous Effects of Invention
[0019] According to the present invention, an image processing
system, an image processing method, and a program storage medium
whereby an image for which private information has been suitably
protected can be generated can be provided.
BRIEF DESCRIPTION OF DRAWINGS
[0020] FIG. 1 is a diagram illustrating an example of a
relationship between an input image frame and a background
image.
[0021] FIG. 2 is a diagram illustrating a summary of processing of
an image processing system according to the present example
embodiment.
[0022] FIG. 3 is a functional block diagram illustrating schematic
configuration of an image processing system according to a first
example embodiment.
[0023] FIG. 4 is a flow chart illustrating a flow of processing of
an image processing system illustrated in FIG. 3.
[0024] FIG. 5 is a block diagram illustrating a hardware
configuration which can implement a detection system illustrated in
FIG. 3.
[0025] FIG. 6 is a functional block diagram illustrating a
schematic configuration of an image processing system according to
a second example embodiment.
EXAMPLE EMBODIMENT
[0026] In the following, example embodiments according to the present invention will be described. In the following description and the drawings referred to, identical or similar components are denoted by identical or similar reference signs.
[0027] (1 First Example Embodiment)
[0028] (1.1 Summary)
[0029] FIGS. 1 to 5 are diagrams illustrating a first example
embodiment. Hereinafter, description will be made with reference to
these drawings.
[0030] The present example embodiment relates to an image processing system which generates an image representing a moving object such as a person from a video captured by an imaging apparatus such as a monitoring camera. In particular, the image processing system according to the present example embodiment is intended to enable analysis of the behavior of a moving object, such as whether a person picks up goods for sale or which direction the person faces, while protecting private information.
[0031] In order to attain the above, the image processing system according to the present example embodiment extracts only an edge area from an image frame to be processed (hereinafter also referred to as an original image). The image processing system then superimposes the original image of the edge area on a background image stored in a storage unit, and outputs the superimposed image. In this way, the face, clothing, or the like of a moving object such as a person does not appear in the output image at a level at which it can be recognized, and therefore, private information can be protected. On the other hand, since a contour line or the like of the moving object is included in the output image as an edge, the person's behavior, such as the direction the person faces or a relationship with background objects such as goods for sale, can be analyzed. Further, even when an edge other than that of a moving object is detected, such an edge corresponds to the background area in the original image and does not differ considerably from the background image on which it is superimposed; therefore, a user who confirms the image does not particularly have a feeling of strangeness.
[0032] The image processing system according to the present example embodiment detects an edge area in the image frame to be processed (original image), but is not limited thereto. For example, the image processing system may instead extract a special point such as a corner point.
[0033] FIG. 1 illustrates an example of image frames input to the image processing system according to the present example embodiment. In the example of FIG. 1, image frames constituting a video, each captured at a different imaging time, are input sequentially. A background image can be generated, for example, by averaging the pixel values of each pixel over a plurality of image frames captured at different imaging times, or by taking, for each pixel, the most frequent value across the image frames. Alternatively, one fixed image frame on which no moving object is captured may be used as the background image.
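As a purely illustrative sketch (not part of the original disclosure), the following Python/NumPy function computes the per-pixel temporal mean; the function name and the assumption of 8-bit frames are ours, and replacing mean() with np.median() roughly corresponds to the most-frequent-value variant described above:

    import numpy as np

    def build_background(frames):
        """Estimate a background image by per-pixel temporal averaging.

        frames: an iterable of H x W (or H x W x 3) uint8 arrays captured
        at different imaging times.
        """
        stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
        return stack.mean(axis=0).astype(np.uint8)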
[0034] Hereinafter, with reference to FIG. 2, a summary of the process up to generation of an output image in the present example embodiment will be described. In the example of FIG. 2, an image frame to be processed (original image) on which a moving object appears and a background image are prepared in advance.
[0035] First, the image processing system specifies the difference area between the image frame to be processed and the background image. More specifically, for example, the pixel value of each pixel of the image frame to be processed is compared with the pixel value of the corresponding pixel of the background image, and the image processing system specifies the area in which the difference between them is not less than a threshold. This area corresponds to the area of the image frame on which a moving object appears.
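A minimal sketch of this per-pixel comparison, assuming NumPy arrays; the threshold value is illustrative, as the embodiment leaves it open:

    import numpy as np

    def difference_area(frame, background, threshold=30):
        """Compare the frame with the background pixel by pixel and return
        a boolean mask that is True where the absolute difference is not
        less than the threshold (the assumed moving-object area).
        """
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        if diff.ndim == 3:
            diff = diff.max(axis=2)  # color input: largest channel difference
        return diff >= threshold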
[0036] Next, the image processing system performs edge detection within the difference area of the image frame to be processed, i.e. the area on which the moving object appears. After that, the image processing system extracts from the image frame to be processed only the portion detected as the edge area. The image processing system then generates an output image by superimposing the extracted portion of the image frame to be processed (original image) on the background image.
[0037] From the output image, it is possible to visually recognize the relationship between the moving object and the background portion and what action the moving object performs. In particular, since the contour portion of the moving object can be confirmed, the direction of the face, the posture, or the like of the moving object can easily be visualized or estimated. Since the background image appears everywhere except the contour portion of the moving object, the face, clothing, or the like cannot be specified from the output image, and as a result, private information can be protected.
[0038] The image processing system of the present example embodiment having such functions can be applied to a safety field, such as monitoring utilizing videos of monitoring cameras, or to a marketing field in which the behavior of moving objects or the like is analyzed.
[0039] (1.2 System Configuration)
[0040] Hereinafter, with reference to FIG. 3, a system
configuration of an image processing system 100 according to the
present example embodiment will be described. FIG. 3 illustrates a
system configuration of the image processing system 100 for
generating an image for which protection of private information has
been considered. The image processing system 100 includes an image
input unit 110, a background image generation unit 120, a
background image database (DB) 130, a difference area extraction
unit 140, an edge detection unit 150, an edge area extraction unit
160, a superimposition unit 170, and an output unit 180.
[0041] The image input unit 110 receives an input of image frames constituting a video, i.e. image frames each having a different imaging time, from an unillustrated imaging apparatus such as a camera.
[0042] The background image generation unit 120 generates a background image from one or more image frames sequentially supplied from the image input unit 110. As described above, for example, the pixel values of each pixel over image frames input during a certain period before the processing time may be averaged, or their mode may be taken, to generate the background image. Alternatively, for example, one image frame in which no moving object is considered to be present may be used as the background image. When a fixed image frame is used as the background image, the image processing system 100 does not necessarily include the background image generation unit 120. The generated background image is stored in the background image DB 130.
[0043] The difference area extraction unit 140 extracts, from the image frame to be processed supplied from the image input unit 110, the difference area in which the difference from the background image generated by the background image generation unit 120 is not less than a certain threshold. As described above, the difference area generally corresponds to the area of the image frame to be processed on which a moving object appears.
[0044] The edge detection unit 150 performs edge detection on the difference area of the image frame to be processed extracted by the difference area extraction unit 140. For the edge detection method used by the edge detection unit 150, a variety of existing methods can be applied.
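As one example of such an existing method, the sketch below applies the Canny detector from OpenCV and masks the result to the difference area; the frame is assumed to be a BGR color image, and the thresholds are illustrative:

    import cv2
    import numpy as np

    def detect_edges(frame, diff_mask, low=50, high=150):
        """Detect edges only inside the difference (moving-object) area."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, low, high)
        edges[~diff_mask] = 0  # discard edges outside the difference area
        return edges > 0       # boolean edge mask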
[0045] The edge area extraction unit 160 extracts from the image frame to be processed only the area detected as an edge by the edge detection unit 150. Here, for example, only part of the detected edge area may be extracted: a head, a hand, or the like of the moving object may be detected and only such a site extracted, or the transparency of the extraction may be changed depending on the site. Alternatively, in order to express an edge corresponding to a predetermined site of a body, or the whole edge, a detected edge area may be separated into segments of a certain length and output segment by segment. As described above, the edge area extraction unit 160 may extract and output the area detected as an edge portion from the image frame to be processed in an aspect that differs depending on the site of the corresponding moving object, or may output the whole detected edge in a form such as a dashed-line expression. Namely, the output method of the edge area extraction unit 160 is not limited to a particular one.
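Purely as an illustration of site-dependent transparency, the following sketch assigns one alpha value to edges inside a hypothetical head bounding box and a lighter value elsewhere; detecting the site itself is outside the scope of this sketch:

    import numpy as np

    def site_alpha_map(edge_mask, head_box, head_alpha=1.0, other_alpha=0.4):
        """Per-pixel transparency that differs by site of the moving object.

        head_box is a hypothetical (y0, y1, x0, x1) bounding box of the head.
        """
        alpha = np.where(edge_mask, other_alpha, 0.0)
        y0, y1, x0, x1 = head_box
        head = np.zeros_like(edge_mask)
        head[y0:y1, x0:x1] = True
        alpha[edge_mask & head] = head_alpha  # render head edges opaquely
        return alpha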
[0046] Alternatively, the transparency may be changed depending on the intensity (degree of importance) of an edge detected by the edge detection unit 150. In the output image generated later by the superimposition unit 170, a strong edge portion of the moving object (foreground), where the degree of importance is considered to be high, is then displayed densely, and a weak edge portion, where the degree of importance is considered to be low, is displayed lightly, so that visibility can be improved.
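A sketch of this intensity-dependent transparency, using the Sobel gradient magnitude as a stand-in for the unspecified edge-intensity measure:

    import cv2
    import numpy as np

    def intensity_alpha(gray_frame, edge_mask):
        """Make transparency follow edge intensity: strong edges are
        displayed densely, weak edges lightly.
        """
        gx = cv2.Sobel(gray_frame, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray_frame, cv2.CV_32F, 0, 1)
        magnitude = np.sqrt(gx * gx + gy * gy)
        alpha = magnitude / (magnitude.max() + 1e-6)  # normalize to [0, 1]
        return np.where(edge_mask, alpha, 0.0)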
[0047] The superimposition unit 170 generates an output image
obtained by superimposing an image frame to be processed of an edge
area output from the edge area extraction unit 160 on a background
image. For example, letting an original image (image frame to be
processed) be fg, a background image be bg, and coordinates of each
image be (x, y), an output image out can be expressed as
follows:
out(x, y) = α · fg(x, y) + (1 − α) · bg(x, y)   [Math. 1]
[0048] Here, α = 1 when (x, y) belongs to an edge area, and α = 0 when (x, y) does not. When a transparency is set for an area of the original image to be superimposed, depending on the site of an edge or the like as described above, α may be set to a real number in the range 0 ≤ α ≤ 1 according to the transparency.
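Expressed as code, Math. 1 amounts to per-pixel alpha blending; the following sketch assumes NumPy arrays and a precomputed per-pixel alpha map:

    import numpy as np

    def superimpose(fg, bg, alpha_map):
        """out(x, y) = alpha * fg(x, y) + (1 - alpha) * bg(x, y)  (Math. 1)

        alpha_map holds alpha for every pixel: 1 inside the edge area,
        0 outside it, or intermediate real values when a transparency
        is applied.
        """
        a = alpha_map[..., np.newaxis] if fg.ndim == 3 else alpha_map
        out = a * fg.astype(np.float32) + (1.0 - a) * bg.astype(np.float32)
        return out.astype(np.uint8)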
[0049] The output unit 180 outputs the generated output image. A variety of output methods may be employed; examples include presenting a video to a user by sequentially displaying output images on a display apparatus, or recording the output images together as a video file on a storage medium.
[0050] In the present example embodiment, the difference area extraction unit 140 extracts the difference area between the image frame and the background image, which corresponds to a moving object, and edge detection is then performed only on the difference area. By performing edge detection only on the area on which a moving object is assumed to appear, it is possible to prevent portions of the image frame belonging to the background area, which is not to be analyzed, from being included in the output image.
[0051] However, the present invention is not limited to the above, and edge detection may be performed without extracting a difference area. In such a case, the edge detection unit 150 performs edge detection on the whole image frame; however, the pixel values of areas other than the moving object are considered not to differ significantly from those of the background image, and therefore, a feeling of strangeness is unlikely to arise in a user who confirms the output image. In this case, the image processing system 100 does not necessarily include the difference area extraction unit 140.
[0052] (1.3 Process Flow)
[0053] Hereinafter, a process flow of the image processing system
100 will be described with reference to FIG. 4. FIG. 4 is a flow
chart illustrating a process flow of the image processing system
100.
[0054] Each processing step in the following may be executed in an
arbitrary sequence or in parallel within the scope of not creating
any inconsistencies in the processing contents, and another step
may be added between the processing steps. Further, a step
described as one step for the sake of convenience may be executed
in a plurality of substeps, and steps described as substeps for the
sake of convenience may be executed as one step.
[0055] The image input unit 110 receives an input of a new image
frame (image frame at the processing time) (S401). The background
image generation unit 120 generates a background image from the
input image frame and a background image stored in the background
image DB 130 which has been generated from an image frame input
before the processing time (S403).
[0056] The difference area extraction unit 140 extracts the difference area between the input image frame and the background image (S405). The difference area is specified, for example, by comparing the pixel value of each pixel of the input image frame with the pixel value of the corresponding pixel of the background image and collecting the pixels for which the difference is not less than a threshold. As described above, the difference area corresponds to the moving-object area on which a moving object appears.
[0057] The edge detection unit 150 performs edge detection on the moving-object area (difference area) extracted by the difference area extraction unit 140 (S407). The edge area extraction unit 160 extracts only the area detected as an edge from the image frame to be processed (S409). The superimposition unit 170 generates an output image by superimposing the extracted edge area of the image frame to be processed on a background image stored in the background image DB 130 (S411).
[0058] The output unit 180 outputs the generated output image to a display apparatus, a storage medium, or the like (S413). By sequentially performing processes S401 to S413, the image processing system 100 can display or store the output images as a video.
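Composing the earlier sketches gives the following illustrative rendering of the flow S401 to S413; update_background and the load()/store() interface of background_db are hypothetical stand-ins for the background image generation unit 120 and the background image DB 130:

    import numpy as np

    def update_background(frame, background_db, rate=0.05):
        """Running-average background update (S403), one frame at a time."""
        bg = background_db.load()
        bg = frame.copy() if bg is None else (
            (1.0 - rate) * bg + rate * frame).astype(np.uint8)
        background_db.store(bg)
        return bg

    def process_stream(frames, background_db):
        """One pass of S401 to S413 per frame, yielding output images."""
        for frame in frames:                                   # S401
            bg = update_background(frame, background_db)       # S403
            mask = difference_area(frame, bg)                  # S405
            edges = detect_edges(frame, mask)                  # S407, S409
            out = superimpose(frame, bg, edges.astype(np.float32))  # S411
            yield out                                          # S413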
[0059] (1.4 Specific Example of Hardware Configuration)
[0060] Hereinafter, one example of a hardware configuration when
the above-described image processing system 100 is realized by a
computer will be described with reference to FIG. 5. As illustrated
in FIG. 5, the image processing system 100 includes a processor
501, a memory 503, a storage apparatus 505, an input interface
(I/F) unit 507, a data I/F unit 509, a communication I/F unit 511,
and a display apparatus 513.
[0061] The processor 501 controls a variety of processes of the
image processing system 100 by executing a program stored in the
memory 503. For example, processes related to the image input unit
110, the background image generation unit 120, the difference area
extraction unit 140, the edge detection unit 150, the edge area
extraction unit 160, the superimposition unit 170, and the output
unit 180 illustrated in FIG. 3 can be realized as a program which
is temporarily stored in the memory 503 and operates mainly on the
processor 501.
[0062] The memory 503 is, for example, a storage medium such as a RAM (Random Access Memory). [0063] The memory 503 temporarily stores the program code of a program executed by the processor 501 and data needed when a program is executed.
[0064] The storage apparatus 505 is, for example, a non-volatile storage medium such as a hard disk or a flash memory. The storage apparatus 505 can store a variety of programs for realizing the functions of the operating system and the image processing system 100, as well as a variety of data including the background image DB 130. Programs and data stored in the storage apparatus 505 are loaded into the memory 503 as needed and referenced by the processor 501.
[0065] The input I/F unit 507 is a device for receiving an input
from a user. Specific examples of the input I/F unit 507 include a
keyboard, a mouse, or a touch panel. The input I/F unit 507 may be
connected to the image processing system 100 via an interface such
as a USB (Universal Serial Bus).
[0066] The data I/F unit 509 is a device for inputting data from
outside the image processing system 100. Specific examples of the
data I/F unit 509 include a drive apparatus for reading data stored
in a variety of storage apparatuses. The data I/F unit 509 may be
provided outside the image processing system 100. In this case, the
data I/F unit 509 is connected to the image processing system 100
via an interface such as a USB.
[0067] The communication I/F unit 511 is a device for performing data communication, by wire or wirelessly, with an apparatus outside the image processing system 100, such as an imaging apparatus (a video camera, a monitoring camera, or a digital camera).
The communication I/F unit 511 may be provided outside the image
processing system 100. In this case, the communication I/F unit 511
is connected to the image processing system 100 via an interface
such as a USB.
[0068] The display apparatus 513 is a device for displaying, for
example, an output image or the like output by the output unit 180.
Specific examples of the display apparatus 513 include a liquid
crystal display or an organic EL (Electro-Luminescence) display.
The display apparatus 513 may be provided outside the image
processing system 100. In this case, the display apparatus 513 is
connected to the image processing system 100 via, for example, a
display cable.
[0069] (1.5 Effects According to the Present Example Embodiment)
[0070] As described above, the image processing system 100
according to the present example embodiment generates an output
image by superimposing only an image frame to be processed
(original image) of an edge area on the background image.
[0071] In this way, a user who visually confirms the output image can confirm the shape of the moving object corresponding to the detected edge area, the relationship between the moving object and the background, and the like. Since areas other than the edge area of the moving object are not included in the output image, who the person appearing in it is cannot be determined from the output image. That is, the output image generated by the image processing system 100 takes private information into consideration.
[0072] A method that displays only the contour portion of a moving object detected by moving-object detection or the like has the problem that, when the detection precision is insufficient, portions other than the moving object appear in the output image. In the method according to the present example embodiment, by contrast, the image frame to be processed of the edge area itself is displayed; therefore, even when a portion other than a moving object is detected as an edge area, that portion is almost assimilated into the background image, and a feeling of strangeness is unlikely to arise in a user who confirms the output image.
[0073] (2 Second Example Embodiment)
[0074] Hereinafter, a second example embodiment will be described
with reference to FIG. 6. FIG. 6 is a block diagram illustrating a
functional configuration of the image processing system 600
according to the present example embodiment. The image processing
system 600 includes an input unit 610, a detection unit 620, and an
output unit 630. The input unit 610 receives an input of a
plurality of image frames constituting a video from an
unillustrated imaging apparatus such as a camera.
[0075] The detection unit 620 detects a feature point contained in
an image frame to be processed in a plurality of image frames.
Here, a feature point may include an edge such as a contour line of
a moving object, for example, a person, or a special point such as
a corner point.
[0076] The output unit 630 outputs an output image obtained by
superimposing an image frame to be processed of an area detected as
a feature point on a background image generated from at least some
of a plurality of image frames.
[0077] By employing such an implementation, the image processing
system 600 according to the present example embodiment can generate
an image for which private information is suitably protected.
[0078] (3 Supplementary Notes)
[0079] The configurations of the example embodiments described above may be combined, or some configurations may be replaced with others. The configuration of the present invention is not limited to the example embodiments described above, and a variety of changes can be made without departing from the scope of the present invention.
[0080] Some or all of the example embodiments described above may
also be described as the following Supplementary notes, but the
present invention is not limited to the following. A program
according to the present invention may be a program which causes a
computer to execute each operation described in each of the
above-described example embodiments.
[0081] (Supplementary note 1)
[0082] An image processing system including: means for receiving an
input of a plurality of image frames constituting a video from an
imaging apparatus; detection means for detecting a feature point
included in an image frame to be processed in the plurality of
image frames; and means for outputting an output image obtained by
superimposing an image frame to be processed of an area detected as
a feature point on a background image generated from at least some
of a plurality of image frames.
[0083] (Supplementary note 2)
[0084] The image processing system according to Supplementary note
1, wherein the feature point is an edge portion.
[0085] (Supplementary note 3)
[0086] The image processing system according to Supplementary note 2, wherein the detection means detects the edge portion in an area in which the difference between the image frame to be processed and the background image is not less than a threshold.
[0087] (Supplementary note 4)
[0088] The image processing system according to Supplementary note 2 or 3, wherein the output image is obtained by superimposing, on the background image, the image frame to be processed of part of the area detected as the edge portion.
[0089] (Supplementary note 5)
[0090] The image processing system according to any one of
Supplementary notes 2 to 4, wherein the output image is obtained by
superimposing an image frame of an area detected as the edge
portion on the background image in an aspect differing depending on
a position.
[0091] (Supplementary note 6)
[0092] The image processing system according to Supplementary note
5, wherein the output image is obtained by superimposing an image
frame of an area detected as the edge portion on the background
image in an aspect differing depending on a site of a moving
object.
[0093] (Supplementary note 7)
[0094] An image processing method including: a step of receiving an
input of a plurality of image frames constituting a video from an
imaging apparatus; a step of detecting a feature point included in
an image frame to be processed in the plurality of image frames;
and a step of outputting an output image obtained by superimposing
an image frame to be processed of an area detected as a feature
point on a background image generated from at least some of a
plurality of image frames.
[0095] (Supplementary note 8)
[0096] The image processing method according to Supplementary note
7, wherein the feature point is an edge portion.
[0097] (Supplementary note 9)
[0098] The image processing method according to Supplementary note 8, wherein the edge portion is detected in an area in which the difference between the image frame to be processed and the background image is not less than a threshold.
[0099] (Supplementary note 10)
[0100] The image processing method according to Supplementary note 8 or 9, wherein the output image is obtained by superimposing, on the background image, the image frame to be processed of part of the area detected as the edge portion.
[0101] (Supplementary note 11)
[0102] The image processing method according to any one of
Supplementary notes 8 to 10, wherein the output image is obtained
by superimposing an image frame of an area detected as the edge
portion on the background image in an aspect differing depending on
a position.
[0103] (Supplementary note 12)
[0104] The image processing method according to Supplementary note
11, wherein the output image is obtained by superimposing an image
frame of an area detected as the edge portion on the background
image in an aspect differing depending on a site of a moving
object.
[0105] (Supplementary note 13)
[0106] A program causing a computer to execute: a processing of
receiving an input of a plurality of image frames constituting a
video from an imaging apparatus; a processing of detecting a
feature point included in an image frame to be processed in the
plurality of image frames; and a processing of outputting an output
image obtained by superimposing an image frame to be processed of
an area detected as a feature point on a background image generated
from at least some of a plurality of image frames.
[0107] (Supplementary note 14)
[0108] The program according to Supplementary note 13, wherein the
feature point is an edge portion.
[0109] (Supplementary note 15)
[0110] The program according to Supplementary note 14, wherein the edge portion is detected in an area in which the difference between the image frame to be processed and the background image is not less than a threshold.
[0111] (Supplementary note 16)
[0112] The program according to Supplementary note 14 or 15, wherein the output image is obtained by superimposing, on the background image, the image frame to be processed of part of the area detected as the edge portion.
[0113] (Supplementary note 17)
[0114] The program according to any one of Supplementary notes 14
to 16, wherein the output image is obtained by superimposing an
image frame of an area detected as the edge portion on the
background image in an aspect differing depending on a
position.
[0115] (Supplementary note 18)
[0116] The program according to Supplementary note 17, wherein the
output image is obtained by superimposing an image frame of an area
detected as the edge portion on the background image in an aspect
differing depending on a site of a moving object.
[0117] The present invention has been described above by way of example embodiments as exemplary examples. However, the present invention is not limited to the above-described example embodiments. In other words, a variety of aspects understandable by those skilled in the art can be applied to the present invention without departing from the scope of the present invention.
[0118] This application claims priority based on Japanese Patent Application No. 2014-134961 filed on Jun. 30, 2014, the entire disclosure of which is incorporated herein by reference.
REFERENCE SIGNS LIST
[0119] 100: Image processing system
[0120] 110: Image input unit
[0121] 120: Background image generation unit
[0122] 130: Background image database
[0123] 140: Difference area extraction unit
[0124] 150: Edge detection unit
[0125] 160: Edge area extraction unit
[0126] 170: Superimposition unit
[0127] 180: Output unit
[0128] 501: Processor
[0129] 503: Memory
[0130] 505: Storage apparatus
[0131] 507: Input interface unit
[0132] 509: Data interface unit
[0133] 511: Communication interface unit
[0134] 513: Display apparatus
[0135] 600: Image processing system
[0136] 610: Input unit
[0137] 620: Detection unit
[0138] 630: Output unit
* * * * *