U.S. patent application number 11/487,482 was filed with the patent office on Jul. 17, 2006, and published on Jul. 19, 2007, for a closed form method and system for matting a foreground object in an image having a background.
This patent application is currently assigned to Yissum Research Development Co. Invention is credited to Anat Levin, Daniel Lischinski, and Yair Weiss.
United States Patent Application 20070165966, Kind Code A1
Application Number: 11/487,482
Published: July 19, 2007
First Named Inventor: Weiss, Yair; et al.
Closed form method and system for matting a foreground object in an
image having a background
Abstract
In a method and system for matting a foreground object F having
an opacity $\alpha$ constrained by associating a characteristic with
selected pixels in an image having a background B, weights are
determined for all edges of neighboring pixels in the image and
used to build a Laplacian matrix L. The matte is found by solving
$\alpha = \arg\min \alpha^T L \alpha$ s.t. $\alpha_i = s_i, \forall i \in S$,
where S is the group of selected pixels and $s_i$ is the value
indicated by the associated characteristic. The equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ is solved for
F and B with additional smoothness assumptions on F and B; after
which the foreground object F may be composited on a selected
background B' that may be the original background B or may be a
different background, thus allowing foreground features to be
extracted from the original image and copied to a different
background.
Inventors: Weiss, Yair (Jerusalem, IL); Lischinski, Daniel (Jerusalem, IL); Levin, Anat (Jerusalem, IL)
Correspondence Address: BROWDY AND NEIMARK, P.L.L.C., 624 Ninth Street, NW, Suite 300, Washington, DC 20001-5303, US
Assignee: Yissum Research Development Co., Jerusalem, IL
Family ID: 41266489
Appl. No.: 11/487,482
Filed: July 17, 2006
Related U.S. Patent Documents:
Provisional Application No. 60/699,503, filed Jul. 15, 2005
Provisional Application No. 60/714,265, filed Sep. 7, 2005
Current U.S. Class: 382/284; 382/173
Current CPC Class: H04N 5/272 (20130101); H04N 5/275 (20130101)
Class at Publication: 382/284; 382/173
International Class: G06K 9/36 (20060101); G06K 9/34 (20060101)
Claims
1. A method for matting a foreground object F having an opacity
$\alpha$ in an image having a background B, the respective opacity
of selected pixels in the foreground object F and the background B
being constrained by associating a characteristic with said pixels,
the method comprising: (a) determining weights for all edges of
neighboring pixels in the image; (b) building a Laplacian matrix L
with the weights; (c) solving for $\alpha$ where
$\alpha = \arg\min \alpha^T L \alpha$ s.t. $\alpha_i = s_i, \forall i \in S$,
where S is the group of selected
pixels and $s_i$ is the value indicated by said characteristic;
(d) solving for F and B in the equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ with
additional smoothness assumptions on F and B; and (e) compositing
the foreground object F on a selected background B'.
2. The method according to claim 1, wherein the characteristic
associated with said pixels is a respective distinctive color.
3. The method according to claim 2, wherein areas of F and B are
each identified by a scribble of a respective distinctive
color.
4. The method according to claim 2, wherein only two distinctive
colors are used to constrain the opacity $\alpha$ to be 0 or 1.
5. The method according to claim 2, wherein additional colors are
used to identify F and B respectively so as to constrain the value
of $\alpha$.
6. The method according to claim 3, wherein F and B are constrained
to be constant but unknown within a scribble.
7. The method according to claim 1, wherein the selected background
B' is different from the background B.
8. The method according to claim 1, wherein the input is a video
sequence and the foreground F, background B and opacity $\alpha$ are
also video sequences.
9. A system for matting a foreground object F having an opacity
$\alpha$ in an image having a background B, the respective opacity
of selected pixels in the foreground object F and the background B
being constrained by associating a characteristic with said pixels,
comprising: a graphics processing unit; and software loadable on
the graphics processing unit, the software being operable to: (a)
determine weights for all edges of neighboring pixels in the
image; (b) build a Laplacian matrix L with the weights; (c) solve
for $\alpha$ where $\alpha = \arg\min \alpha^T L \alpha$ s.t.
$\alpha_i = s_i, \forall i \in S$, where S
is the group of selected pixels and $s_i$ is the value indicated
by said characteristic; (d) solve for F and B in the equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ with
additional smoothness assumptions on F and B; and (e) composite
the foreground object F on a selected background B'.
10. The system according to claim 9, wherein the characteristic
associated with said pixels is a respective distinctive color.
11. The system according to claim 10, wherein areas of F and B are
each identified by a scribble of a respective distinctive
color.
12. The system according to claim 10, wherein only two distinctive
colors are used to constrain the opacity $\alpha$ to be 0 or 1.
13. The system according to claim 10, wherein additional colors are
used to identify F and B respectively so as to constrain the value
of $\alpha$.
14. The system according to claim 11, wherein F and B are
constrained to be constant but unknown within a scribble.
15. The system according to claim 9, wherein the selected
background B' is different from the background B.
16. The system according to claim 9, wherein the input is a video
sequence and the foreground F, background B and opacity $\alpha$ are
also video sequences.
17. A program storage device readable by machine, tangibly
embodying a program of instructions executable by the machine to
perform a method for matting a foreground object F having an
opacity $\alpha$ in an image having a background B, the respective
opacity of selected pixels in the foreground object F and the
background B being constrained by associating a characteristic with
said pixels, the method comprising: (a) determining weights for all
edges of neighboring pixels in the image; (b) building a Laplacian
matrix L with the weights; (c) solving for $\alpha$ where
$\alpha = \arg\min \alpha^T L \alpha$ s.t. $\alpha_i = s_i, \forall i \in S$,
where S is the group of selected
pixels and $s_i$ is the value indicated by said characteristic;
(d) solving for F and B in the equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ with
additional smoothness assumptions on F and B; and (e) compositing
the foreground object F on a selected background B'.
Description
RELATED APPLICATIONS
[0001] This application claims the benefit of provisional applications
Ser. Nos. 60/699,503, filed Jul. 15, 2005, and 60/714,265, filed Sep.
7, 2005, whose contents are incorporated herein by reference.
FIELD OF THE INVENTION
[0002] This invention relates to image matting.
REFERENCES
[0003] The description refers to the following prior art
references, whose contents are incorporated herein by reference.
[0004] [1] N. E. Apostoloff and A. W. Fitzgibbon, "Bayesian video matting using learnt image priors," in Proc. CVPR, 2004.
[0005] [2] A. Berman, P. Vlahos, and A. Dadourian, "Comprehensive method for removing from an image the background surrounding a selected object," U.S. Pat. No. 6,134,345, 2000.
[0006] [3] Y. Boykov and M. P. Jolly, "Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images," in Proc. ICCV, 2001.
[0007] [4] Y. Chuang, A. Agarwala, B. Curless, D. Salesin, and R. Szeliski, "Video matting of complex scenes," ACM Trans. Graph., 21(3):243-248, 2002.
[0008] [5] Y. Chuang, B. Curless, D. Salesin, and R. Szeliski, "A Bayesian approach to digital matting," in Proc. CVPR, 2001.
[0009] [6] L. Grady, T. Schiwietz, S. Aharon, and R. Westermann, "Random walks for interactive alpha-matting," in Proc. VIIP, 2005.
[0010] [7] A. Levin, D. Lischinski, and Y. Weiss, "Colorization using optimization," ACM Transactions on Graphics, 2004.
[0011] [8] A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," Hebrew University Technical Report, 2006.
[0012] [9] Y. Li, J. Sun, C.-K. Tang, and H.-Y. Shum, "Lazy snapping," ACM Trans. Graph., 23(3):303-308, 2004.
[0013] [10] I. Omer and M. Werman, "Color lines: Image specific color representation," in Proc. CVPR, June 2004.
[0014] [11] C. Rother, V. Kolmogorov, and A. Blake, "'GrabCut': interactive foreground extraction using iterated graph cuts," ACM Trans. Graph., 23(3):309-314, 2004.
[0015] [12] M. Ruzon and C. Tomasi, "Alpha estimation in natural images," in Proc. CVPR, 2000.
[0016] [13] J. Shi and J. Malik, "Normalized cuts and image segmentation," in Proc. CVPR, pages 731-737, 1997.
[0017] [14] J. Sun, J. Jia, C.-K. Tang, and H.-Y. Shum, "Poisson matting," ACM Trans. Graph., 23(3):315-321, 2004.
[0018] [15] J. Wang and M. Cohen, "An iterative optimization approach for unified image segmentation and matting," in Proc. IEEE Intl. Conf. on Computer Vision, 2005.
[0019] [16] L. Zelnik-Manor and P. Perona, "Self-tuning spectral clustering," in Advances in Neural Information Processing Systems 17, 2005.
[0020] [17] A. Zomet and S. Peleg, "Multi-sensor super resolution," in Proceedings of the IEEE Workshop on Applications of Computer Vision, 2002.
BACKGROUND OF THE INVENTION
[0021] Interactive digital matting, the process of extracting a
foreground object from an image based on limited user input, is an
important task in image and video editing. From a computer vision
perspective, this task is extremely challenging because it is
massively ill-posed--at each pixel we must estimate the foreground
and the background colors, as well as the foreground opacity
("alpha matte") from a single color measurement. Current approaches
either restrict the estimation to a small part of the image,
estimating foreground and background colors based on nearby pixels
where they are known, or perform iterative nonlinear estimation by
alternating foreground and background color estimation with alpha
estimation.
[0022] Natural image matting and compositing is of central
importance in image and video editing. The goal is to extract a
foreground object, along with an opacity map (alpha matte) from a
natural image, based on a small amount of guidance from the user.
Thus, FIG. 1 shows how matting is used to extract a foreground
object from an image shown in FIG. 1(a) and composite it with a
novel background shown in FIG. 1(e). Traditionally, this has been
done using a trimap interface as shown in FIG. 1(b). As will be
shown in the following description, the invention permits a high
quality matte shown in FIG. 1(d) to be obtained with a sparse set
of scribbles shown in FIG. 1(c).
[0023] What distinguishes matting and compositing from simple "cut
and paste" operations on the image is the challenge of correctly
handling "mixed pixels". These are pixels in the image whose color
is a mixture of the foreground and background colors. Such pixels
occur, for example, along object boundaries or in regions
containing shadows and transparency. While mixed pixels may
represent a small fraction of the image, human observers are
remarkably sensitive to their appearance, and even small artifacts
could cause the composite to look fake. Formally, image matting
methods take as input an image I, which is assumed to be a
composite of a foreground image F and a background image B. The
color of the i-th pixel is assumed to be a linear combination of
the corresponding foreground and background colors,

$$I_i = \alpha_i F_i + (1 - \alpha_i) B_i \qquad (1)$$

where $\alpha_i$ is the pixel's foreground opacity. In natural image
matting, all quantities on the right hand side of the compositing
equation (1) are unknown. Thus, for a 3 channel color image, at
each pixel there are 3 equations and 7 unknowns.
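As a minimal illustration (a sketch, not from the patent text; the array shapes and the helper name composite are our own), the compositing equation (1) can be written out directly, making the 3-equations-vs-7-unknowns count at each pixel explicit:

```python
import numpy as np

def composite(F, B, alpha):
    # Equation (1): I = alpha*F + (1-alpha)*B, applied per pixel.
    # F, B are HxWx3 float images in [0,1]; alpha is HxW.
    return alpha[..., None] * F + (1.0 - alpha[..., None]) * B

H, W = 2, 2
F = np.ones((H, W, 3)) * [1.0, 0.0, 0.0]   # red foreground
B = np.zeros((H, W, 3))                     # black background
alpha = np.full((H, W), 0.25)               # 25% opacity everywhere
I = composite(F, B, alpha)                  # each pixel is [0.25, 0, 0]
# At any single pixel, (1) gives 3 equations (one per channel) in
# 7 unknowns: F (3), B (3) and alpha (1).
```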
[0024] Obviously, this is a severely under-constrained problem, and
user interaction is required to extract a good matte. Most recent
methods expect the user to provide a trimap [1, 2, 4, 5, 12, 14] as
a starting point; an example is shown in FIG. 2(e). The trimap is a
rough (typically hand-drawn) segmentation of the image into three
regions: foreground (shown in white), background (shown in black)
and unknown (shown in gray). Given the trimap, these methods
typically solve for F, B and .alpha. simultaneously. This is
typically done by iterative nonlinear optimization, alternating the
estimation of F and B with that of .alpha.. In practice, this means
that for good results the unknown regions in the trimap must be as
small as possible. As a consequence, trimap-based approaches
typically experience difficulty handling images with a significant
portion of mixed pixels or when the foreground object has many
holes [15]. In such challenging cases a great deal of experience
and user interaction may be necessary to construct a trimap that
would yield a good matte. Another problem with the trimap interface
is that the user cannot directly influence the matte in the most
important part of the image: the mixed pixels. It would clearly be
preferable to provide more direct control over these mixed
regions.
[0025] The requirement of a hand-drawn segmentation becomes far
more limiting when one considers image sequences. In these cases
the trimap needs to be specified over key frames and interpolated
between key frames.
[0026] While good results have been obtained by intelligent use of
optical flow [4], the amount of interaction obviously grows quite
rapidly with the number of frames.
[0027] Another problem with the trimap interface is that the user
cannot directly influence the matte in the most important part of
the image: the mixed pixels. When the matte exhibits noticeable
artifacts in the mixed pixels, the user can refine the trimap and
hope this improves the results in the mixed region.
[0028] As noted above, most existing methods for natural image
matting require the input image to be accompanied by a trimap
[1,2,4,5,12,14], labeling each pixel as foreground, background, or
unknown. The goal of the method is to solve the compositing
equation (1) for the unknown pixels. This is typically done by
exploiting some local regularity assumptions on F and B to predict
their values for each pixel in the unknown region. In the Corel
KnockOut algorithm [2], F and B are assumed to be smooth and the
prediction is based on a weighted average of known foreground and
background pixels (closer pixels receive higher weight). Some
algorithms [5,12] assume that the local foreground and background
come from a relatively simple color distribution. Perhaps the most
successful of these algorithms is the Bayesian matting algorithm
[5], where a mixture of oriented Gaussians is used to learn the
local distribution and then $\alpha$, F and B are estimated as the
most probable ones given that distribution. Such methods work well
when the color distributions of the foreground and the background
do not overlap, and the unknown region in the trimap is small. As
demonstrated in FIG. 2(b) a sparse set of constraints could lead to
a completely erroneous matte.
[0029] The Bayesian matting approach has been extended to video in
two recent papers. Chuang et al. [4] use optical flow to warp the trimaps
between keyframes and to dynamically estimate a background model.
Apostoloff and Fitzgibbon [1] minimize a global, highly nonlinear
cost function over $\alpha$, F and B for the entire sequence. Their
cost function includes the mixture of Gaussians log likelihood for
foreground and background along with a term biasing $\alpha$ towards
0 and 1, and a learnt spatiotemporal consistency prior on $\alpha$.
The algorithm can either receive a trimap as input, or try to
automatically determine a coarse trimap using background
subtraction.
[0030] The Poisson matting method [14], also expects a trimap as
part of its input, and computes the alpha matte in the mixed region
by solving a Poisson equation with the matte gradient field and
Dirichlet boundary conditions. In the global Poisson matting method
the matte gradient field is approximated as $\nabla I/(F - B)$ by
taking the gradient of the compositing equation and neglecting the
gradients in F and B. The matte is then found by solving for a
function whose gradients are as close as possible to the
approximated matte gradient field. Whenever F and B are not
sufficiently smooth inside the unknown region, the resulting matte
might not be correct, and additional local manipulations may need
to be applied interactively to the matte gradient field in order to
obtain a satisfactory solution. This interactive refinement process
is referred to as local Poisson matting.
[0031] Recently, several successful approaches for extracting a
foreground object from its background have been proposed [3,9,11].
These approaches translate simple user-specified constraints (such
as scribbles, or a bounding rectangle) into a min-cut problem.
Solving the min-cut problem yields a hard binary segmentation,
rather than a fractional alpha matte (FIG. 2(c)). The hard
segmentation could be transformed into a trimap by erosion, but
this could still miss some fine or fuzzy features (FIG. 2(d)).
Although Rother et al. [11] do perform border matting by fitting a
parametric alpha profile in a narrow strip around the hard
boundary, this is more akin to feathering than to full alpha
matting, since wide fuzzy regions cannot be handled in this
manner.
[0032] Both the colorization method of Levin et al. [7] and the random
walk alpha matting method of Grady et al. [6] propagate scribbled
constraints to the entire image by minimizing a quadratic cost
function. Another scribble-based interface for interactive matting
was recently proposed by Wang and Cohen [15]. Starting from a few
scribbles indicating a small number of background and foreground
pixels, they use belief propagation to iteratively estimate the
unknowns at every pixel in the image. While this approach has
produced some impressive results, it has the disadvantage of
employing an expensive iterative non-linear optimization process,
which might converge to different local minima.
[0033] Wang and Cohen's iterative matte optimization attempts to determine
for each pixel all of the unknown attributes (F, B and $\alpha$) and
to reduce the uncertainty of these values. Initially, all
user-marked pixels have uncertainty of 0 and their $\alpha$ and F or
B colors are known. For all other pixels, the uncertainty is
initialized to 1 and $\alpha$ is set to 0.5. The approach proceeds
iteratively: in each iteration, pixels adjacent to ones with
previously estimated parameters are considered and added to the
estimated set. The process stops once there are no more
unconsidered pixels left and the uncertainty cannot be reduced any
further. Belief Propagation (BP) is used in each iteration. The
optimization goal in each iteration is to minimize a cost function
consisting of a data term and a smoothness term. The data term
describes how well the estimated parameters fit the observed color
at each pixel. The smoothness term is claimed to penalize
"inconsistent alpha value changes between two neighbors", but in
fact it just penalizes any strong change in alpha, because it only
looks at the alpha gradient, ignoring the underlying image values.
This is an iterative non-linear optimization process, so depending
on the initial input scribbles it might converge to a wrong local
minimum. Finally, the cost of this method is quite high: 15-20
minutes for a 640 by 480 image.
SUMMARY OF THE INVENTION
[0034] According to a first aspect of the invention there is
provided a method for matting a foreground object F having an
opacity $\alpha$ in an image having a background B, the respective
opacity of selected pixels in the foreground object F and the
background B being constrained by associating a characteristic with
said pixels, the method comprising: [0035] (a) determining weights
for all edges of neighboring pixels in the image; [0036] (b) building
a Laplacian matrix L with the weights; [0037] (c) solving for
$\alpha$ where $\alpha = \arg\min \alpha^T L \alpha$ s.t.
$\alpha_i = s_i, \forall i \in S$, where S is the group of selected
pixels and $s_i$ is the value indicated
by said characteristic; [0038] (d) solving for F and B in the
equation $I_i = \alpha_i F_i + (1 - \alpha_i) B_i$
with additional smoothness assumptions on F and B; and [0039] (e)
compositing the foreground object F on a selected background B'.
[0040] In accordance with a second aspect of the invention there is
provided a system for matting a foreground object F having an
opacity $\alpha$ in an image having a background B, the respective
opacity of selected pixels in the foreground object F and the
background B being constrained by associating a characteristic with
said pixels, comprising: a graphics processing unit; and software
loadable on the graphics processing unit, the software being
operable to: [0041] (a) determine weights for all edges of
neighboring pixels in the image; [0042] (b) build a Laplacian
matrix L with the weights; [0043] (c) solve for $\alpha$
where $\alpha = \arg\min \alpha^T L \alpha$ s.t.
$\alpha_i = s_i, \forall i \in S$, where S is
the group of selected pixels and $s_i$ is the value indicated by
said characteristic; [0044] (d) solve for F and B in the equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ with
additional smoothness assumptions on F and B; and [0045] (e)
composite the foreground object F on a selected background B'.
[0046] The invention provides a new closed-form solution for
extracting the alpha matte from a natural image. We derive a cost
function from local smoothness assumptions on foreground and
background colors F and B, and show that in the resulting
expression it is possible to analytically eliminate F and B,
yielding a quadratic cost function in $\alpha$. The alpha matte
produced by our method is the global optimum of this cost function,
which may be obtained by solving a sparse linear system. Since our
approach computes $\alpha$ directly, without requiring reliable
estimates for F and B, a surprisingly small amount of user input
(such as a sparse set of scribbles) is often sufficient for
extracting a high quality matte.
[0047] Furthermore, the closed-form formulation as provided in
accordance with the invention enables one to understand and predict
the properties of the solution by examining the eigenvectors of a
sparse matrix, closely related to matrices used in spectral image
segmentation algorithms. In addition to providing a solid
theoretical basis for our approach, such analysis can provide
useful hints to the user regarding where in the image scribbles
should be placed.
[0048] While the invention, like the Bayesian matting algorithm [5],
makes certain smoothness assumptions regarding F and
B, it does not involve estimating the values of these functions
until after the matte has been extracted.
[0049] Thus, rather than specifying a trimap, in accordance with
the invention, the user scribbles constraints on the opacity of
certain pixels. The constraints can be of the form "these pixels
are foreground", "these pixels are background" or the user can give
direct constraints on the mixed pixels. The algorithm then
propagates these constraints to the full image sequence based on a
simple cost function--that nearby pixels in space-time with similar
colors should have a similar opacity. The invention shows how to
minimize the cost using simple numerical linear algebra, and
produces high quality mattes from natural images and image
sequences.
BRIEF DESCRIPTION OF THE DRAWINGS
[0050] In order to understand the invention and to see how it may
be carried out in practice, an embodiment will now be described, by
way of non-limiting example only, with reference to the
accompanying drawings, in which:
[0051] FIGS. 1(a) to 1(e) compare use of a trimap interface shown
in FIG. 1(b) to extract a foreground object from an image shown in
FIG. 1(a) and compositing it with a novel background shown in FIG.
1(e) with use of a sparse set of scribbles shown in FIG. 1(c) to
achieve a high quality matte shown in FIG. 1(d);
[0052] FIG. 2(a) shows pictorially an image with sparse
constraints: white scribbles indicate foreground, black scribbles
indicate background;
[0053] FIG. 2(b) shows a completely erroneous matte produced by
applying Bayesian matting to the sparse input of FIG. 2(a);
[0054] FIG. 2(c) shows a hard segmentation produced using
foreground extraction algorithms, such as [9,11];
[0055] FIG. 2(d) shows how fine features may be missed using an
automatically generated trimap from a hard segmentation;
[0056] FIG. 2(e) shows an accurate hand-drawn trimap that is
required to produce a reasonable matte as shown in FIG. 2(f);
[0057] FIG. 3 shows pictorially results for Bayesian matting
examples: (a) input image; (b) a prior art trimap; (c)
Bayesian matting results; (d) and (e) respectively scribbles
and results according to the invention;
[0058] FIG. 4 shows pictorially results for Poisson matting
examples: (a) input image; (b) Bayesian matting; (c)
Poisson matting; (d) and (e) respectively results and
scribbles according to the invention;
[0059] FIG. 5 shows pictorially progressive matte extraction. By
evaluating the matting result at a given stage, scribbles can be
added where further refinement is desired: (a) input scribbles; (b)
resulting $\alpha$-matte;
[0060] FIG. 6 shows pictorially composition examples from the
images in FIGS. 3 and 4: Top row: compositing with a constant
background. Bottom row: natural images compositing;
[0061] FIG. 7 shows additional examples of the invention applied
to shadow and smoke extraction: (a) input image; (b) scribbles; (c)
extracted matte; (d) and (e) compositions;
[0062] FIGS. 8(a) and 8(c) show pictorially input images with
scribbles;
[0063] FIGS. 8(b) and 8(d) show pictorially extracted mattes
derived according to the invention from the input images shown in
FIGS. 8(a) and 8(c), respectively;
[0064] FIG. 9 shows pictorially prior art global $\sigma$ eigenvectors
compared with matting eigenvectors according to the invention,
derived from a common input image;
[0065] FIGS. 10(a) and 10(b) show the smallest eigenvectors used
for guiding scribble placement shown in FIG. 10(c) to produce the
matte shown in FIG. 10(d);
[0066] FIGS. 11(a),(c),(e),(g),(i),(k),(m) and (o) show pictorially
alpha mattes extracted by different prior art algorithms;
[0067] FIGS. 11(b),(d),(f),(h),(j),(l) and (n) show pictorially
alpha mattes generated by implementing the respective methods in
accordance with the invention;
[0068] FIG. 12 shows a quantitative comparison using two ground
truth mattes produced using known methods modified in accordance
with the invention;
[0069] FIG. 13(a) shows prior art scribbles and matte;
[0070] FIG. 13(b) shows color ambiguity between foreground and
background using a prior art trimap;
[0071] FIG. 13(c) shows color ambiguity between foreground and
background using scribbles according to the invention similar to
those in FIG. 13(a);
[0072] FIG. 13(d) shows color ambiguity between foreground and
background using a few additional scribbles according to the
invention;
[0073] FIG. 14 shows an example with ambiguity between F and B;
[0074] FIG. 15 shows failure owing to lack of a color model;
[0075] FIG. 16 is a flow diagram showing the principal operations
carried out by a method according to an embodiment of the invention
for matting a foreground object F having an opacity $\alpha$ constrained
by associating a characteristic with selected pixels in an image
having a background B; and
[0076] FIG. 17 is a block diagram showing functionality of a system
according to an embodiment of the invention for matting a
foreground object F having an opacity $\alpha$ constrained by
associating a characteristic with selected pixels in an image
having a background B.
DETAILED DESCRIPTION OF EMBODIMENTS
Derivation
[0077] For clarity of exposition we begin by deriving a closed-form
solution for alpha matting of grayscale images. This solution will
then be extended to the case of color images.
[0078] As mentioned earlier, the matting problem is severely
under-constrained. Therefore, some assumptions on the nature of F,
B and/or $\alpha$ are needed. To derive our solution for the
grayscale case we make the assumption that both F and B are
approximately constant over a small window around each pixel. Note
that assuming F and B are locally smooth does not mean that the
input image I is locally smooth, since discontinuities in $\alpha$
can account for the discontinuities in I. This assumption, which
will be somewhat relaxed later, allows us to rewrite (1), expressing
$\alpha$ as a linear function of the image I:

$$\alpha_i \approx a I_i + b, \quad \forall i \in w, \qquad (2)$$

where

$$a = \frac{1}{F - B}, \qquad b = -\frac{B}{F - B},$$

and w is a small image window. This linear relation is similar to
the prior used in [17]. One object of the invention is to find
$\alpha$, a and b minimizing the cost function:

$$J(\alpha, a, b) = \sum_{j \in I} \Bigl( \sum_{i \in w_j} \bigl( \alpha_i - a_j I_i - b_j \bigr)^2 + \epsilon\, a_j^2 \Bigr), \qquad (3)$$

where $w_j$ is a small window around pixel j.
[0079] The cost function above includes a regularization term on a.
One reason for adding this term is numerical stability. For
example, if the image is constant in the j-th window, $a_j$ and
$b_j$ cannot be uniquely determined without a prior. Also,
minimizing the norm of a biases the solution towards smoother
$\alpha$ mattes (since $a_j = 0$ means that $\alpha$ is constant over
the j-th window).
[0080] In our implementation, we typically use windows of 3x3
pixels. Since we place a window around each pixel, the windows
$w_j$ in (3) overlap. It is this property that enables the
propagation of information between neighboring pixels. The cost
function is quadratic in $\alpha$, a and b, with 3N unknowns for an
image with N pixels. Fortunately, as we show below, a and b may be
eliminated from (3), leaving us with a quadratic cost in only N
unknowns: the alpha values of the pixels.
[0081] Theorem 1. Define $J(\alpha)$ as

$$J(\alpha) = \min_{a,b} J(\alpha, a, b). \quad \text{Then} \quad J(\alpha) = \alpha^T L \alpha, \qquad (4)$$

[0082] where L is an NxN matrix, whose (i,j)-th entry is:

$$\sum_{k \mid (i,j) \in w_k} \left( \delta_{ij} - \frac{1}{|w_k|} \Bigl( 1 + \frac{1}{\frac{\epsilon}{|w_k|} + \sigma_k^2} (I_i - \mu_k)(I_j - \mu_k) \Bigr) \right) \qquad (5)$$

[0083] Here $\delta_{ij}$ is the Kronecker delta, $\mu_k$ and
$\sigma_k^2$ are the mean and variance of the intensities in
the window $w_k$ around k, and $|w_k|$ is the number of pixels
in this window.
[0084] Proof: Rewriting (3) using matrix notation we obtain

$$J(\alpha, a, b) = \sum_k \left\| G_k \begin{bmatrix} a_k \\ b_k \end{bmatrix} - \bar{\alpha}_k \right\|^2, \qquad (6)$$

where for every window $w_k$, $G_k$ is defined as a
$(|w_k|+1) \times 2$ matrix. For each
$i \in w_k$, $G_k$ contains a row of the form
$[I_i, 1]$, and the last row of $G_k$ is of the form
$[\sqrt{\epsilon}, 0]$. For a
given matte $\alpha$ we define $\bar{\alpha}_k$ as a
$(|w_k|+1) \times 1$ vector, whose entries are $\alpha_i$ for
every $i \in w_k$, and whose last entry is 0. The
elements in $\bar{\alpha}_k$ and $G_k$ are ordered
correspondingly.

[0085] For a given matte $\alpha$ the optimal pair $a_k^*, b_k^*$
inside each window $w_k$ is the solution to the least squares
problem:

$$(a_k^*, b_k^*) = \arg\min \left\| G_k \begin{bmatrix} a_k \\ b_k \end{bmatrix} - \bar{\alpha}_k \right\| \qquad (7)$$
$$= (G_k^T G_k)^{-1} G_k^T \bar{\alpha}_k \qquad (8)$$

[0086] Substituting this solution into (6) and denoting
$\bar{G}_k = I - G_k (G_k^T G_k)^{-1} G_k^T$ we
obtain

$$J(\alpha) = \sum_k \bar{\alpha}_k^T \bar{G}_k^T \bar{G}_k \bar{\alpha}_k,$$

and some further
algebraic manipulations show that the (i,j)-th element of
$\bar{G}_k^T \bar{G}_k$ may be expressed as:

$$\delta_{ij} - \frac{1}{|w_k|} \Bigl( 1 + \frac{1}{\frac{\epsilon}{|w_k|} + \sigma_k^2} (I_i - \mu_k)(I_j - \mu_k) \Bigr).$$

[0087] Summing over k yields the expression in (5).
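For concreteness, a minimal sketch of how the grayscale matting Laplacian of equation (5) might be assembled follows. It assumes square 3x3 windows kept fully inside the image (boundary handling omitted) and uses SciPy's COO format, which sums duplicate entries when converting to CSR; the function name and default parameters are our own, not part of the patent:

```python
import numpy as np
from scipy.sparse import coo_matrix

def grayscale_matting_laplacian(I, eps=1e-4, r=1):
    # I is an HxW grayscale float image; windows are (2r+1)x(2r+1).
    H, W = I.shape
    wk = (2 * r + 1) ** 2                 # |w_k|, pixels per window
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals = [], [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            win = I[y - r:y + r + 1, x - r:x + r + 1].ravel()
            ids = idx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            mu, var = win.mean(), win.var()
            d = win - mu
            # Per-window contribution from equation (5):
            # delta_ij - (1/|w_k|)(1 + (I_i-mu)(I_j-mu)/(eps/|w_k| + var))
            Gk = np.eye(wk) - (1.0 + np.outer(d, d) / (eps / wk + var)) / wk
            rows.append(np.repeat(ids, wk))
            cols.append(np.tile(ids, wk))
            vals.append(Gk.ravel())
    return coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(H * W, H * W)).tocsr()
```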
Color Images
[0088] A simple way to apply the cost function to color images is
to apply the gray level cost to each channel separately.
Alternatively we can replace the linear model (2) with a 4D linear
model:

$$\alpha_i \approx \sum_c a^c I_i^c + b, \quad \forall i \in w \qquad (9)$$
[0089] The advantage of this combined linear model is that it
relaxes our previous assumption that F and B are constant over each
window. Instead, as we show below, it is enough to assume that in a
small window each of F and B is a linear mixture of two colors; in
other words, the values $F_i$ in a small window lie on a single
line in the RGB color space:
$F_i = \beta_i F_1 + (1 - \beta_i) F_2$, and the same
is true for the background values $B_i$. In what follows we refer
to this assumption as the color line model.
[0090] Such a model is useful since it captures, for example, the
varying shading on a surface with a constant albedo (this being a
property of an object that determines how much light it reflects).
Another example is a situation where the window contains an edge
between two uniformly colored regions both belonging to the
background or the foreground. Furthermore, Omer and Werman [10]
demonstrated that in many natural images the pixel colors in RGB
space tend to form a relatively small number of elongated clusters.
Although these clusters are not straight lines, their skeletons are
roughly linear locally.
[0091] Theorem 2. If the foreground and background colors in a
window satisfy the color line model, we can express

$$\alpha_i = \sum_c a^c I_i^c + b, \quad \forall i \in w$$

[0092] Substituting into (1) the linear combinations
$F_i = \beta_i^F F_1 + (1 - \beta_i^F) F_2$ and
$B_i = \beta_i^B B_1 + (1 - \beta_i^B) B_2$,
where $F_1, F_2, B_1, B_2$ are constant over a small
window, we obtain:

$$I_i^c = \alpha_i \bigl( \beta_i^F F_1^c + (1 - \beta_i^F) F_2^c \bigr) + (1 - \alpha_i) \bigl( \beta_i^B B_1^c + (1 - \beta_i^B) B_2^c \bigr).$$

[0093] Let H be a 3x3 matrix whose c-th row is
$[F_2^c - B_2^c,\; F_1^c - F_2^c,\; B_1^c - B_2^c]$. Then the above
may be rewritten as:

$$H \begin{bmatrix} \alpha_i \\ \alpha_i \beta_i^F \\ (1 - \alpha_i) \beta_i^B \end{bmatrix} = I_i - B_2,$$

where $I_i$ and $B_2$ are
3x1 vectors representing 3 color channels. We denote by
$a^1, a^2, a^3$ the elements in the first row of $H^{-1}$,
and by b the scalar product of the first row of $H^{-1}$ with the
vector $-B_2$. We then obtain

$$\alpha_i = \sum_c a^c I_i^c + b.$$
[0094] Using the 4D linear model (9) we define the following cost
function for matting of RGB images:

$$J(\alpha, a, b) = \sum_{j \in I} \Bigl( \sum_{i \in w_j} \bigl( \alpha_i - \sum_c a_j^c I_i^c - b_j \bigr)^2 + \epsilon \sum_c (a_j^c)^2 \Bigr) \qquad (10)$$

[0095] Similarly to the grayscale case, $a^c$ and b can be
eliminated from the cost function, yielding a quadratic cost in the
$\alpha$ unknowns alone:

$$J(\alpha) = \alpha^T L \alpha \qquad (11)$$

[0096] Here L is an NxN matrix, whose (i,j)-th element is:

$$\sum_{k \mid (i,j) \in w_k} \left( \delta_{ij} - \frac{1}{|w_k|} \Bigl( 1 + (I_i - \mu_k)^T \bigl( \Sigma_k + \tfrac{\epsilon}{|w_k|} I_3 \bigr)^{-1} (I_j - \mu_k) \Bigr) \right) \qquad (12)$$

where $\Sigma_k$ is a 3x3 covariance matrix, $\mu_k$ is a 3x1 mean
vector of the colors in a window $w_k$, and $I_3$ is the 3x3
identity matrix.
[0097] We refer to the matrix L in equations (5) and (12) as the
matting Laplacian. Note that the elements in each row of L sum to
zero, and therefore the nullspace of L includes the constant
vector. If $\epsilon = 0$ is used, the nullspace of L also includes
every color channel of I.
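A corresponding sketch for the color matting Laplacian of equation (12), under the same assumptions as the grayscale sketch above (full interior windows; naming and defaults are ours):

```python
import numpy as np
from scipy.sparse import coo_matrix

def color_matting_laplacian(I, eps=1e-4, r=1):
    # I is an HxWx3 float image; windows are (2r+1)x(2r+1).
    H, W, _ = I.shape
    wk = (2 * r + 1) ** 2
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals = [], [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            win = I[y - r:y + r + 1, x - r:x + r + 1].reshape(wk, 3)
            ids = idx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            mu = win.mean(axis=0)
            d = win - mu
            # Window covariance Sigma_k, regularized as in (12)
            cov = d.T @ d / wk
            inv = np.linalg.inv(cov + (eps / wk) * np.eye(3))
            # delta_ij - (1/|w_k|)(1 + (I_i-mu)^T inv (I_j-mu))
            Gk = np.eye(wk) - (1.0 + d @ inv @ d.T) / wk
            rows.append(np.repeat(ids, wk))
            cols.append(np.tile(ids, wk))
            vals.append(Gk.ravel())
    return coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(H * W, H * W)).tocsr()
```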
EXAMPLES
[0098] In all examples presented in this section the scribbles used
in our algorithm are presented in the following format: black and
white scribbles are used to indicate the first type of hard
constraints on $\alpha$. Gray scribbles are used to represent the third
constraint class, requiring a and b to be constant (without
specifying their exact value) within the scribbled area. Finally,
red scribbles represent places in which foreground and background
colors were explicitly specified.
[0099] FIG. 3 presents matting results on images from the Bayesian
matting paper [5]. The results are compared with the results
published on their webpage and are comparable. While the
Bayesian matting results use a trimap, our results were obtained
using sparse scribbles.
[0100] In FIG. 4 we extract mattes from a few of the more
challenging examples presented in the Poisson matting paper [14].
For comparison, the Poisson and Bayesian matting results as
calculated in [14] are also shown. In the first two examples our
results were obtained using only rough black and white scribbles.
In the third example, which is a much harder one, we obtained
a high quality matte, but extracting every single hair from the
complex textured background required accurate black scribbles to be
specified. This image seems to be more difficult for other matting
algorithms as well [14].
[0101] The amount of scribbles required depends not only on the
complexity of the input image but also on the required accuracy in
the results. In fact, for the complicated image in the bottom of
FIG. 4, quite good results can be obtained with much coarser
scribbles as shown in FIG. 5. This suggests a progressive process,
in which the artist starts with initial scribbles, evaluates the
resulting matte and adds scribbles where the matting needs
improvement. FIG. 5 demonstrates the matting refinement
process.
[0102] FIG. 6 presents compositing examples using our algorithm for
the images in FIGS. 3, 4 both on a constant background and with
natural images.
[0103] FIG. 7 presents additional applications of our technique. In
particular, the red scribbles specifying the foreground and
background colors can be used to extract shadow and smoke. In the
top figure, the red scribble on the shadow specifies that the
foreground color is black. In the bottom figure, the red scribble
on the smoke indicates that the foreground color is white (in both cases
the background color in the red scribbles was copied from
neighboring, uncovered pixels). These sparse constraints on
$\alpha$ were then propagated to achieve the final matte. Note that shadow
matting cannot be directly achieved with matting algorithms which
initialize foreground colors using neighboring pixels, since no
neighboring black pixels are present. Note also that the shadow
covers a significant portion of the image area, and it is not
clear how to specify a good trimap in this case. The smoke example
was also processed in [4], but in their case a background model was
calculated using multiple frames.
Constraints and User Interface
[0104] In our system the user-supplied constraints on the matte are
provided via a scribble-based GUI. The user uses a background brush
(black scribbles in our examples) to indicate background pixels
($\alpha = 0$) and a foreground brush (white scribbles) to indicate
foreground pixels ($\alpha = 1$).
[0105] A third type of brush (red scribbles) is sometimes used to
constrain certain mixed pixels to a specific desired opacity value
$\alpha$. The user specifies this value implicitly by indicating the
foreground and background colors F and B (for example, by copying
these colors from other pixels in the image). The value of $\alpha$
is then computed using (1).
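As a small sketch of this computation (the helper name is ours), the opacity implied by equation (1) for a 3-channel pixel is the least-squares solution of $I - B = \alpha (F - B)$:

```python
import numpy as np

def alpha_from_colors(I, F, B):
    # Least-squares alpha for one pixel given user-supplied F and B,
    # projected onto [0, 1]; I, F, B are length-3 RGB vectors.
    d = np.asarray(F, float) - np.asarray(B, float)
    a = np.dot(np.asarray(I, float) - np.asarray(B, float), d) / np.dot(d, d)
    return float(np.clip(a, 0.0, 1.0))
```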
[0106] Another kind of brush (gray scribbles) may sometimes be used
to indicate that F and B satisfy the color line model for all of
the pixels covered by the scribble. Rather than adding an
additional constraint on the value of $\alpha$, our system responds
to such scribbles by adding an additional window $w_k$ in (12)
containing all of the pixels covered by the scribble (in addition
to the standard 3x3 windows).
[0107] To extract an alpha matte matching the user's scribbles we
solve for

$$\alpha = \arg\min \alpha^T L \alpha \quad \text{s.t.} \quad \alpha_i = s_i, \; \forall i \in S \qquad (13)$$

where S is the group of scribbled pixels and $s_i$ is the value
indicated by the scribble.
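In one possible implementation (a sketch; the penalty weight lam and the function name are our choices, not mandated by the patent), the hard constraints in (13) are approximated by a large quadratic penalty on the scribbled pixels, turning the problem into a single sparse linear solve:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_matte(L, scribble_mask, scribble_vals, lam=100.0):
    # Minimize a^T L a + lam * sum_{i in S} (a_i - s_i)^2, whose
    # normal equations are (L + lam*C) a = lam * C s, with C the
    # diagonal indicator of scribbled pixels.
    # scribble_mask: flat boolean array, True at scribbled pixels.
    # scribble_vals: flat array with s_i at scribbles, 0 elsewhere.
    C = diags(scribble_mask.astype(float))
    alpha = spsolve((L + lam * C).tocsr(), lam * scribble_vals)
    return np.clip(alpha, 0.0, 1.0)   # clipping is our own choice
```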
[0108] Theorem 3. Let I be an image formed from F and B according to
(1), and let $\alpha^*$ denote the true alpha matte. If F and B
satisfy the color line model in every local window $w_k$, and if
the user-specified constraints S are consistent with $\alpha^*$, then
$\alpha^*$ is an optimal solution for the system (13), where L is
constructed with $\epsilon = 0$.

[0109] Proof: Since $\epsilon = 0$, if the color line model is satisfied in
every window $w_k$, it follows from the definition (10) that
$J(\alpha^*, a, b) = 0$, and therefore
$J(\alpha^*) = \alpha^{*T} L \alpha^* = 0$.
[0110] We demonstrate this in FIG. 8. The first image (FIG. 8(a))
is a synthetic example that was created by compositing
computer-simulated (monochromatic) smoke over a simple background
with several color bands, which satisfies the color line model. The
black and white scribbles show the input constraints. The matte
extracted by our method (FIG. 8(b)) is indeed identical to the
ground truth matte. The second example (FIG. 8(c)) is a real image,
with fairly uniform foreground and background colors. By scribbling
only two black and white points, a high quality matte was extracted
(FIG. 8(d)).
Spectral Analysis
[0111] The matting Laplacian matrix L is a symmetric positive
semidefinite matrix, as evident from theorem 1 and its proof. This
matrix may also be written as L = D - W, where D is a diagonal matrix
$D(i,i) = \sum_j W(i,j)$ and W is a symmetric matrix, whose
off-diagonal entries are defined by (12). Thus, the matrix L is the
graph Laplacian used in spectral methods for segmentation, but with
a novel affinity function given by (12). For comparison, the
typical way to define the affinity function (e.g., in normalized
cuts image segmentation algorithms [13]) is to set

$$W_\sigma(i,j) = e^{-\|I_i - I_j\|^2 / \sigma^2}, \qquad (14)$$

where $\sigma$ is a global constant
(typically chosen by hand). This affinity is large for nearby
pixels with similar colors and approaches zero when the color
difference is much greater than $\sigma$. The random walk matting
algorithm [6] has used a similar affinity function for the matting
problem, but the color distance between two pixels was taken after
applying a linear transformation to their colors. The
transformation is image-dependent and is estimated using a manifold
learning technique.
[0112] In contrast, by rewriting the matting Laplacian as L = D - W we
obtain the following affinity function, which we refer to as "the
matting affinity":

$$W(i,j) = \sum_{k \mid (i,j) \in w_k} \frac{1}{|w_k|} \Bigl( 1 + (I_i - \mu_k)^T \bigl( \Sigma_k + \tfrac{\epsilon}{|w_k|} I_3 \bigr)^{-1} (I_j - \mu_k) \Bigr) \qquad (15)$$
[0113] To gain intuition regarding the matting affinity, consider
an image patch containing exactly two colors (e.g., an ideal edge).
In this case, it can be shown (see [8] for a proof) that the
affinity between two pixels of the same color decreases with
distance, while the affinity between pixels of different colors is
zero. In general, we obtain a similar situation to that of standard
affinity functions: nearby pixels with similar colors have high
affinity, while nearby pixels with different colors have small
affinity. However, note that the matting affinity does not have a
global scaling parameter .sigma. and rather uses local estimates of
means and variances. As we show subsequently, this adaptive
property leads to a significant improvement in performance. A
similar observation was also made in [16], which suggests that
local adaptation of the scaling parameter improves image
segmentation results.
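As a worked instance of the two-color case (our illustration, for the grayscale, $\epsilon = 0$ setting): let a window contain only two intensities a and b, occurring with fractions p and 1-p, so that $\mu = pa + (1-p)b$ and $\sigma^2 = p(1-p)(a-b)^2$. For a pixel i of intensity a and a pixel j of intensity b, $(I_i - \mu)(I_j - \mu) = (1-p)(a-b) \cdot (-p)(a-b) = -\sigma^2$, so the per-window affinity term $1 + (I_i - \mu)(I_j - \mu)/\sigma^2$ vanishes exactly, whereas for two pixels both of intensity a it equals $1 + (1-p)/p > 0$.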
[0114] To compare the two affinity functions we examine the
smallest eigenvectors of the corresponding Laplacians, since these
eigenvectors are used by spectral segmentation algorithms for
partitioning images.
[0115] To segment an image using the normalized cuts framework [13]
one looks at the smallest eigenvectors of the graph Laplacian.
These eigenvectors tend to be piecewise constant in uniform image
areas and have transitions between smooth areas mainly where edges
in the input images exist. The values of the eigenvectors are used
in order to cluster the pixels in the image into coherent
segments.
[0116] Similarly to the eigenvectors of the normalized cuts
Laplacian, the eigenvectors of the matting Laplacian also tend to
be piecewise constant and may be used to cluster pixels into
segments. However, the eigenvectors of the matting Laplacian also
capture fuzzy transitions between segments. In the case of complex
and fuzzy boundaries between regions, this can be critical. To
demonstrate the difference between the two Laplacians,
[0117] FIG. 9 shows the second smallest eigenvector (the first
smallest eigenvector is the constant image in both cases) for both
Laplacian matrices on two example images. The first example is a
simple image with concentric circles of different color. In this
case the boundaries between regions are very simple, and both
Laplacians capture the transitions correctly. The second example is
an image of a peacock. The global .sigma. eigenvector (used by the
normalized-cuts algorithm) fails to capture the complex fuzzy
boundaries between the peacock's tail feathers and the background.
In contrast, the matting Laplacian's eigenvector separates the
peacock from the background very well. The matting Laplacian in
this case was computed with .epsilon.=0.0001.
The Eigenvectors as Guides
[0118] While the matting problem is ill-posed without some user
input, the matting Laplacian matrix contains a lot of information
on the image even before any scribbles have been provided, as
demonstrated in the previous section.
[0119] This suggests that looking at the smallest eigenvectors of
the matting Laplacian can guide the user where to place scribbles.
For example, the extracted matte tends to be piecewise constant in
the same regions where the smallest eigenvectors are piecewise
constant. If the values inside a segment in the eigenvector image
are coherent, a single scribble within such a segment should
suffice to propagate the desired value to the entire segment. On
the other hand, areas where the eigenvector's values are less
coherent correspond to more "difficult" regions in the image,
suggesting that more scribbling efforts might be required
there.
[0120] Stated somewhat more precisely, the alpha matte can be
predicted by examining the smaller eigenvectors of the matting
Laplacian, since an optimal solution to the matting problem prefers
to place more weight on the smaller eigenvectors than it places on
the larger ones. As a result, a dominant part of the matte tends to
be spanned by the eigenvectors corresponding to the smaller eigenvalues.
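As a sketch of how such guidance might be computed (the function name and the small negative shift are our choices), the smallest eigenvectors can be obtained with SciPy's shift-invert Lanczos solver; the shift keeps the factorization nonsingular even though L is singular, its nullspace containing the constant vector:

```python
from scipy.sparse.linalg import eigsh

def smallest_eigenvectors(L, k=2, shape=None):
    # Shift-invert around sigma targets the eigenvalues of L nearest
    # zero; vecs[:, i] is the i-th smallest eigenvector, reshapeable
    # as an image for display to guide scribble placement.
    vals, vecs = eigsh(L, k=k, sigma=-1e-8, which='LM')
    if shape is not None:
        vecs = [vecs[:, i].reshape(shape) for i in range(k)]
    return vals, vecs
```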
[0121] FIG. 10 illustrates how a scribbling process may be guided
by the eigenvectors. By examining the two smallest eigenvectors
(FIG. 10(a-b)) we placed a scribble inside each region exhibiting
coherent eigenvector values (FIG. 10(c)). The resulting matte is
shown in FIG. 10(d). Note that the scribbles in FIG. 10(c) were our
first and only attempt to place scribbles on this image.
[0122] The resulting matte can be further improved by some more
scribbling (especially in the hair area).
[0123] We show here results of our closed form solution for
extracting alpha mattes by minimizing the matting Laplacian subject
to the scribbled constraints. Since the matting Laplacian is
quadratic in alpha, the minimum can be found exactly by solving a
sparse set of linear equations. We usually define the matting
Laplacian using 3x3 windows. When the foreground and
background color distributions are not very complex, using wider
windows is helpful. However, using wider windows increases
computation time since the resulting system is less sparse. To
overcome this, we consider the linear coefficients (eq. 9) that
relate an alpha matte to an image. The coefficients obtained using
wide windows on a fine resolution image are similar to those
obtained with smaller windows on a coarser image. Therefore we can
solve for the alpha matte using 3x3 windows on a coarse image
and compute the linear coefficients which relate it to the coarse
image. We then interpolate the linear coefficients and apply them
to a finer resolution image. The alpha matte obtained using this
approach is similar to the one that would have been obtained by solving
the matting system directly on the fine image with wider windows.
More details are provided in [8].
[0124] For the results shown here we solve the matting system using
Matlab's direct solver (the "backslash" operator) which takes 20
seconds for a 200 by 300 image on a 2.8 GHz CPU. Processing big
images using the Matlab solver is impossible due to memory
limitations. To overcome this we use a coarse-to-fine scheme. We
downsample the image and the scribbles and solve in a lower
resolution. The reconstructed alpha is then interpolated to the
finer resolution, alpha values are thresholded and pixels with
alpha close to 0 or 1 are considered constraints in the finer
resolution. Constraint pixels can be eliminated from the system,
reducing the system size. We have also implemented a multigrid
solver for matte extraction. The multigrid solver runs in a couple
of seconds even for very large images, but with a small degradation
in matte quality.
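A schematic sketch of this coarse-to-fine scheme follows; solve_fn, downsample and upsample are placeholders for a matte solver and image resampling routines, and the threshold t is our own choice:

```python
import numpy as np

def coarse_to_fine_matte(I, mask, vals, solve_fn, downsample, upsample, t=0.02):
    # solve_fn(I, mask, vals) -> alpha; pixels whose coarse alpha is
    # within t of 0 or 1 are frozen as constraints at the fine level,
    # so they can be eliminated from the fine system.
    alpha_lo = solve_fn(downsample(I),
                        downsample(mask.astype(float)) > 0.5,
                        downsample(vals))
    alpha = np.clip(upsample(alpha_lo), 0.0, 1.0)
    decided = (alpha < t) | (alpha > 1.0 - t)
    mask_fine = mask | decided
    vals_fine = np.where(mask, vals, np.round(alpha))
    return solve_fn(I, mask_fine, vals_fine)
```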
[0125] We show here only the extracted alpha mattes. Note that for
compositing on a novel background, we also need to solve for F.
After the matte has been found, it is possible to solve for the a and b
coefficients directly from equation (10) and extract the foreground
and background from them. However, we have found that better
estimations of foreground and background are obtained by solving a
new set of linear equations in F and B, derived by introducing some
explicit smoothness priors on F and B into equation (1). More
information on the foreground reconstruction as well as some
compositing results can be found in [8].
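A simplified stand-in for that reconstruction (ours, not the exact formulation of [8]) solves, per color channel, a sparse least-squares problem coupling the data term of equation (1) with gradient smoothness on F and B, weighted by the matte gradient so that F and B are kept smooth mainly where $\alpha$ varies and the data cannot disentangle them:

```python
import numpy as np
from scipy.sparse import diags, eye, kron, hstack, vstack, csr_matrix
from scipy.sparse.linalg import lsqr

def reconstruct_FB_channel(Ic, alpha, smooth_w=0.1):
    # Ic: HxW single channel; alpha: HxW matte. Unknowns z = [F; B].
    H, W = alpha.shape
    N = H * W
    a = alpha.ravel()
    # Forward-difference operators (horizontal and vertical)
    Dx = kron(eye(H), diags([-1.0, 1.0], [0, 1], shape=(W - 1, W)))
    Dy = kron(diags([-1.0, 1.0], [0, 1], shape=(H - 1, H)), eye(W))
    D = vstack([Dx, Dy]).tocsr()
    # Smoothness weight grows with |grad alpha|; the small floor adds
    # mild smoothness everywhere (our choice)
    wg = diags(smooth_w * np.abs(D @ a) + 1e-4)
    Z = csr_matrix(D.shape)
    A = vstack([hstack([diags(a), diags(1.0 - a)]),   # alpha*F+(1-alpha)*B = I
                hstack([wg @ D, Z]),                   # smoothness on F
                hstack([Z, wg @ D])])                  # smoothness on B
    b = np.concatenate([Ic.ravel(), np.zeros(2 * D.shape[0])])
    z = lsqr(A.tocsr(), b)[0]
    return z[:N].reshape(H, W), z[N:].reshape(H, W)   # F, B
```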
[0126] FIG. 11 shows the mattes extracted using our technique on
two challenging images used in [15] and compares our results to
several other recent algorithms. It can be seen that our results on
these examples are comparable in quality to those of [15], even
though we use a far simpler algorithm. Global Poisson matting
cannot extract a good matte from sparse "scribbles" although its
performance with a trimap is quite good. The random walk matting
algorithm [6] also minimizes a Laplacian but uses an affinity
function with a global scaling parameter and hence has a particular
difficulty with the peacock image.
[0127] To obtain a more quantitative comparison between the
algorithms, we performed an experiment with synthetic composites
for which we have the ground truth alpha matte. We randomly
extracted 2000 subimages from the image shown in FIG. 12(h). We
used each subimage as a background and composited over it a uniform
foreground image using two different alpha mattes: the first matte
is the computer simulated smoke most of which is partially
transparent; the other matte is a part of a circle, mostly opaque
with a feathered boundary. The mattes are shown in FIG. 12(c).
Consequently, we obtained 4000 composite images, two of which are
shown in FIG. 12(a).) On this set of images we compared the
performance of four matting algorithms--Wang and Cohen, global
Poisson matting, random walk matting, and our own (using 3.times.3
windows with no pyramid). All algorithms were provided a trimap as
input. Examples of the trimaps and the results produced by the
different methods are shown in FIGS. 12(a, d-g). For each
algorithm, we measured the summed absolute error between the
extracted matte and the ground truth. FIGS. 12(i, j) plot the
average error of the four algorithms as a function of the
smoothness of the background (specifically we measured the average
gradient strength, binned into 10 bins). The errors in the smoke
matte are plotted in FIG. 12(i), while errors in the circle matte
are plotted in FIG. 12(j). When the background is smooth, all
algorithms perform well with both mattes. When the background
contains strong gradients, global Poisson matting performs poorly
(recall that it assumes that background and foreground gradients
are negligible). Of the remaining algorithms, our algorithm
consistently produced the most accurate results.
[0128] FIG. 13 shows an example (from [15]), where Wang and Cohen's
method fails to extract a good matte from scribbles due to color
ambiguity between the foreground and the background. The same
method, however, is able to produce an acceptable matte when
supplied with a trimap. Our method produces a cleaner, though still
imperfect, matte from the same set of scribbles, and adding a small
number of additional scribbles results in a better matte.
[0129] FIG. 14 shows another example (a closeup of the Koala image
from [14]), where there is an ambiguity between foreground and
background colors. In this case the matte produced by our method is
clearly better than the one produced by the Wang-Cohen method. To
better understand why this is the case, we show an RGB histogram of
representative pixels from the F and B scribbles. Some pixels in
the background fit the foreground color model much better than the
background one (one such pixel is marked red in FIG. 14(b) and
indicated by an arrow in FIG. 14(d)). As a result such pixels are
classified as foreground with a high degree of certainty in the
first stage. Once this error has been made it only reinforces
further erroneous decisions in the vicinity of that pixel,
resulting in a white clump in the alpha matte.
[0130] Since our method does not make use of global color models
for F and B it can handle ambiguous situations such as that in FIG.
14. However, there are also cases where our method fails to produce
an accurate matte for the very same reason. FIG. 15 shows an
actress in front of a background with two colors. Even though the
black B scribbles cover both colors the generated matte includes
parts of the background (between the hair and the shoulder on the
left). In such cases, the user would have to add another B scribble
in that area.
[0131] As mentioned above, our framework applies for video matting
as well. In video, window neighborhoods are defined in the 3D
volume, and therefore constraints scribbled on selected frames are
propagated in time to the rest of the sequence, in the same way
that they are propagated in space. We have tested our algorithm on
the `Amira sequence` from Chuang et al. 2002 [4]. We placed
scribbles on 7 out of 76 frames. For comparison, in [4], a trimap
was drawn over 11 out of 90 frames, and the interpolated trimap was
refined in 2 additional frames. The sequence was captured from the
avi file available on the video matting web page and was
downsampled to avoid compression artifacts.
[0132] The scribble interface can also be useful in matting
problems where the background is known (or estimated separately).
This is particularly useful in the processing of video, where often
it is possible to obtain a good estimate of the background but
obtaining a good matte from this background model is challenging.
To illustrate this use, we ran a modified version of our algorithm
on the `Kim` sequence from [4]. We first used the background model
computed by [4] and automatically used this model to generate
constraints by thresholding the difference between any pixel and
the background model. This is similar to the way in which [1]
automatically generated a trimap. When we used our algorithm to
propagate these constraints we obtained a matting that was fine in
most places, but had noticeable artifacts in the mixed pixels
around the top of the head. We then added additional red scribbles
specifying that the foreground color is blond for the top 20 pixels
in all frames. Using this additional user input, we were able to
fix these artifacts. Results are shown in the supplementary
material. For comparison, in addition to using the background
model, [4] specified a trimap at about 11 key frames out of 101,
and refined the interpolated trimap in another 4 frames.
[0133] FIG. 16 is a flow diagram showing the principal operations
carried out by a method according to an embodiment of the invention
for matting a foreground object F having an opacity $\alpha$
constrained by associating a characteristic with selected pixels in
an image having a background B. Weights are determined for all
edges of neighboring pixels in the image and used to build a
Laplacian matrix L. The matte is found by solving
$\alpha = \arg\min \alpha^T L \alpha$ s.t.
$\alpha_i = s_i, \forall i \in S$, where S
is the group of selected pixels, and $s_i$ is the value indicated
by the associated characteristic. The equation
$I_i = \alpha_i F_i + (1 - \alpha_i) B_i$ is solved for
F and B with additional smoothness assumptions on F and B; after
which the foreground object F may be composited on a selected
background B'. The selected background B' may be the original
background B or may be a different background, thus allowing
foreground features to be extracted from the original image and
copied to a different background.
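Tying the earlier sketches together, the flow of FIG. 16 might read as follows; all helper functions are the hypothetical sketches introduced above, not a definitive implementation:

```python
import numpy as np

def matte_and_composite(I, scribble_mask, scribble_vals, B_new):
    # weights -> Laplacian -> constrained solve -> F -> composite on B'
    H, W, _ = I.shape
    L = color_matting_laplacian(I)
    alpha = solve_matte(L, scribble_mask.ravel(),
                        scribble_vals.ravel()).reshape(H, W)
    F = np.stack([reconstruct_FB_channel(I[..., c], alpha)[0]
                  for c in range(3)], axis=-1)
    return composite(F, B_new, alpha), alpha
```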
[0134] FIG. 17 is a block diagram showing functionality of a system
10 according to an embodiment of the invention for matting a
foreground object F having an opacity $\alpha$ constrained by
associating a characteristic with selected pixels in an image
having a background B. The system 10 includes a memory 11, and a
weighting unit 12 coupled to the memory for determining weights for
all edges of neighboring pixels in the image. A Laplacian matrix
unit 13 is coupled to the weighting unit 12 for building a Laplacian
matrix L with the weights; and a computation unit 14 is coupled to
the Laplacian matrix unit 13 for solving
$\alpha = \arg\min \alpha^T L \alpha$ s.t.
$\alpha_i = s_i, \forall i \in S$, where S is the group of selected
pixels, and $s_i$ is the value indicated by the characteristic. A
composition unit 15 is coupled to the computation unit 14 for
compositing the foreground object F as computed by the computation
unit 14 on a selected background B' for display by a display unit
16 coupled to the composition unit 15. The selected background B'
may be the original background B or may be a different background,
thus allowing foreground features to be extracted from the original
image and copied to a different background.
Discussion
[0135] Matting and compositing are tasks of central importance in
image and video editing and pose a significant challenge for
computer vision. While this process by definition requires user
interaction, the performance of most existing algorithms
deteriorates rapidly as the amount of user input decreases. The
invention introduces a cost function based on the assumption that
foreground and background colors vary smoothly and shows how to
analytically eliminate the foreground and background colors to
obtain a quadratic cost function in alpha. The resulting cost
function is similar to cost functions obtained in spectral methods
for image segmentation, but with a novel affinity function that is
derived from the formulation of the matting problem. The global
minimum of our cost can be found efficiently by solving a sparse
set of linear equations. Our experiments on real and synthetic
images show that our algorithm clearly outperforms other algorithms
that use quadratic cost functions which are not derived from the
matting equations. Our experiments also demonstrate that our
results are competitive with those obtained by much more
complicated, nonlinear, cost functions. However, compared to
previous nonlinear approaches, we can obtain solutions in a few
seconds, and we can analytically prove properties of our solution
and provide guidance to the user by analyzing the eigenvectors of
our operator.
[0136] While our approach assumes smoothness in foreground and
background colors, it does not assume a global color distribution
for each segment. Our experiments have demonstrated that our local
smoothness assumption often holds for natural images. Nevertheless,
it would be interesting to extend our formulation to include
additional assumptions on the two segments (e.g., global models,
local texture models, etc.). The goal is to incorporate more
sophisticated models of foreground and background but still obtain
high quality results using simple numerical linear algebra.
[0137] It will also be understood that the system according to the
invention may be a suitably programmed computer. Likewise, the
invention contemplates a computer program being readable by a
computer for executing the method of the invention. The invention
further contemplates a machine-readable memory tangibly embodying a
program of instructions executable by the machine for executing the
method of the invention.
* * * * *