Establishment method of 3D Saliency Model Based on Prior Knowledge and Depth Weight

Duan; Lijuan; et al.

Patent Application Summary

U.S. patent application number 15/406504 was filed with the patent office on 2017-01-13 and published on 2018-06-28 as an establishment method of a 3D saliency model based on prior knowledge and depth weight. The applicant listed for this patent is Beijing University of Technology. The invention is credited to Lijuan Duan, Fangfang Liang, Wei Ma, Jun Miao, and Yuanhua Qiao.

Publication Number: 20180182118
Application Number: 15/406504
Family ID: 58832259
Publication Date: 2018-06-28

United States Patent Application 20180182118
Kind Code A1
Duan; Lijuan; et al. June 28, 2018

Establishment method of 3D Saliency Model Based on Prior Knowledge and Depth Weight

Abstract

A method of establishing a 3D saliency model based on 3D contrast and depth weight includes: dividing the left view of a 3D image pair into multiple regions by a super-pixel segmentation method, synthesizing a set of features with color and disparity information to describe each region, and using color compactness as the weight of the disparity component of the region feature; calculating the feature contrast of each region against its surrounding regions; obtaining a background prior from the depth of the disparity map and improving the depth saliency by combining the background prior with the color compactness; taking the depth saliency and the Gaussian distance between regions as the weight of the feature contrast and obtaining an initial 3D saliency from the weighted feature contrast; and enhancing the initial 3D saliency with 2D saliency and a central bias weight.


Inventors: Duan; Lijuan; (Beijing, CN) ; Liang; Fangfang; (Yichang, CN) ; Qiao; Yuanhua; (Beijing, CN) ; Ma; Wei; (Beijing, CN) ; Miao; Jun; (Beijing, CN)
Applicant:
Name City State Country Type

Beijing University of Technology

Beijing

CN
Family ID: 58832259
Appl. No.: 15/406504
Filed: January 13, 2017

Current U.S. Class: 1/1
Current CPC Class: G06K 9/00201 20130101; G06K 9/4676 20130101; G06K 9/4652 20130101
International Class: G06T 7/593 20060101 G06T007/593; G06T 17/00 20060101 G06T017/00; G06K 9/52 20060101 G06K009/52; G06K 9/46 20060101 G06K009/46; G06T 15/20 20060101 G06T015/20; G06T 7/194 20060101 G06T007/194

Foreign Application Data

Date Code Application Number
Dec 28, 2016 CN 2016112362977

Claims



1. A method of establishing a 3D saliency model based on 3D contrast and depth weight, including the following steps of: step one: extracting 3D features: dividing the left view of a 3D image pair into N regions by a super-pixel segmentation method, labeled $R_i$, where i takes values 1 to N and N is an integer; defining a region feature $f=[l,a,b,d]$ for region $R_i$, wherein $l=\sum_{i=1}^{N_i} l_i/N_i$, $a=\sum_{i=1}^{N_i} a_i/N_i$, $b=\sum_{i=1}^{N_i} b_i/N_i$, $d=\sum_{i=1}^{N_i} d_i/N_i$, $N_i$ is the number of pixels in the region $R_i$, and $l_i$, $a_i$, $b_i$, $d_i$ are the l, a, b values and the disparity of the pixels in the region $R_i$, respectively; step two: calculating feature contrast: representing the feature contrast between regions by a matrix C, where $c_{ij}$ represents the norm distance between the regional features of the region $R_i$ and the region $R_j$, calculated as $c_{ij}=\|u_i f_i-u_j f_j\|_2$, wherein u is the weight of the region feature f, $u=[1,1,1,q]$, and the variable q represents the color compactness of the N regions in the left view; step three: designing the weight of the feature contrast: (1) obtaining a depth saliency map $s_s$ by a depth domain analysis method on the disparity map, then the depth saliency $s_d$ of the region $R_i$ is calculated by the formula $s_d(i)=s_s(i)e^{-kt_i}$; (2) calculating a background prior on the disparity map; (3) optimizing the depth saliency through the background prior, the specific process including the step of: for the region $R_i$, using the mean disparity $\bar{d}_i$ of the region $R_i$ on the disparity map to determine whether the region falls within the background range, the depth saliency being determined by the formula
$$s_d(i)=\begin{cases} s_d(i), & \bar{d}_i < thresh \\ 0, & \bar{d}_i \ge thresh \end{cases}$$
wherein the threshold thresh is the minimum disparity of the portion marked as background in the depth background $B_d$ on the disparity map; (4) designing the weight of the feature contrast: the weight of the feature contrast between the region $R_i$ and the region $R_j$ is represented by a variable $w_{i,j}$, as $w_{i,j}=\exp(-Dst(i,j)/\sigma^2)\,a(i)\,s_d(i)$, wherein $a(i)$ is the size of the region $R_i$ and $\exp(-Dst(i,j)/\sigma^2)$ represents the Gaussian distance between the region $R_i$ and the region $R_j$; step four: calculating initial 3D saliency: the saliency value of the region $R_i$ is $s^r(i)=e^{-kt_i}\sum_{j\ne i} w_{i,j}c_{i,j}$, and the calculation formula of the initial 3D saliency $s^p(i)$ of the region $R_i$ is $s^p(i)=\sum_{j=1}^{N_i}\exp\big(-(\alpha\|clr_i-clr_j\|^2+\beta\|p_i-p_j\|^2)\big)\,s^r(j)$, wherein $\alpha=0.33$ and $\beta=0.33$ are two parameters controlling the sensitivity to the color distance $(clr_i-clr_j)$ and the position distance $(p_i-p_j)$, respectively, and $N_i$ is the number of pixels in the region $R_i$; step five: enhancing the initial 3D saliency: the final 3D saliency $s(i)$ of the region $R_i$ is $s(i)=CBW(i)\cdot s^r_{pca}(i)\cdot s^p(i)$, wherein $s^r_{pca}(i)$ is the 2D saliency of the region $R_i$, $s^r_{pca}(i)=\sum_{p\in R_i}s_{pca}(p)/N_i$, $s_{pca}(p)$ is the saliency at the pixel level,
$$CBW(i)=\begin{cases} 0, & p_i\in B \\ \exp(-DstToCt(i)/(2\sigma_{xy}^2)), & \text{otherwise} \end{cases}$$
wherein $DstToCt(i)$ is the Euclidean distance from the pixel to the center coordinate, $B=B_b\cup B_d$, $\sigma_{xy}=\sqrt{H\cdot H+W\cdot W}/2$, H and W are the height and width of the left view, $B_d$ represents the depth background, and $B_b$ represents the boundary background.

2. The method according to claim 1, characterized in that: in said step two, $q=e^{-kt_i}$, k is a Gaussian scale factor, k=4, and $t_i$ is calculated as $t_i=\sum_{j=1}^{N}\|p_j-\mu_i\|^2\,dis_{ij}^{clr}$, wherein $dis_{ij}^{clr}$ is the color distance of the RGB means of the region $R_i$ and the region $R_j$, $dis_{ij}^{clr}=\exp\left(-\frac{1}{2\sigma_c^2}\|clr_i-clr_j\|^2\right)$, $p_j$ is the centroid coordinate of the region $R_j$, and $\mu_i$ is the color-weighted position of $clr_i$, $\mu_i=\sum_{j=1}^{N}dis_{ij}^{clr}\,p_j$.

3. The method according to claim 1, characterized in that: in said step (2), the specific process for calculating the background prior on the disparity map includes the steps of: (a) defining an initial background image: $B_d=0$; (b) initializing the furthest background: first, finding the coordinate of the largest disparity in the disparity map $I_d$, $P_{xy}=Pos(\max(I_d))$, then setting the initial value $O(P_{xy})=1$; (c) calculating background propagation: $B_d=Contour(O(P_{xy}))$, wherein the symbol Contour represents segmentation based on an active contour, pixels of the background portion in the depth background $B_d$ are denoted as 1, and pixels of the foreground portion are denoted as 0.

4. The method according to claim 2, characterized in that: in said step (2), the specific process for calculating the background prior on the disparity map includes the steps of: (a) defining an initial background image: $B_d=0$; (b) initializing the furthest background: first, finding the coordinate of the largest disparity in the disparity map $I_d$, $P_{xy}=Pos(\max(I_d))$, then setting the initial value $O(P_{xy})=1$; (c) calculating background propagation: $B_d=Contour(O(P_{xy}))$, wherein the symbol Contour represents segmentation based on an active contour, pixels of the background portion in the depth background image $B_d$ are denoted as 1, and pixels of the foreground portion are denoted as 0.
Description



[0001] This application claims priority to Chinese Patent Application Ser. No. CN2016112362977 filed 28 Dec. 2016.

TECHNICAL FIELD

[0002] The present invention relates to a field of visual saliency, in particular to a method of establishing a 3D saliency model based on 3D contrast and depth weight.

BACKGROUND

[0003] The selection of important information in a multi-object scene is an important function of the human visual system. Using computers to model this mechanism is the research direction of visual saliency, which also provides a basis for applications such as target segmentation and quality evaluation. In recent years, the study of 3D stereoscopic saliency has become of great significance because of the wide application of 3D display technology.

[0004] When people watch 3D movies, the brain gains depth knowledge and produces a three-dimensional impression through the binocular disparity created by stereo channel separation technology, which changes human visual observation behavior. Therefore, unlike the design of a 2D saliency model, a stereoscopic saliency model should also consider features of the depth channel (such as depth contrast) in addition to the common features of color, brightness, texture and orientation used in 2D saliency models. At present, depth images can be acquired either directly from a camera or by obtaining a disparity map (disparity and depth are inversely related) through a matching algorithm.

[0005] Human attention to targets of interest is influenced by prior knowledge, so prior knowledge can be used to supplement both 3D and 2D saliency models. Two kinds of prior knowledge are common. The first is central bias, i.e., the human visual preference for the center of an image. The second is the boundary background prior, in which the boundary pixels of an image are used as a background reference for the saliency model.

[0006] In summary, a method of establishing a 3D saliency model that is closer to human eye fixation is needed.

DESCRIPTION

[0007] In view of the drawbacks of the prior art, a purpose of the present invention is to provide a method of establishing a 3D saliency model based on 3D contrast and depth weight. The features come not only from 2D color information but also from depth channel information, and prior knowledge such as the background prior and color compactness is considered, which makes the 3D saliency model established by the present invention closer to the human fixation effect.

[0008] For achieving the above purpose, the present invention is realized through the following technical solution:

[0009] A method of establishing a 3D saliency model based on 3D contrast and depth weight includes the following steps:

[0010] Step one: extracting 3D feature:

[0011] Dividing the left view of the 3D image pair into N regions by a super-pixel segmentation method, labeled $R_i$, where i takes values 1 to N; defining a region feature $f=[l,a,b,d]$ for region $R_i$, wherein

$$l=\sum_{i=1}^{N_i} l_i/N_i,\quad a=\sum_{i=1}^{N_i} a_i/N_i,\quad b=\sum_{i=1}^{N_i} b_i/N_i,\quad d=\sum_{i=1}^{N_i} d_i/N_i,$$

$N_i$ is the number of pixels in the region $R_i$, and $l_i$, $a_i$, $b_i$, $d_i$ are the l, a, b values and the disparity of the pixels in the region $R_i$, respectively;

[0012] Step two: calculating feature contrast:

[0013] Representing the feature contrast between regions by a matrix C, where $c_{ij}$ represents the norm distance between the regional features of the region $R_i$ and the region $R_j$, calculated as $c_{ij}=\|u_i f_i-u_j f_j\|_2$, wherein u is the weight of the region feature f, $u=[1,1,1,q]$, and the variable q represents the color compactness of the N regions in the left view;

[0014] Step three: designing weight of feature contrast:

[0015] (1) obtaining a depth saliency map $s_s$ by a depth domain analysis method on the disparity map, then the depth saliency $s_d$ of the region $R_i$ is calculated by using the formula $s_d(i)=s_s(i)e^{-kt_i}$;

[0016] (2) calculating background prior on disparity map;

[0017] (3) optimizing the depth saliency through the background prior, the specific process including the step of:

[0018] For the region $R_i$, using the mean disparity $\bar{d}_i$ of the region $R_i$ on the disparity map to determine whether the region falls within the background range, the depth saliency being determined by using the formula:

$$s_d(i)=\begin{cases} s_d(i), & \bar{d}_i < thresh \\ 0, & \bar{d}_i \ge thresh \end{cases}$$

[0019] wherein the threshold thresh is the minimum disparity of the portion marked as background in the depth background $B_d$ on the disparity map;

[0020] (4) designing the weight of the feature contrast: the weight of the feature contrast between the region $R_i$ and the region $R_j$ is represented by a variable $w_{i,j}$, as follows:

$$w_{i,j}=\exp(-Dst(i,j)/\sigma^2)\,a(i)\,s_d(i),$$

[0021] wherein $a(i)$ is the size of the region $R_i$ and $\exp(-Dst(i,j)/\sigma^2)$ represents the Gaussian distance between the region $R_i$ and the region $R_j$;

[0022] Step four: calculating initial 3D saliency:

[0023] The saliency value of the region $R_i$ is $s^r(i)=e^{-kt_i}\sum_{j\ne i} w_{i,j}c_{i,j}$, and the calculation formula of the initial 3D saliency $s^p(i)$ of the region $R_i$ is

$$s^p(i)=\sum_{j=1}^{N_i}\exp\big(-(\alpha\|clr_i-clr_j\|^2+\beta\|p_i-p_j\|^2)\big)\,s^r(j),$$

[0024] wherein $\alpha=0.33$ and $\beta=0.33$ are two parameters controlling the sensitivity to the color distance $(clr_i-clr_j)$ and the position distance $(p_i-p_j)$, respectively, and $N_i$ is the number of pixels in the region $R_i$.

[0025] Step five: enhancing initial 3D saliency:

[0026] The final 3D saliency $s(i)$ of the region $R_i$ is $s(i)=CBW(i)\cdot s^r_{pca}(i)\cdot s^p(i)$, wherein $s^r_{pca}(i)$ is the 2D saliency of the region $R_i$, $s^r_{pca}(i)=\sum_{p\in R_i}s_{pca}(p)/N_i$, $s_{pca}(p)$ is the saliency at the pixel level, and

$$CBW(i)=\begin{cases} 0, & p_i\in B \\ \exp(-DstToCt(i)/(2\sigma_{xy}^2)), & \text{otherwise} \end{cases}$$

[0027] wherein $DstToCt(i)$ is the Euclidean distance from the pixel to the center coordinate, $B=B_b\cup B_d$, $\sigma_{xy}=\sqrt{H\cdot H+W\cdot W}/2$, H and W are the height and width of the left view, $B_d$ represents the depth background, and $B_b$ represents the boundary background.

[0028] Preferably, in said step two, $q=e^{-kt_i}$, k is a Gaussian scale factor, k=4, and $t_i$ is calculated as $t_i=\sum_{j=1}^{N}\|p_j-\mu_i\|^2\,dis_{ij}^{clr}$, wherein $dis_{ij}^{clr}$ is the color distance of the RGB means of the region $R_i$ and the region $R_j$,

$$dis_{ij}^{clr}=\exp\left(-\frac{1}{2\sigma_c^2}\|clr_i-clr_j\|^2\right),$$

$p_j$ is the centroid coordinate of the region $R_j$, and $\mu_i$ is the color-weighted position of $clr_i$, $\mu_i=\sum_{j=1}^{N}dis_{ij}^{clr}\,p_j$.

[0029] Preferably, in said step (2), the specific process for calculating the background prior on the disparity map includes the steps of:

[0030] (a) defining an initial background image: $B_d=0$;

[0031] (b) initializing the furthest background: first, finding the coordinate of the largest disparity in the disparity map $I_d$, $P_{xy}=Pos(\max(I_d))$; then setting the initial value $O(P_{xy})=1$;

[0032] (c) calculating background propagation: $B_d=Contour(O(P_{xy}))$, wherein the symbol Contour represents segmentation based on an active contour; pixels of the background portion in the depth background $B_d$ are denoted as 1, and pixels of the foreground portion are denoted as 0.

[0033] Preferably, in said step (2), the specific process for calculating the background prior on the disparity map includes the steps of:

[0034] (a) defining an initial background image: $B_d=0$;

[0035] (b) initializing the furthest background: first, finding the coordinate of the largest disparity in the disparity map $I_d$, $P_{xy}=Pos(\max(I_d))$; then setting the initial value $O(P_{xy})=1$;

[0036] (c) calculating background propagation: $B_d=Contour(O(P_{xy}))$, wherein the symbol Contour represents segmentation based on an active contour; pixels of the background portion in the depth background image $B_d$ are denoted as 1, and pixels of the foreground portion are denoted as 0.

[0037] The advantages of the present invention are as follows.

[0038] 1. In the feature extraction aspect of the present invention, regions where the color contrast and the disparity contrast are strong obtain high saliency values;

[0039] 2. The invention utilizes color compactness (i.e., the color distribution in the 2D image) to calculate the feature contrast, thereby increasing the saliency value;

[0040] 3. The present invention not only takes into account the boundary background prior, but also obtains a background prior from the 3D disparity map and utilizes this background prior to optimize the depth saliency, so as to remove background interference in the 3D saliency model;

[0041] 4. In the invention, the depth saliency and the spatial Gaussian distance between regions are used as the weight of the feature contrast, and the initial 3D saliency is enhanced by the structural dissimilarity in the 2D image, thereby enhancing salient areas in depth and reducing the saliency value of background parts with low correlation in the 3D image.

BRIEF DESCRIPTION OF THE DRAWINGS

[0042] FIG. 1 is a flow diagram of the inventive method of establishing a 3D saliency model based on 3D contrast and depth weight;

[0043] FIG. 2a is a display diagram of ROC (receiver operating characteristic) curve performance, wherein the abscissa is the false positive rate (FPR) and the ordinate is the true positive rate (TPR);

[0044] FIG. 2b is the PR (precision-recall) curve, wherein the abscissa is the recall rate and the ordinate is the predicted precision; the label DWRC (depth-weighted region contrast) is the abbreviation of the present inventive method in FIG. 2a and FIG. 2b;

[0045] FIG. 3a is a left view of the 3D image pair in one embodiment of the present invention;

[0046] FIG. 3b is a right view of the 3D image pair in one embodiment of the present invention;

[0047] FIG. 3c is a disparity map in one embodiment of the present invention;

[0048] FIG. 3d is an initial 3D saliency map in one embodiment of the present invention;

[0049] FIG. 3e is a target graph (i.e., final 3D saliency map) in one embodiment of the present invention.

[0050] FIG. 4a is a left view of the 3D image pair in another embodiment of the present invention.

[0051] FIG. 4b is a right view of the 3D image pair in another embodiment of the present invention.

[0052] FIG. 4c is a disparity map in another embodiment of the present invention.

[0053] FIG. 4d is an initial 3D saliency map in another embodiment of the present invention.

[0054] FIG. 4e is a target graph (i.e., final 3D saliency map) in another embodiment of the present invention.

DETAILED DESCRIPTION

[0055] The present invention will now be described in further detail with reference to the accompanying drawings as required:

[0056] As shown in FIG. 1, the present invention provides a method of establishing a 3D saliency model based on 3D contrast and depth weight, including: dividing the left view of a 3D image pair into multiple regions by a super-pixel segmentation method, synthesizing a set of features with color and disparity information to describe each region, and using color compactness as the weight of the disparity component of the region feature; calculating the feature contrast of each region against its surrounding regions; obtaining a background prior from the depth of the disparity map and improving the depth saliency by combining the background prior with the color compactness; taking the depth saliency and the Gaussian distance between regions as the weight of the feature contrast and obtaining an initial 3D saliency from the weighted feature contrast; and enhancing the initial 3D saliency with 2D saliency and a central bias weight. The 3D saliency model established by the invention is closer to the human gazing effect.

[0057] Specifically, the method of establishing a 3D saliency model of the present invention includes the steps of:

[0058] Step one: extracting 3D feature:

[0059] Dividing the left view of the 3D image pair into N regions by a super-pixel segmentation method (SLIC), labeled $R_i$, where i takes values 1 to N; defining a region feature using CIELab color and disparity, namely, defining a region feature $f=[l,a,b,d]$ for $R_i$ (the features of region $R_i$ are expressed as the L*a*b* means of the color image and the disparity mean in this region), wherein

$$l=\sum_{i=1}^{N_i} l_i/N_i,\quad a=\sum_{i=1}^{N_i} a_i/N_i,\quad b=\sum_{i=1}^{N_i} b_i/N_i,\quad d=\sum_{i=1}^{N_i} d_i/N_i,$$

$N_i$ is the number of pixels in the region $R_i$, and $l_i$, $a_i$, $b_i$, $d_i$ are the l, a, b values and the disparity of the pixels in the region $R_i$, respectively;
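
For illustration only (not part of the original disclosure), the following Python sketch shows one way step one could be realized, assuming the left view and an aligned disparity map are given as NumPy arrays; scikit-image's SLIC stands in for the super-pixel segmentation method named above, and the region count n_regions is an arbitrary choice.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def extract_region_features(left_rgb, disparity, n_regions=300):
    """Step one (sketch): divide the left view into N super-pixel regions
    and build the region feature f = [l, a, b, d] for each region R_i."""
    labels = slic(left_rgb, n_segments=n_regions, compactness=10, start_label=0)
    lab = rgb2lab(left_rgb)

    n = labels.max() + 1
    features = np.zeros((n, 4), dtype=np.float64)
    for i in range(n):
        mask = labels == i
        features[i, :3] = lab[mask].mean(axis=0)   # mean L*, a*, b* of R_i
        features[i, 3] = disparity[mask].mean()    # mean disparity of R_i
    return labels, features
```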

[0060] Step two: calculating feature contrast:

[0061] Representing the feature contrast between regions by a matrix C, where $c_{ij}$ represents the norm distance between the regional features of the region $R_i$ and the region $R_j$, calculated as:

$$c_{ij}=\|u_i f_i-u_j f_j\|_2,$$

[0062] wherein u is the weight of the region feature f, $u=[1,1,1,q]$.

[0063] The variable q represents the color compactness of the N regions in the left view and is used to indicate the distribution of the colors of each region in the left view, $q=e^{-kt_i}$, wherein k is a Gaussian scale factor, k=4, and $t_i$ is calculated as $t_i=\sum_{j=1}^{N}\|p_j-\mu_i\|^2\,dis_{ij}^{clr}$,

[0064] wherein $dis_{ij}^{clr}$ is the color distance of the RGB means of the region $R_i$ and the region $R_j$,

$$dis_{ij}^{clr}=\exp\left(-\frac{1}{2\sigma_c^2}\|clr_i-clr_j\|^2\right),$$

$p_j$ is the centroid coordinate of the region $R_j$, and $\mu_i$ is the color-weighted position of $clr_i$, $\mu_i=\sum_{j=1}^{N}dis_{ij}^{clr}\,p_j$.
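
A minimal sketch of the color compactness term defined above, assuming per-region RGB means and centroid coordinates are already available; the bandwidth sigma_c and the normalization of the Gaussian weights are assumptions not fixed by the text.

```python
import numpy as np

def color_compactness(rgb_means, centroids, k=4.0, sigma_c=20.0):
    """Color compactness q_i = exp(-k * t_i) for each region (step two helper)."""
    # Gaussian color similarity dis_ij between the region RGB means
    diff = rgb_means[:, None, :] - rgb_means[None, :, :]
    dis = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma_c ** 2))

    # color-weighted positions mu_i; the weights are normalized here so the
    # result is a proper weighted mean (an assumption; the text omits this)
    w = dis / dis.sum(axis=1, keepdims=True)
    mu = w @ centroids

    # spatial spread t_i of each region's color, then q_i = exp(-k * t_i)
    sq_dist = np.sum((centroids[None, :, :] - mu[:, None, :]) ** 2, axis=2)
    t = np.sum(w * sq_dist, axis=1)
    return np.exp(-k * t)
```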

[0065] Step three: designing weight of feature contrast:

[0066] After calculating the feature contrast C of each region, the weight of the feature contrast is represented by a matrix W, where $w_{ij}$ represents the weight corresponding to $c_{ij}$.

[0067] The weight of the feature contrast takes into account the depth saliency, the region size $a(i)$, and the Gaussian distance $\exp(-Dst(i,j)/\sigma^2)$ between regions. The calculation process of the depth saliency $s_d$ is: obtaining a result $s_s$ through depth domain analysis on the disparity map, and then improving it using the background prior (including the depth background $B_d$ and the boundary background $B_b$) and the color compactness (the factor $e^{-kt_i}$). The detailed process is as follows:

[0068] (1) calculating depth saliency map

[0069] Obtaining the depth saliency map $s_s$ by a depth domain analysis method on the disparity map and obtaining $s_d$ through color compactness enhancement; the depth saliency $s_d$ of the region $R_i$ is calculated as: $s_d(i)=s_s(i)e^{-kt_i}$;

[0070] (2) calculating the background prior on the disparity map:

[0071] There are two stages in extracting the background prior on the disparity map: background initialization and background propagation. The specific steps include:

[0072] (a) defining an initial background image: $B_d=0$;

[0073] (b) initializing the furthest background: first, finding the coordinate of the largest disparity in the disparity map $I_d$, $P_{xy}=Pos(\max(I_d))$; then setting the initial value $O(P_{xy})=1$;

[0074] (c) calculating background propagation: $B_d=Contour(O(P_{xy}))$, wherein the symbol Contour represents segmentation based on an active contour; pixels of the background portion in the depth background $B_d$ are denoted as 1, and pixels of the foreground portion are denoted as 0.
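
The sketch below illustrates the two-stage background prior under the convention stated above (seed at the largest disparity, background marked as 1). A flood fill over similar disparities is substituted for the active-contour segmentation Contour named in the text, and the tolerance parameter is an assumed value.

```python
import numpy as np
from skimage.segmentation import flood

def depth_background_prior(disparity, tolerance=2.0):
    """Background prior B_d on the disparity map (step three, (2), sketch).
    Background pixels are marked True (1), foreground pixels False (0)."""
    # (a)-(b): seed at the coordinate of the largest disparity in I_d
    seed = tuple(int(v) for v in np.unravel_index(np.argmax(disparity),
                                                  disparity.shape))
    # (c): propagate the background label from the seed; a flood fill over
    # similar disparities stands in for the active-contour step
    b_d = flood(disparity.astype(np.float64), seed, tolerance=tolerance)
    return b_d
```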

[0075] (3) optimizing the depth saliency through the background prior, the specific process including the steps of:

[0076] For the region $R_i$, using the mean disparity $\bar{d}_i$ of the region $R_i$ on the disparity map to determine whether the region falls within the background range; the depth saliency is determined by using the formula:

$$s_d(i)=\begin{cases} s_d(i), & \bar{d}_i < thresh \\ 0, & \bar{d}_i \ge thresh \end{cases}$$

[0077] wherein the threshold thresh is the minimum disparity of the portion marked as background in the depth background $B_d$ on the disparity map, namely, $thresh=\min(I_d(q))$, $q\in\{B_d>0\}$.

[0078] The boundary background is $B_b$. The background area in the boundary background is represented by 1, and the other areas are represented by 0. If the region $R_i$ is at the position of the boundary background, the saliency $s_d(i)$ is set to 0; otherwise it is not changed.
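
A short sketch of this optimization step, assuming per-region mean disparities, the depth background mask B_d, and a per-region boundary-background flag have been computed beforehand.

```python
import numpy as np

def optimize_depth_saliency(s_d, region_mean_disp, region_in_boundary_bg,
                            b_d, disparity):
    """Suppress the depth saliency of background regions (step three, (3))."""
    # thresh: minimum disparity of the portion marked as background in B_d
    thresh = disparity[b_d].min()

    # zero out regions whose mean disparity falls in the background range
    s_out = np.where(region_mean_disp < thresh, s_d, 0.0)
    # regions lying on the boundary background B_b are also set to zero
    s_out = np.where(region_in_boundary_bg, 0.0, s_out)
    return s_out
```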

[0079] (4) designing weight of feature contrast:

[0080] The weight of the feature contrast between the region $R_i$ and the region $R_j$ is represented by a variable $w_{i,j}$, as follows:

$$w_{i,j}=\exp(-Dst(i,j)/\sigma^2)\,a(i)\,s_d(i),$$

[0081] wherein $a(i)$ is the size of the region $R_i$ and $\exp(-Dst(i,j)/\sigma^2)$ represents the Gaussian distance between the region $R_i$ and the region $R_j$.
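
A compact sketch of the weight computation, assuming the region centroids are normalized to [0, 1]; the bandwidth sigma is an assumed value, since the text does not specify it.

```python
import numpy as np

def contrast_weights(centroids, region_sizes, s_d, sigma=0.25):
    """Weight of the feature contrast, w_ij = exp(-Dst(i,j)/sigma^2) a(i) s_d(i)."""
    # squared Euclidean distance Dst(i, j) between region centroids
    diff = centroids[:, None, :] - centroids[None, :, :]
    dst = np.sum(diff ** 2, axis=2)
    gauss = np.exp(-dst / sigma ** 2)
    # each row i is scaled by a(i) * s_d(i)
    return gauss * (region_sizes * s_d)[:, None]
```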

[0082] Step four: calculating initial 3D saliency:

[0083] After completing the calculation of the feature contrast $c_{i,j}$ and the weight $w_{i,j}$ of the region $R_i$, the saliency value of the region $R_i$ can be calculated by the following formula:

$$s^r(i)=e^{-kt_i}\sum_{j\ne i} w_{i,j}c_{i,j},$$

[0084] In order to eliminate the effect of super-pixel segmentation errors, the saliency (i.e., the initial 3D saliency) of the super-pixel of each region is obtained by a linear combination of the saliency of its surrounding regions. The calculation formula of the saliency of the super-pixel of the region $R_i$ is:

$$s^p(i)=\sum_{j=1}^{N_i}\exp\big(-(\alpha\|clr_i-clr_j\|^2+\beta\|p_i-p_j\|^2)\big)\,s^r(j),$$

[0085] wherein $\alpha=0.33$ and $\beta=0.33$ are two parameters controlling the sensitivity to the color distance $\|clr_i-clr_j\|$ and the position distance $\|p_i-p_j\|$, respectively, and $N_i$ is the number of pixels in the region $R_i$.
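
The following sketch combines the two formulas of step four, assuming the contrast matrix C, the weight matrix W, and the color compactness vector q come from the previous steps, with region colors and centroids scaled to comparable ranges.

```python
import numpy as np

def initial_3d_saliency(C, W, q, rgb_means, centroids, alpha=0.33, beta=0.33):
    """Initial 3D saliency (step four, sketch): s^r then the smoothed s^p."""
    n = C.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    # s^r(i) = e^{-k t_i} * sum_{j != i} w_ij c_ij, with q_i = e^{-k t_i}
    s_r = q * np.sum(np.where(off_diag, W * C, 0.0), axis=1)

    # linear combination over regions weighted by color/position affinity
    clr_d = np.sum((rgb_means[:, None, :] - rgb_means[None, :, :]) ** 2, axis=2)
    pos_d = np.sum((centroids[:, None, :] - centroids[None, :, :]) ** 2, axis=2)
    affinity = np.exp(-(alpha * clr_d + beta * pos_d))
    s_p = affinity @ s_r
    return s_r, s_p
```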

[0086] Step five: enhancing the initial 3D saliency:

[0087] After calculating the initial 3D saliency $s^p(i)$, enhancement is performed through the 2D saliency and the central bias weight. The final 3D saliency of the super-pixel of the region $R_i$ is:

$$s(i)=CBW(i)\cdot s^r_{pca}(i)\cdot s^p(i),$$

[0088] wherein $s^r_{pca}(i)$ is the 2D saliency of the region $R_i$, $s^r_{pca}(i)=\sum_{p\in R_i}s_{pca}(p)/N_i$, and $s_{pca}(p)$ is the saliency at the pixel level. $CBW(i)$ (the central bias weight) is a Gaussian function modified with the background prior, and is calculated by the following formula:

$$CBW(i)=\begin{cases} 0, & p_i\in B \\ \exp(-DstToCt(i)/(2\sigma_{xy}^2)), & \text{otherwise} \end{cases}$$

[0089] wherein $DstToCt(i)$ is the Euclidean distance from the pixel to the center coordinate, $B=B_b\cup B_d$, $\sigma_{xy}=\sqrt{H\cdot H+W\cdot W}/2$, H and W are the height and width of the left view, $B_d$ represents the depth background, and $B_b$ represents the boundary background.
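
A minimal sketch of step five, assuming the per-region 2D saliency s^r_pca and a flag marking regions that fall inside B = B_b ∪ B_d have already been computed; DstToCt is taken literally as the (unsquared) Euclidean distance, as in the formula above.

```python
import numpy as np

def final_3d_saliency(s_p, s2d_region, centroids_px, region_in_bg, H, W):
    """Final 3D saliency (step five, sketch): s(i) = CBW(i) * s^r_pca(i) * s^p(i)."""
    sigma_xy = np.sqrt(H * H + W * W) / 2.0
    center = np.array([H / 2.0, W / 2.0])

    # central bias weight: zero for regions in B = B_b U B_d, Gaussian otherwise
    dst_to_ct = np.linalg.norm(centroids_px - center, axis=1)
    cbw = np.exp(-dst_to_ct / (2.0 * sigma_xy ** 2))
    cbw = np.where(region_in_bg, 0.0, cbw)

    return cbw * s2d_region * s_p
```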

[0090] Referring to FIG. 2a and FIG. 2b, the curve in FIG. 2a passes near the upper-left corner, and the AUC (area under the ROC curve) value calculated from the result shown in FIG. 2a is 0.89; in FIG. 2b, increasing the recall rate does not cause a sharp decrease in precision, and $F_{(\beta\times0.3)}=0.61$ as calculated from FIG. 2b. That is, the present invention can obtain a 3D saliency model close to human eye gaze.

[0091] Referring to FIG. 3a to FIG. 3e and FIG. 4a to FIG. 4e, in both embodiments the method of establishing a 3D saliency model according to the present invention is used to obtain a 3D saliency model close to human eye gaze. As can be seen from FIG. 3e and FIG. 4e, regions with strong color contrast and disparity contrast have high saliency values, background interference is eliminated, and the saliency of the target is improved.

[0092] In the present inventive method, features are taken from the color image and the disparity map, and the feature contrast is calculated by using color compactness. In addition to the conventional boundary background prior, a background prior extracted from the disparity map according to the distance from the object to the observer, together with the object compactness in the color image, is used as a supplement to the depth saliency, and the depth saliency of the disparity map is taken as the weight of the feature contrast to obtain the initial 3D saliency. Then, the initial 3D saliency is enhanced by using the 2D saliency and the central bias weight. Because the features come not only from the color information of the 2D image but also contain information of the depth channel, in combination with prior knowledge such as the background prior and color compactness, the 3D saliency model of the present invention is closer to the human gazing effect.

[0093] Although the embodiments of the present invention have been disclosed above, they are not limited to the applications mentioned in the specification and embodiments, and can be applied in various fields suitable for the present invention. For a person of ordinary skill in the field, various changed models, formulas and parameters may easily be achieved without creative work according to the teaching of the present invention; changed, modified and replaced embodiments that do not depart from the general concept defined by the claims and their equivalents are still included in the present invention. The present invention is not limited to the particular details and illustrations shown and described herein.

* * * * *

