Image processing method, image processing apparatus, and electronic camera

Saito; Ikuya

Patent Application Summary

U.S. patent application number 13/067772 was filed on June 24, 2011 and published by the patent office on 2011-10-20 as publication number 20110254971 for image processing method, image processing apparatus, and electronic camera. This patent application is currently assigned to NIKON CORPORATION. The invention is credited to Ikuya Saito.

Publication Number: 20110254971
Application Number: 13/067772
Family ID: 39706720
Publication Date: 2011-10-20

United States Patent Application 20110254971
Kind Code A1
Saito; Ikuya October 20, 2011

Image processing method, image processing apparatus, and electronic camera

Abstract

An image processing method of the present invention includes a detection step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, the characteristic area having an image significantly different from the other shot images, and a combining step extracting a partial image located in the characteristic area from each of the three or more shot images and combining these partial images into one image.


Inventors: Saito; Ikuya; (Kawasaki-shi, JP)
Assignee: NIKON CORPORATION
TOKYO
JP

Family ID: 39706720
Appl. No.: 13/067772
Filed: June 24, 2011

Related U.S. Patent Documents

Application Number        Filing Date
12/068,431 (parent)       Feb 6, 2008
13/067,772 (present)      Jun 24, 2011

Current U.S. Class: 348/222.1 ; 348/E5.024; 382/284
Current CPC Class: G06T 5/50 20130101; H04N 5/272 20130101
Class at Publication: 348/222.1 ; 382/284; 348/E05.024
International Class: H04N 5/225 20060101 H04N005/225; G06K 9/36 20060101 G06K009/36

Foreign Application Data

Date Code Application Number
Feb 15, 2007 JP 2007-034912

Claims



1. An image processing method, comprising: a detecting step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, said characteristic area having an image significantly different from the other shot images; and a combining step extracting a partial image located in said characteristic area from each of said three or more shot images and combining these partial images into one image.

2. The image processing method according to claim 1, wherein said detecting step assumes, in each of said three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as said characteristic area.

3. The image processing method according to claim 1, wherein: said detecting step generates a distribution map of said characteristic area detected from each of said three or more shot images; and said combining step performs said extracting according to said distribution map.

4. The image processing method according to claim 3, wherein said detecting step performs filter processing on said distribution map for smoothing distribution boundaries.

5. The image processing method according to claim 1, wherein said combining step performs weighted average on the partial image extracted from each of said three or more shot images and partial images extracted from the same areas in the other shot images.

6. The image processing method according to claim 5, wherein said combining step sets a weight of said weighted averaging to be a value specified by a user.

7. The image processing method according to claim 1, wherein said detecting step performs said detecting, instead of using said three or more shot images, using reduced-size versions thereof.

8. An image processing apparatus, comprising: a detecting unit detecting a characteristic area from each of three or more shot images, said characteristic area having an image significantly different from the other shot images; and a combining unit extracting a partial image located in said characteristic area from each of said three or more shot images and combining these partial images into one image.

9. The image processing apparatus according to claim 8, wherein said detecting unit assumes, in each of said three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as said characteristic area.

10. The image processing apparatus according to claim 8, wherein: said detecting unit generates a distribution map of said characteristic area detected from each of said three or more shot images; and said combining unit performs said extracting according to said distribution map.

11. The image processing apparatus according to claim 10, wherein said detecting unit performs filter processing on said distribution map for smoothing distribution boundaries.

12. The image processing apparatus according to claim 8, wherein said combining unit performs weighted average on the partial image extracted from each of said three or more shot images and partial images extracted from the same areas in the other shot images.

13. The image processing apparatus according to claim 12, wherein said combining unit sets a weight of said weighted averaging to be a value specified by a user.

14. The image processing apparatus according to claim 8, wherein said detecting unit performs said detecting, instead of using said three or more shot images, using reduced-size versions thereof.

15. An electronic camera, comprising: an imaging unit that shoots an object to obtain a shot image; and an image processing apparatus according to claim 8 that processes three or more shot images obtained by said imaging unit.
Description



CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application is a continuation based upon and claims the benefit of priority from U.S. application Ser. No. 12/068,431 filed on Feb. 6, 2008 and Japanese Patent Application No. 2007-034912, filed on Feb. 15, 2007, the entire contents of which are incorporated herein by reference.

BACKGROUND

[0002] 1. Field

[0003] The present invention relates to an image processing method for image combining, an image processing apparatus provided with an image combining function, and an electronic camera provided with an image combining function.

[0004] 2. Description of the Related Art

[0005] Patent reference 1 (Japanese Unexamined Patent Application Publication No. 2001-28726) discloses an electronic camera provided with an image combining function. The principle of the function is additive-average combining of multiple shot images obtained by, for example, continuous shooting. Continuously shooting a moving object (dynamic body) using a tripod and image-combining the obtained multiple shot images allows a still background and a trajectory of the dynamic body to be included in one image.

[0006] It should be noted that this image combining does not discriminate the dynamic body from the background; the background is also blended into the dynamic body area, so the dynamic body appears to be transparent. To perform combining that makes a dynamic body opaque (hereinafter called "opaque combining"), it is necessary to detect the presence area of the dynamic body in each of the shot images and to connect those areas.

[0007] However, it is difficult to detect the presence area of a dynamic body automatically, and therefore automatic opaque combining is currently difficult to realize.

SUMMARY

[0008] The present invention provides an image processing method capable of performing opaque combining of shot images without fail. The present invention further provides an image processing apparatus and an electronic camera capable of performing opaque combining of shot images without fail.

[0009] An image processing method of the present invention includes a detecting step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, the characteristic area having an image significantly different from the other shot images, and a combining step extracting a partial image located in the characteristic area from each of the three or more shot images and combining these partial images into one image.

[0010] Here, the detecting step preferably assumes, in each of the three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as the characteristic area.

[0011] Also, the detecting step preferably generates a distribution map of the characteristic area detected from each of the three or more shot images, and the combining step preferably performs the extraction according to the distribution map.

[0012] Also, the detecting step preferably performs filter processing on the distribution map for smoothing distribution boundaries.

[0013] Also, the combining step may perform weighted average on the partial image extracted from each of the three or more shot images and partial images extracted from the same areas in the other shot images.

[0014] Also, the combining step may set a weight of the weighted averaging to be a value specified by a user.

[0015] Also, the detecting step preferably performs the detection, instead of using the three or more shot images, using reduced-size versions thereof.

[0016] Further, an image processing apparatus of the present invention includes a detecting unit that detects a characteristic area from each of three or more shot images, the characteristic area having an image significantly different from the other shot images, and a combining unit that extracts a partial image located in the characteristic area from each of the three or more shot images and combines these partial images into one image.

[0017] Also, the detecting unit preferably assumes, in each of the three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as the characteristic area.

[0018] Also, the detecting unit preferably generates a distribution map of the characteristic area detected from each of the three or more shot images and the combining unit preferably performs the extraction according to the distribution map.

[0019] Also, the detecting unit preferably performs filter processing on the distribution map for smoothing distribution boundaries.

[0020] Also, the combining unit may perform weighted average on the partial image extracted from each of the three or more shot images and partial images extracted from the same areas in the other shot images.

[0021] Also, the combining unit may set a weight of the weighted averaging to be a value specified by a user.

[0022] Also, the detecting unit preferably performs the detection, instead of using the three or more shot images, using reduced-size versions thereof.

[0023] Further, an electronic camera of the present invention includes an imaging unit that shoots an object to obtain a shot image and any one of the image processing apparatus of the present invention to process three or more shot images obtained by the imaging unit.

BRIEF DESCRIPTION OF THE DRAWINGS

[0024] FIG. 1 is a diagram illustrating a configuration of an electronic camera;

[0025] FIG. 2 is a diagram illustrating examples of shot images specified by a user;

[0026] FIG. 3 is an operational flowchart of a combine-processing part 17A;

[0027] FIG. 4 is a diagram illustrating generating steps of an index image (up to difference image calculation);

[0028] FIG. 5 is a diagram illustrating generating steps of an index image (up to index image calculation);

[0029] FIG. 6 is a diagram illustrating an index image Y.sub.index;

[0030] FIG. 7 is a diagram illustrating a true index image I.sub.index;

[0031] FIG. 8 is a diagram illustrating the true index image I.sub.index after filter processing;

[0032] FIG. 9 is a diagram illustrating a combining method based on the true index image I.sub.index (in an opacity of 1); and

[0033] FIG. 10 is a diagram illustrating a combining method based on the true index image I.sub.index (in an arbitrary opacity).

DETAILED DESCRIPTION OF EMBODIMENTS

[0034] Hereinafter, an embodiment of the present invention will be described. The present embodiment is an embodiment for an electronic camera.

[0035] First, a configuration of an electronic camera will be described.

[0036] FIG. 1 is a diagram illustrating the configuration of the electronic camera. As shown in FIG. 1, an electronic camera 10 includes a shooting lens 11, an imaging sensor 12, an A/D converter 13, a signal processing circuit 14, a timing generator (TG) 15, a buffer memory 16, a CPU 17, an image processing circuit 18, a displaying circuit 22, a rear monitor 23, a card interface (card I/F) 24, an operating button 25, etc., and a card memory 24A is attached to the card interface 24. Among these, the CPU 17 is capable of performing image-combine processing (to be described below). In the following description, it is assumed that a combine-processing part 17A performing the image-combine processing is included in the CPU 17.

[0037] The CPU 17 is connected to the buffer memory 16, the image processing circuit 18, the displaying circuit 22, and the card interface 24 via a bus 19. The CPU 17 sends image data to the image processing circuit 18 over the bus 19 and thereby performs normal image processing, such as pixel interpolation processing and color conversion processing, on a shot image. Also, the CPU 17 sends the image data to the rear monitor 23 via the displaying circuit 22, and thereby displays various images on the rear monitor 23. Further, the CPU 17 writes an image file into the card memory 24A and reads the image file out of the card memory 24A via the card interface 24.

[0038] Further, the CPU 17 is connected to the timing generator 15 and the operating button 25. The CPU 17 drives the imaging sensor 12, the A/D converter 13, the signal processing circuit 14, etc. via the timing generator 15, and also recognizes various indications such as mode switching provided by a user via the operating button 25. Hereinafter, the electronic camera 10 is assumed to have a shooting mode, an editing mode, and the like.

[0039] Next, operation of the CPU 17 in the shooting mode will be described.

[0040] When the electronic camera 10 is set to the shooting mode, the user enters a shooting indication into the CPU 17 by manipulating the operating button 25. The shooting indications include a single shooting indication and a continuous shooting indication, and the CPU 17 discriminates between the two based on, for example, how long the operating button 25 is held down.

[0041] When the single shooting indication is entered, the CPU 17 drives the imaging sensor 12, the A/D converter 13 and the signal processing circuit 14 once, and obtains an image signal (image data) for one frame of a shot image. The obtained image data is stored in the buffer memory 16.

[0042] At this time, the CPU 17 sends the image data of the shot image stored in the buffer memory 16 to the image processing circuit 18 and performs the normal image processing on the shot image, and also generates an image file of the processed shot image to write the image file into the card memory 24A. Thereby the single shooting is completed. Here, the image file has an image storing area and a tag area, and the image data of the shot image is written in the image storing area and information accompanying the shot image is written in the tag area. The accompanying information includes a reduced-size version of image data of a shot image (image data of a thumbnail image).

[0043] On the other hand, when the continuous shooting indication is entered, the CPU 17 continuously drives the imaging sensor 12, the A/D converter 13, and the signal processing circuit 14 multiple times, and obtains an image signal (image data) for multiple frames of shot images. The obtained image data is stored in the buffer memory 16.

[0044] At this time, the CPU 17 sends the image data of the shot images stored in the buffer memory 16 to the image processing circuit 18 and performs the normal image processing on the shot image, and also generates an image file of the processed shot image to write the image file into the card memory 24A. After this operation has been performed for multiple frames of the shot images, the continuous shooting is completed. Here, each of the image files has an image storing area and a tag area, and the image data of the shot image is written in the image storing area and information accompanying the shot image is written in the tag area. The accompanying information includes a reduced-size version of image data of a shot image (image data of a thumbnail image).

[0045] Next, operation of the CPU 17 in the editing mode will be described.

[0046] When the electronic camera 10 is set to the editing mode, the CPU 17 displays a menu of the editing mode on the rear monitor 23. One of the menu items is "image combining". This image combining is capable of the opaque combining.

[0047] While the menu is displayed, the user manipulates the operating button 25 to specify a desired item of the menu to the CPU 17. When the user specifies "image combining", the CPU 17 reads out the image files in the card memory 24A and sends the image data of the thumbnail images added to the image files to the displaying circuit 22. Thereby, shot images previously obtained are reproduced and displayed on the rear monitor 23. In this reproducing display, a plurality of shot images is preferably displayed side by side at the same time on the rear monitor 23 so that the user can compare them with one another.

[0048] While the shot images are reproduced and displayed, the user manipulates the operating button 25 to specify to the CPU 17 desired N shot images I.sub.k (k=1, 2, . . . , N) among the shot images reproduced and displayed. Then, the user manipulates the operating button 25 to specify an opacity .alpha..sub.k (k=1, 2, . . . , N) of a dynamic body included in each of the specified N shot images I.sub.k (k=1, 2, . . . , N). The opacity .alpha..sub.k is an opacity of a dynamic body M.sub.k included in the shot image I.sub.k.

[0049] Here, the N shot images I.sub.k (k=1, 2, . . . , N) specified by the user have a part common to one another (background) except for the dynamic bodies M.sub.k (k=1, 2, . . . , N), as shown in FIG. 2. Such shot images I.sub.k (k=1, 2, . . . , N) are obtained under common shooting conditions (shooting sensitivity, shutter speed, aperture value, and framing).

[0050] In the present embodiment, these N shot images I.sub.k (k=1, 2, . . . , N) need not include a shot image in which the dynamic body M.sub.k is not present, but, instead, the number of shot images N is required to be three or more.

[0051] Also, pixel coordinates of the dynamic bodies M.sub.k (k=1, 2, . . . , N) included in the shot images I.sub.k (k=1, 2, . . . , N) preferably do not overlap with one another. This is because the image combining calculation in the present embodiment assumes that these pixel coordinates do not overlap.

[0052] Also, a range of an opacity .alpha..sub.k (k=1, 2, . . . , N) which a user can specify is 0.ltoreq..alpha..sub.k.ltoreq.1. For example, for the opaque combining, the user only needs to specify the opacity .alpha..sub.k as .alpha..sub.1=.alpha..sub.2=.alpha..sub.3= . . . =.alpha..sub.N=1, and for erasing all the dynamic bodies from a combined image, specify the opacity .alpha..sub.k as .alpha..sub.1=.alpha..sub.2=.alpha..sub.3= . . . =.alpha..sub.N=0.

[0053] When the opacity .alpha..sub.k is specified as above, the CPU 17 sends image data of the specified N shot images I.sub.k (k=1, 2, . . . , N) to the combine-processing part 17A and performs image-combine processing on the shot images. The CPU 17 obtains one combined image by this image-combine processing, and then displays the combined image on the rear monitor 23. The CPU 17 also newly generates an image file of the combined image and writes the image file into the card memory 24A.

[0054] Next, operation of the combine-processing part 17A will be described in detail.

[0055] For simplicity of the description, specified shot images are assumed to be three shot images I.sub.1, I.sub.2, and I.sub.3 shown in FIG. 2. In this case, the number of shot images N is three. Also, each shot image I.sub.k is assumed to have a Y component, a Cb component, and a Cr component.

[0056] Accordingly, in the description, the Y component of a shot image I.sub.k is denoted by Y.sub.k, the Cb component of a shot image I.sub.k is denoted by Cb.sub.k, and the Cr component of a shot image I.sub.k is denoted by Cr.sub.k. Also in the description, a pixel value of an arbitrary image X at pixel coordinates (i,j) is denoted by X(i,j), and the origin of the pixel coordinates (i,j) is at the upper left corner of the image.

[0057] FIG. 3 is an operational flowchart of the combine-processing part 17A. Each step thereof will be described in sequence as follows.

[0058] (Steps S1 to S3)

[0059] The combine-processing part 17A performs size reduction processing on each of the shot images I.sub.1, I.sub.2, and I.sub.3 in order to improve the processing speed of the image-combine processing. For a size reduction ratio of 16 (that is, averaging each 4×4 block of pixels), the size reduction processing is performed using the following formulas, for example.

Y_k(i,j) = ( Σ_{y=4j..4j+3} Σ_{x=4i..4i+3} Y_k(x,y) ) / 16
Cb_k(i,j) = ( Σ_{y=4j..4j+3} Σ_{x=4i..4i+3} Cb_k(x,y) ) / 16
Cr_k(i,j) = ( Σ_{y=4j..4j+3} Σ_{x=4i..4i+3} Cr_k(x,y) ) / 16    (Formula 1)
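
As an illustration only, the 4×4 block averaging of Formula 1 might be sketched with numpy as below; the function name and the assumption that the plane dimensions are multiples of 4 are illustrative and not part of the embodiment.

    import numpy as np

    def reduce_plane_4x4(plane):
        # Average each 4x4 block of one plane (Y, Cb, or Cr), giving a
        # 1/16-size plane as in Formula 1. Assumes the height and width
        # of `plane` are multiples of 4.
        h, w = plane.shape
        return plane.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))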

[0060] (Step S4)

[0061] The combine-processing part 17A generates an average image I.sub.ave of the shot images I.sub.1, I.sub.2, and I.sub.3 after the size reduction processing. The Y component Y.sub.ave, the Cb component Cb.sub.ave, and the Cr component Cr.sub.ave of the average image I.sub.ave are calculated by the following formulas.

Y_ave(i,j) = ( Σ_{k=1..N} Y_k(i,j) ) / N
Cb_ave(i,j) = ( Σ_{k=1..N} Cb_k(i,j) ) / N
Cr_ave(i,j) = ( Σ_{k=1..N} Cr_k(i,j) ) / N    (Formula 2)

[0062] (Step S5)

[0063] The combine-processing part 17A generates an index image Y.sub.index of the Y component, an index image Cb.sub.index of the Cb component, and an index image Cr.sub.index of the Cr component as provisional index images, respectively.

[0064] Representing these index images, a generation step of the index image Y.sub.index will be described.

[0065] First, the combine-processing part 17A focuses on the shot image I.sub.1 as shown in FIG. 4, and calculates a difference image .DELTA.Y.sub.1 between the Y component image Y.sub.1 of the focused shot image I.sub.1 and an average image of the Y components Y.sub.2 and Y.sub.3 of the other shot images I.sub.2 and I.sub.3.

[0066] Similarly, the combine-processing part 17A focuses on the shot image I.sub.2, and calculates a difference image .DELTA.Y.sub.2 between the Y component image Y.sub.2 of the focused shot image I.sub.2 and an average image of the Y components Y.sub.1 and Y.sub.3 of the other shot images I.sub.1 and I.sub.3.

[0067] Similarly, the combine-processing part 17A focuses on the shot image I.sub.3, and calculates a difference image .DELTA.Y.sub.3 between the Y component image Y.sub.3 of the focused shot image I.sub.3 and an average image of the Y components Y.sub.1 and Y.sub.2 of the other shot images I.sub.1 and I.sub.2.

[0068] Thereby, the difference image .DELTA.Y.sub.1 regarding the shot image I.sub.1, the difference image .DELTA.Y.sub.2 regarding the shot image I.sub.2, and the difference image .DELTA.Y.sub.3 regarding the shot image I.sub.3 are obtained. Here, the difference images .DELTA.Y.sub.1, .DELTA.Y.sub.2, and .DELTA.Y.sub.3 are calculated using the following formula.

ΔY_k(i,j) = | (Y_ave(i,j) × N − Y_k(i,j)) / (N − 1) − Y_k(i,j) |    (Formula 3)

[0069] In this formula, "k" is an image number of a focused shot image, and (Y.sub.ave.times.N-Y.sub.k)/(N-1) is the average image of the other shot images.
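
A minimal numpy sketch of Formulas 2 and 3, assuming the N reduced Y (or Cb/Cr) planes are stacked into a single (N, H, W) array; the array and function names are illustrative, not from the embodiment.

    import numpy as np

    def difference_images(planes):
        # planes: (N, H, W) stack of the reduced planes of the N shot images.
        # Returns the (N, H, W) stack of difference images of Formula 3.
        n = planes.shape[0]
        avg = planes.mean(axis=0)               # average image (Formula 2)
        others = (avg * n - planes) / (n - 1)   # average of the other N-1 images
        return np.abs(others - planes)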

[0070] Subsequently, the combine-processing part 17A compares the difference images .DELTA.Y.sub.1, .DELTA.Y.sub.2, and .DELTA.Y.sub.3 for every pixel coordinates as shown in FIG. 5.

[0071] Then, the combine-processing part 17A determines as a characteristic area A.sub.1 the area in the difference image .DELTA.Y.sub.1 whose pixel values are larger than the pixel values at the same coordinates in the other difference images .DELTA.Y.sub.2 and .DELTA.Y.sub.3. This characteristic area A.sub.1 is an area whose pixel values stand out in the shot image I.sub.1 compared with the other shot images I.sub.2 and I.sub.3. Therefore, this characteristic area A.sub.1 can be assumed to be the presence area of the dynamic body M.sub.1.

[0072] Also, the combine-processing part 17A determines as a characteristic area A.sub.2 the area in the difference image .DELTA.Y.sub.2 whose pixel values are larger than the pixel values at the same coordinates in the other difference images .DELTA.Y.sub.1 and .DELTA.Y.sub.3. This characteristic area A.sub.2 is an area whose pixel values stand out in the shot image I.sub.2 compared with the other shot images I.sub.1 and I.sub.3. Therefore, this characteristic area A.sub.2 can be assumed to be the presence area of the dynamic body M.sub.2.

[0073] Also, the combine-processing part 17A determines as a characteristic area A.sub.3 the area in the difference image .DELTA.Y.sub.3 whose pixel values are larger than the pixel values at the same coordinates in the other difference images .DELTA.Y.sub.1 and .DELTA.Y.sub.2. This characteristic area A.sub.3 is an area whose pixel values stand out in the shot image I.sub.3 compared with the other shot images I.sub.1 and I.sub.2. Therefore, this characteristic area A.sub.3 can be assumed to be the presence area of the dynamic body M.sub.3.

[0074] Then, the combine-processing part 17A generates one index image Y.sub.index as a distribution map of these characteristic areas A.sub.1, A.sub.2, and A.sub.3.

[0075] The characteristic area A.sub.1 in this index image Y.sub.index is provided with a pixel value "1", the same as the image number of the shot image I.sub.1 and the dynamic body M.sub.1; the characteristic area A.sub.2 in the index image Y.sub.index is provided with a pixel value "2", the same as the image number of the shot image I.sub.2 and the dynamic body M.sub.2; and the characteristic area A.sub.3 in the index image Y.sub.index is provided with a pixel value "3", the same as the image number of the shot image I.sub.3 and the dynamic body M.sub.3.

[0076] Such an index image Y.sub.index is calculated by use of the following formula.

Y_index(i,j) = kofmax[ ΔY_1(i,j), . . . , ΔY_N(i,j) ]    (Formula 4)

[0077] In this formula, kofmax[x.sub.1, x.sub.2, . . . , x.sub.N] is an element number k of the largest element x.sub.k among N elements x.sub.1, x.sub.2, . . . , x.sub.N.
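
Continuing the sketch above (delta_y being the hypothetical stack of difference images), the kofmax operation of Formula 4 corresponds to numpy's argmax along the image axis, shifted by one because the embodiment numbers the images from 1.

    import numpy as np

    # delta_y: the (N, H, W) stack of difference images from the sketch above.
    y_index = np.argmax(delta_y, axis=0) + 1   # provisional index image Y_index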

[0078] FIG. 6 is a diagram illustrating the index image Y.sub.index. As shown in FIG. 6, in the index image Y.sub.index, many pixels located in the presence area of the dynamic body M.sub.1 (refer to FIG. 2) have a pixel value "1", many pixels located in the presence area of the dynamic body M.sub.2 (refer to FIG. 2) have a pixel value "2", and many pixels located in the presence area of the dynamic body M.sub.3 (refer to FIG. 2) have a pixel value "3". Meanwhile, in the other areas in the index image Y.sub.index, a pixel having a pixel value "1", a pixel having a pixel value "2", and a pixel having a pixel value "3" are mixed.

[0079] Therefore, a distribution relationship among the dynamic bodies M.sub.1, M.sub.2, and M.sub.3 (refer to FIG. 2) is reflected in the index image Y.sub.index with a certain accuracy. Using this index image Y.sub.index enables the dynamic bodies M.sub.1, M.sub.2, and M.sub.3 to be extracted from the shot images I.sub.1, I.sub.2, and I.sub.3, respectively.

[0080] However, in this index image Y.sub.index there exists an indefinite portion, such as the portion enclosed by a dotted line in FIG. 6. The reason is probably that the luminance of a part of the dynamic body M.sub.3 (refer to FIG. 2) is close to the luminance of the background, and false detection occurred so that this part was determined to be the characteristic area A.sub.1 or the characteristic area A.sub.2 rather than the characteristic area A.sub.3.

[0081] Accordingly, the combine-processing part 17A in the present step performs the following processing when detecting the characteristic areas A.sub.1, A.sub.2, and A.sub.3 in the calculation of the index image Y.sub.index (refer to FIG. 5).

[0082] That is, the combine-processing part 17A compares the pixel values in the characteristic area A.sub.1 of the difference image .DELTA.Y.sub.1, the characteristic area A.sub.2 of the difference image .DELTA.Y.sub.2, and the characteristic area A.sub.3 of the difference image .DELTA.Y.sub.3 with a threshold value thY, and assumes an area having a pixel value smaller than the threshold value thY to be a particular area A.sub.0 which does not belong to any of the characteristic areas A.sub.1, A.sub.2, and A.sub.3. Then, the combine-processing part 17A assigns the particular area A.sub.0 of the index image Y.sub.index a particular value other than "1", "2", and "3" (hereinafter, "0"). This particular value is replaced with an appropriate value in the next step.
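
The thresholding described above might be sketched as follows, continuing the same hypothetical arrays; thY is an assumed tuning constant whose value the embodiment does not specify.

    # Pixels whose winning difference is still below thY form the particular
    # area A_0 and receive the particular value 0.
    y_index[delta_y.max(axis=0) < thY] = 0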

[0083] In the present step, the index image Cb.sub.index of the Cb component and the index image Cr.sub.index of the Cr component are also generated; they are generated for the purpose of this replacement.

[0084] Here, the index image Cb.sub.index is calculated by use of the following formulas.

ΔCb_k(i,j) = | (Cb_ave(i,j) × N − Cb_k(i,j)) / (N − 1) − Cb_k(i,j) |

Cb_index(i,j) = kofmax[ ΔCb_1(i,j), . . . , ΔCb_N(i,j) ]    (Formula 5)

[0085] Also, when calculating this index image Cb.sub.index, the combine-processing part 17A performs the following processing.

[0086] That is, the combine-processing part 17A compares pixel values in a characteristic area A.sub.1 of a difference image .DELTA.Cb.sub.1, a characteristic area A.sub.2 of a difference image .DELTA.Cb.sub.2, and a characteristic area A.sub.3 of a difference image .DELTA.Cb.sub.3 with a threshold value thCb, and assumes an area having a pixel value smaller than the threshold value thCb to be a particular area A.sub.0 which does not belong to any of the characteristic areas A.sub.1, A.sub.2, and A.sub.3. Then the combine-processing part 17A assigns the particular area A.sub.0 in the index image Cb.sub.index with a particular value except for "1", "2", and "3" (hereinafter, "0").

[0087] Also, the index image Cr.sub.index is calculated by use of the following formulas.

ΔCr_k(i,j) = | (Cr_ave(i,j) × N − Cr_k(i,j)) / (N − 1) − Cr_k(i,j) |

Cr_index(i,j) = kofmax[ ΔCr_1(i,j), . . . , ΔCr_N(i,j) ]    (Formula 6)

[0088] Also, when calculating this index image Cr.sub.index, the combine-processing part 17A performs the following processing.

[0089] That is, the combine-processing part 17A compares pixel values in a characteristic area A.sub.1 of a difference image .DELTA.Cr.sub.1, a characteristic area A.sub.2 of a difference image .DELTA.Cr.sub.2, and a characteristic area A.sub.3 of a difference image .DELTA.Cr.sub.3 with a threshold value thCr, and assumes an area having a pixel value smaller than the threshold value thCr to be a particular area A.sub.0 which does not belong to any of the characteristic areas A.sub.1, A.sub.2, and A.sub.3. Then the combine-processing part 17A assigns the particular area A.sub.0 in the index image Cr.sub.index with a particular value except for "1", "2", and "3" (hereinafter, "0").

[0090] (Step S6)

[0091] The combine-processing part 17A determines whether a pixel having the particular value "0" exists in the index image Y.sub.index, and, if it exists, replaces the pixel value with the pixel value at the same pixel coordinates in the index image Cb.sub.index.

[0092] Further, the combine-processing part 17A determines whether a pixel having the particular value "0" remains in the index image Y.sub.index after the replacement, and if it remains, replaces the pixel value with a pixel value at the same coordinates in the index image Cr.sub.index. The index image Y.sub.index after these replacements is determined to be a true index image I.sub.Index (refer to FIG. 7).
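
Assuming cb_index and cr_index were built from the Cb and Cr difference images exactly as y_index was built from the Y differences, the compensation of step S6 reduces to two masked assignments; this is a sketch, not the embodiment's exact implementation.

    true_index = y_index.copy()
    mask = true_index == 0
    true_index[mask] = cb_index[mask]      # fill indefinite pixels from Cb_index
    mask = true_index == 0
    true_index[mask] = cr_index[mask]      # then fill any remainder from Cr_index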

[0093] Accordingly, in the present step, indefiniteness of the index image Y.sub.index is compensated by the other index images Cb.sub.index and Cr.sub.index, and the more definite true index image I.sub.index is obtained.

[0094] Therefore, even if the luminance of a part of a dynamic body M.sub.k included in a certain shot image I.sub.k is close to the luminance of a background thereof, the true index image I.sub.index is compensated to become more definite as long as the color of the part is different from the color of the background.

[0095] Here, even in the true index image I.sub.index after the compensation, there still remains a possibility that a pixel having a particular value "0" exists. This is because there can exist a part of the dynamic body which is similar to the background thereof in both luminance and color.

[0096] (Step S7)

[0097] The combine-processing part 17A performs filter processing on the true index image I.sub.index. The filter used at this time is one that smooths the boundary lines of the characteristic areas A.sub.1, A.sub.2, and A.sub.3, for example, a majority filter of 3.times.3 pixels. The majority filter replaces the pixel value of a focused pixel with the mode of the pixel values of its peripheral pixels.

[0098] Here, when the mode value becomes the particular value "0" during this filter processing, the combine-processing part 17A replaces the pixel value of the focused pixel with a second mode value, not the mode value. This replacement eliminates a pixel having the particular value "0" from the true index image I.sub.index.
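
One pass of the 3×3 majority filter with the second-mode fallback described above might look like the following; it is a deliberately simple (and slow) sketch rather than an optimized filter, and the clipped handling of border pixels is an assumption the embodiment does not spell out.

    import numpy as np

    def majority_filter_3x3(index_img):
        # Replace each pixel with the mode of its 3x3 neighborhood (step S7).
        # If the mode is the particular value 0, use the second mode instead.
        h, w = index_img.shape
        out = index_img.copy()
        for y in range(h):
            for x in range(w):
                y0, y1 = max(y - 1, 0), min(y + 2, h)
                x0, x1 = max(x - 1, 0), min(x + 2, w)
                vals, counts = np.unique(index_img[y0:y1, x0:x1],
                                         return_counts=True)
                order = np.argsort(counts)[::-1]
                mode = vals[order[0]]
                if mode == 0 and len(order) > 1:
                    mode = vals[order[1]]   # fall back to the second mode
                out[y, x] = mode
        return out

Repeating the pass (or enlarging the neighborhood) corresponds to the repeated filtering discussed next.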

[0099] The filter processing on the true index image I.sub.index may be performed once, but is preferably performed an appropriate number of times, two or more. As the filter processing is repeated, pixels having the same pixel value (each of the characteristic areas A.sub.1, A.sub.2, and A.sub.3) on the true index image I.sub.index gradually gather together. Therefore, in the true index image I.sub.index after the filter processing, the characteristic area A.sub.1 covers the entire presence area of the dynamic body M.sub.1 (refer to FIG. 2), the characteristic area A.sub.2 covers the entire presence area of the dynamic body M.sub.2 (refer to FIG. 2), and the characteristic area A.sub.3 covers the entire presence area of the dynamic body M.sub.3 (refer to FIG. 2), as shown in FIG. 8.

[0100] In such a true index image I.sub.index, boundary lines between the characteristic areas A.sub.1, A.sub.2, and A.sub.3 are not always located at a boundary area between the background and the dynamic body M.sub.1, a boundary area between the background and the dynamic body M.sub.2, or a boundary area between the background and the dynamic body M.sub.3, but located definitely in a boundary area between the dynamic body M.sub.1 and the dynamic body M.sub.2, a boundary area between the dynamic body M.sub.2 and the dynamic body M.sub.3, and a boundary area between the dynamic body M.sub.1 and the dynamic body M.sub.3.

[0101] Here, in the present step, instead of increasing the number of executions of the filter processing, the filter diameter used for the filter processing may be increased.

[0102] (Step S8)

[0103] The combine-processing part 17A performs size enlargement processing on the true index image I.sub.index after the filter processing, making its size the same as that of the original shot images I.sub.1, I.sub.2, and I.sub.3. If the size reduction ratio in the steps S1 to S3 described above is 16, the size enlargement processing uses, for example, the following formulas, where the true index image after the enlargement processing is denoted by I'.sub.index. In effect, the size enlargement processing using the following formulas is padding processing.

I'_index(4i+a, 4j+b) = I_index(i,j)  for a, b = 0, 1, 2, 3    (Formula 7)

(that is, each pixel of I_index is replicated into a 4×4 block of I'_index)

[0104] Here, the true index image after the enlargement processing I'.sub.index is used in the next step as a source map for extracting a partial image from each of the shot images I.sub.1, I.sub.2, and I.sub.3.

[0105] For this purpose, a pixel value in the true index image I'.sub.index should be any one of "1", "2", and "3" and should not be an intermediate value among "1", "2", and "3". Therefore, in the size enlargement processing in the present step, an average interpolation should not be applied, and, if interpolation is applied, it should be a majority interpolation.
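
A minimal sketch of the padding-style enlargement of Formula 7, which satisfies the constraint above because every value is copied verbatim and no interpolation is performed; the variable names continue the earlier hypothetical sketch.

    import numpy as np

    # Replicate every pixel of the filtered index image into a 4x4 block,
    # restoring the original image size (size reduction ratio 16).
    enlarged_index = np.repeat(np.repeat(true_index, 4, axis=0), 4, axis=1)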

[0106] (Step S9)

[0107] The combine-processing part 17A, on the basis of the true index image I'.sub.index, extracts a partial image I.sub.1A.sub.1 located in the characteristic area A.sub.1 from the shot image I.sub.1, a partial image I.sub.2A.sub.2 located in the characteristic area A.sub.2 from the shot image I.sub.2, and a partial image I.sub.3A.sub.3 located in the characteristic area A.sub.3 from the shot image I.sub.3, as shown in FIG. 9. Then, the combine-processing part 17A combines the three extracted partial images I.sub.1A.sub.1, I.sub.2A.sub.2, and I.sub.3A.sub.3 to obtain one combined image I. Here, a partial image located in an area A of an image X is represented by XA.

[0108] At this time, the combine-processing part 17A sets the opacity of the partial image I.sub.1A.sub.1, the opacity of the partial image I.sub.2A.sub.2, and the opacity of the partial image I.sub.3A.sub.3 to be .alpha..sub.1, .alpha..sub.2, and .alpha..sub.3, respectively. These values .alpha..sub.1, .alpha..sub.2, and .alpha..sub.3 are the opacities specified by the user for the dynamic bodies M.sub.1, M.sub.2, and M.sub.3 (refer to FIG. 2). This concept is illustrated in FIG. 10.

[0109] That is, regarding the characteristic area A.sub.1, weighted averaging is performed on the shot image I.sub.1 and an average image (I.sub.2+I.sub.3)/2 of the other shot images I.sub.2 and I.sub.3. A weight ratio of the weighted averaging is set to be .alpha..sub.1:(1-.alpha..sub.1) according to the user's specification.

[0110] Focusing on this characteristic area A.sub.1, there exists not only the dynamic body M.sub.1 but also the background portion in the shot image I.sub.1; however, the graphic pattern of the background portion is the same as that of the average image {(I.sub.2+I.sub.3)/2} of the other shot images I.sub.2 and I.sub.3. Therefore, the graphic pattern of the characteristic area A.sub.1 in the combined image I becomes the background on which the dynamic body M.sub.1 is superimposed with an opacity of .alpha..sub.1.

[0111] Also, regarding the characteristic area A.sub.2, weighted averaging is performed on the shot image I.sub.2 and an average image (I.sub.1+I.sub.3)/2 of the other shot images I.sub.1 and I.sub.3. A weight ratio of the weighted averaging is set to be .alpha..sub.2:(1-.alpha..sub.2) according to the user's specification.

[0112] Focusing on this characteristic area A.sub.2, there exists not only the dynamic body M.sub.2 but also the background portion in the shot image I.sub.2; however, the graphic pattern of the background portion is the same as that in the average image {(I.sub.1+I.sub.3)/2} of the other shot images I.sub.1 and I.sub.3. Therefore, the graphic pattern of the characteristic area A.sub.2 in the combined image I becomes the background on which the dynamic body M.sub.2 is superimposed with an opacity of .alpha..sub.2.

[0113] Also, regarding the characteristic area A.sub.3, weighted averaging is performed on the shot image I.sub.3 and an average image (I.sub.1+I.sub.2)/2 of the other shot images I.sub.1 and I.sub.2. A weight ratio of the weighted averaging is set to be .alpha..sub.3:(1-.alpha..sub.3) according to the user's specification.

[0114] Focusing on this characteristic area A.sub.3, there exists not only the dynamic body M.sub.3 but also the background portion in the shot image I.sub.3; however, the graphic pattern of the background portion is the same as that in the average image {(I.sub.1+I.sub.2)/2} of the other shot images I.sub.1 and I.sub.2. Therefore, the graphic pattern of the characteristic area A.sub.3 in the combined image I becomes the background on which the dynamic body M.sub.3 is superimposed with an opacity of .alpha..sub.3.

[0115] Here, the combining of the shot images I.sub.1, I.sub.2, and I.sub.3 in the present step is performed for each component of the shot images I.sub.1, I.sub.2, and I.sub.3, and the common true index image I'.sub.index is used for the combining of each component. When a Y component of the combined image I is denoted by Y, a Cb component of the combined image I is denoted by Cb, and a Cr component of the combined image I is denoted by Cr, combining each component uses the following formulas.

k(i,j) = I'_index(i,j)

Y(i,j) = α_{k(i,j)} × Y_{k(i,j)}(i,j) + (1 − α_{k(i,j)}) × (Y_ave(i,j) × N − Y_{k(i,j)}(i,j)) / (N − 1)

Cb(i,j) = α_{k(i,j)} × Cb_{k(i,j)}(i,j) + (1 − α_{k(i,j)}) × (Cb_ave(i,j) × N − Cb_{k(i,j)}(i,j)) / (N − 1)

Cr(i,j) = α_{k(i,j)} × Cr_{k(i,j)}(i,j) + (1 − α_{k(i,j)}) × (Cr_ave(i,j) × N − Cr_{k(i,j)}(i,j)) / (N − 1)    (Formula 8)
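
The per-component combining of Formula 8 could be sketched as below; planes now holds the full-size channel of every shot image, enlarged_index is I'_index, and alphas are the user-specified opacities. All names are illustrative assumptions, not the embodiment's implementation.

    import numpy as np

    def combine_channel(planes, enlarged_index, alphas):
        # planes: (N, H, W) full-size Y, Cb, or Cr planes of the shot images.
        # enlarged_index: (H, W) true index image with values 1..N.
        # alphas: sequence of N opacities specified by the user.
        n = planes.shape[0]
        avg = planes.mean(axis=0)
        rows, cols = np.indices(enlarged_index.shape)
        k = enlarged_index - 1                      # 0-based image number
        own = planes[k, rows, cols]                 # pixel from the winning image
        others = (avg * n - own) / (n - 1)          # average of the other images
        alpha = np.asarray(alphas, dtype=float)[k]  # per-pixel weight
        return alpha * own + (1 - alpha) * others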

[0116] That is, a pixel value Y (i,j) at a certain pixel coordinates (i,j) of the combined image I is determined to be a weighted-average of a pixel value Y.sub.1 (i,j), Y.sub.2 (i,j), and Y.sub.3 (i,j) at the same pixel coordinates in the shot images I.sub.1, I.sub.2, and I.sub.3 according to a pixel value I'.sub.index (i,j) at the same pixel coordinates (i,j) in the true index image I'.sub.index. For example, when the pixel value I'.sub.index (i,j) of the true index image I'.sub.index is "1", a weighted average of the pixel value Y.sub.1 (i,j) and an average value of the Y.sub.2 (i,j) and Y.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.1:(1-.alpha..sub.1). Also, for example, when the pixel value I'.sub.index (i, j) of the true index image I'.sub.index is "2", a weighted average of the pixel value Y.sub.2 (i,j) and an average value of the Y.sub.1 (i,j) and Y.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.2:(1-.alpha..sub.2).

[0117] Similarly, a pixel value Cb(i,j) of a certain pixel of the combined image I is determined to be a weighted-average of a pixel value Cb.sub.1 (i,j), Cb.sub.2 (i,j), and Cb.sub.3 (i,j) at the same pixel coordinates in the shot images I.sub.1, I.sub.2, and I.sub.3 according to a pixel value I'.sub.index (i,j) at the same pixel coordinates (i,j) in the true index image I'.sub.index. For example, when the pixel value I'.sub.index (i,j) of the true index image I'.sub.index is "1", a weighted average of the pixel value Cb.sub.1 (i,j) and an average value of the Cb.sub.2 (i,j) and Cb.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.1:(1-.alpha..sub.1). Also, for example, when the pixel value I'.sub.index (i,j) of the true index image I'.sub.index is "2", a weighted average of the pixel value Cb.sub.2 (i,j) and an average value of the Cb.sub.1 (i, j) and Cb.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.2:(1-.alpha..sub.2).

[0118] Similarly, a pixel value Cr(i,j) of a certain pixel of the combined image I is determined to be a weighted-average of a pixel value Cr.sub.1 (i,j), Cr.sub.2 (i,j), and Cr.sub.3 (i,j) at the same pixel coordinates in the shot images I.sub.1, I.sub.2, and I.sub.3 according to a pixel value I'.sub.index (i,j) at the same pixel coordinates (i,j) in the true index image I'.sub.index. For example, when the pixel value I'.sub.index (i,j) of the true index image I'.sub.index is "1", a weighted average of the pixel value Cr.sub.1 (i,j) and an average value of the Cr.sub.2 (i,j) and Cr.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.1:(1-.alpha..sub.1). Also, for example, when the pixel value I'.sub.index (i,j) of the true index image I'.sub.index is "2", a weighted average of the pixel value Cr.sub.2 (i,j) and an average value of the Cr.sub.1 (i,j) and Cr.sub.3 (i,j) is calculated with a weight ratio of .alpha..sub.2:(1-.alpha..sub.2) (That is the description of the step S9.).

[0119] As a result, the combine-processing part 17A can combine the shot images I.sub.1, I.sub.2, and I.sub.3 correctly while managing the opacities of the dynamic bodies M.sub.1, M.sub.2, and M.sub.3 included in the shot images I.sub.1, I.sub.2, and I.sub.3 to be .alpha..sub.1, .alpha..sub.2, and .alpha..sub.3 specified by the user, respectively. When the opacities are set to "1", the combining becomes the opaque combining.

[0120] Here, while the user in the present embodiment specified the opacities of all the dynamic bodies, the opacities of some or all of the dynamic bodies may not be specified. In this case, the CPU 17 sets the opacity which is not specified to be a default value (e.g., "1").

[0121] Also, although the combine-processing part 17A in the present embodiment eliminated the particular value "0" from the true index image I.sub.index during the filter processing (step S7), the processing associated with this elimination may be omitted. Note that, in this case, the combine-processing part 17A needs to replace the particular value "0" with a non-particular value (any one of "1", "2", and "3") at the end of step S7.

[0122] For example, the combine-processing part 17A replaces the particular value "0" with "1" when the shot image I.sub.1 takes priority, the particular value "0" with "2" when the shot image I.sub.2 takes priority, and the particular value "0" with "3" when the shot image I.sub.3 takes priority. Here, which of the shot images I.sub.1, I.sub.2, and I.sub.3 takes priority may be specified by the user in advance or determined automatically by the combine-processing part 17A through its evaluation of the shot images I.sub.1, I.sub.2, and I.sub.3.

[0123] Also, while the combine-processing part 17A in the present embodiment performed the size reduction processing (steps S1 to S3) on the shot images to improve the processing speed in the image-combine processing, the size reduction processing (steps S1 to S3) may be omitted and, instead, the thumbnail image of the shot image (stored in the same image file as the shot image) may be used.

[0124] Also, while the combine-processing part 17A in the present embodiment performed the size reduction processing (steps S1 to S3) on the shot images to improve the processing speed in the image-combine processing, the size reduction processing (steps S1 to S3) may be omitted when a slow processing speed does not matter. In this case, the size enlargement processing (step S8) is also omitted.

[0125] Although, in the electronic camera 10 of the present embodiment, the whole image-combine processing was performed by the CPU 17, a part or the whole of the image-combine processing may be performed by a dedicated circuit other than the CPU 17 or the image processing circuit 18.

[0126] Also, some or all of the functions of the image-combine processing in the electronic camera 10 of the present embodiment may be provided in another apparatus having a user interface and a monitor, such as an image storage device or a printer.

[0127] Also, some or all of the functions of the image-combine processing in the electronic camera 10 of the present embodiment may be performed by a computer. When performed by a computer, a program for the purpose (an image-combine processing program) is stored in a memory of the computer (a hard disk drive or the like). Installation of the image-combine processing program onto the hard disk drive is performed, for example, via the Internet or from a recording medium such as a CD-ROM.

[0128] The many features and advantages of the embodiments are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the embodiments that fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the inventive embodiments to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope thereof.

* * * * *

