Rendering images utilizing adaptive error diffusion

Gorian, et al. June 21, 2011

Patent Grant RE42473

U.S. patent number RE42,473 [Application Number 11/847,894] was granted by the patent office on 2011-06-21 for rendering images utilizing adaptive error diffusion. This patent grant is currently assigned to Senshin Capital, LLC. Invention is credited to Izrail S. Gorian, Richard A. Pineau, Jay E. Thornton.


United States Patent RE42,473
**Please see images for: ( Certificate of Correction ) **


Abstract

An adaptive halftoning method where the difference between a digital image and a filtered digital image is introduced into the system on a pixel by pixel basis is disclosed. In this method, each input difference pixel has a corresponding error value of the previous pixel added to the input value at a summing node, resulting in modified image difference data; the modified image difference data is passed to a threshold comparator where the modified image difference data is compared to a threshold value, the threshold value varying according to the properties of the digital image, to determine the appropriate output level; the output level is subtracted from the modified image difference value to produce the input to an error filter; the output of the error filter is multiplied by an adaptation coefficient, where the adaptation coefficient varies according to the properties of the digital image, to generate the error level for the subsequent input pixel; and, the cyclical processing of pixels is continued until the end of the input data is reached.
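The pipeline in the abstract, combined with the coefficient and threshold recipes of claims 4, 5, and 7, can be sketched roughly as follows. This is a minimal illustration, not the patent's preferred embodiment: the box filter, the 0-255 pixel range, the difference-domain output levels, and the parameter names `p1`, `p2`, `p3` are all illustrative assumptions.

```python
import numpy as np

def adaptive_error_diffusion(image, p1=0.5, p2=0.5, p3=1.0, ksize=5):
    # Low-pass filter the input image (a simple box filter stands in for
    # the filtering step; the patent does not mandate a particular filter).
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    pad = ksize // 2
    padded = np.pad(img, pad, mode='edge')
    filt = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            filt[y, x] = padded[y:y + ksize, x:x + ksize].mean()

    out = np.empty((h, w), dtype=np.uint8)
    for y in range(h):
        stored = 0.0                      # diffused error, delayed one pixel
        for x in range(w):
            d = img[y, x] - filt[y, x]    # difference signal: the threshold input
            u = d + stored                # modified image difference (summing node)
            t = 128.0 - p3 * filt[y, x]   # threshold varies with the filtered value
            on = u >= t                   # output state decision
            out[y, x] = 255 if on else 0
            # Output level expressed in the difference domain, subtracted
            # from the modified difference to form the error value.
            err = u - ((255.0 if on else 0.0) - filt[y, x])
            # Adaptation coefficient from local image properties: a first
            # function (the difference) divided by a second (the filtered
            # value), scaled and offset by two parameters.
            a = p1 * abs(d / (filt[y, x] + 1e-6)) + p2
            stored = a * err              # carried to the next input pixel
    return out
```

With `p1 = 0` and `p2 = 1` the adaptation coefficient is constant and the loop reduces to ordinary one-dimensional error diffusion, which is one way to see how the adaptation generalizes the classic scheme.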


Inventors: Gorian; Izrail S. (Watertown, MA), Thornton; Jay E. (Watertown, MA), Pineau; Richard A. (North Andover, MA)
Assignee: Senshin Capital, LLC (Wilmington, DE)
Family ID: 25355598
Appl. No.: 11/847,894
Filed: August 30, 2007

Related U.S. Patent Documents

Application Number Filing Date Patent Number Issue Date
Reissue of: 09870537 May 30, 2001 6937365 Aug 30, 2005

Current U.S. Class: 358/1.9; 382/252; 358/3.03; 358/3.05; 358/3.04
Current CPC Class: H04N 1/4053 (20130101)
Current International Class: H04N 1/405 (20060101)
Field of Search: ;358/3.03,3.04,3.05,1.9,2.99,3.01,3.06 ;382/252

References Cited [Referenced By]

U.S. Patent Documents
3820133 June 1974 Adorney et al.
3864708 February 1975 Allen
4070587 January 1978 Hanakata
4072973 February 1978 Mayo
4089017 May 1978 Buldini
4154523 May 1979 Rising et al.
4168120 September 1979 Freier et al.
4284876 August 1981 Ishibashi et al.
4309712 January 1982 Iwakura
4347518 August 1982 Williams et al.
4364063 December 1982 Anno et al.
4385302 May 1983 Moriguchi et al.
4391535 July 1983 Palmer
4415908 November 1983 Sugiura
4443121 April 1984 Arai
4447818 May 1984 Kurata et al.
4464669 August 1984 Sekiya et al.
4514738 April 1985 Nagato et al.
4524368 June 1985 Inui et al.
4540992 September 1985 Moteki et al.
4563691 January 1986 Noguchi et al.
4607262 August 1986 Moriguchi et al.
4638372 January 1987 Leng et al.
4686549 August 1987 Williams et al.
4688051 August 1987 Kawakami et al.
4704620 November 1987 Ichihashi et al.
4738526 April 1988 Larish
4739344 April 1988 Sullivan et al.
4777496 October 1988 Maejima et al.
4805033 February 1989 Nishikawa
4809063 February 1989 Moriguchi et al.
4884080 November 1989 Hirahara et al.
4907014 March 1990 Tzeng et al.
4933709 June 1990 Manico et al.
4962403 October 1990 Goodwin et al.
5006866 April 1991 Someya
5045952 September 1991 Eschbach
5046118 September 1991 Ajewole et al.
5066961 November 1991 Yamashita
5086306 February 1992 Sasaki
5086484 February 1992 Katayama et al.
5109235 April 1992 Sasaki
5115252 May 1992 Sasaki
5130821 July 1992 Ng
5132703 July 1992 Nakayama
5132709 July 1992 West
5162813 November 1992 Kuroiwa et al.
5184150 February 1993 Sugimoto
5208684 May 1993 Itoh
5244861 September 1993 Campbell et al.
5248995 September 1993 Izumi
5268706 December 1993 Sakamoto
5285220 February 1994 Suzuki et al.
5307425 April 1994 Otsuka
5323245 June 1994 Rylander
5333246 July 1994 Nagasaka
5422662 June 1995 Fukushima et al.
5450099 September 1995 Stephenson et al.
5455685 October 1995 Mori
5469203 November 1995 Hauschild
5479263 December 1995 Jacobs et al.
5497174 March 1996 Stephany et al.
5521626 May 1996 Tanaka et al.
5539443 July 1996 Mushika et al.
5569347 October 1996 Obata et al.
5576745 November 1996 Matsubara
5602653 February 1997 Curry
5617223 April 1997 Burns et al.
5623297 April 1997 Austin et al.
5623581 April 1997 Attenberg
5625399 April 1997 Wiklof et al.
5642148 June 1997 Fukushima et al.
5644351 July 1997 Matsumoto et al.
5646672 July 1997 Fukushima
5664253 September 1997 Meyers
5668638 September 1997 Knox
5694484 December 1997 Cottrell et al.
5703644 December 1997 Mori et al.
5706044 January 1998 Fukushima
5707082 January 1998 Murphy
5711620 January 1998 Sasaki et al.
5719615 February 1998 Hashiguchi et al.
5721578 February 1998 Nakai et al.
5724456 March 1998 Boyack et al.
5729274 March 1998 Sato
5757976 May 1998 Shu
5777599 July 1998 Poduska, Jr.
5781315 July 1998 Yamaguchi
5784092 July 1998 Fukuoka
5786837 July 1998 Kaerts et al.
5786900 July 1998 Sawano
5800075 September 1998 Katsuma et al.
5808653 September 1998 Matsumoto et al.
5809164 September 1998 Hultgren, III
5809177 September 1998 Metcalfe et al.
5818474 October 1998 Takahashi et al.
5818975 October 1998 Goodwin et al.
5835244 November 1998 Bestmann
5835627 November 1998 Higgins et al.
5841461 November 1998 Katsuma
5859711 January 1999 Barry et al.
5870505 February 1999 Wober et al.
5880777 March 1999 Savoye et al.
5889546 March 1999 Fukuoka
5897254 April 1999 Tanaka et al.
5913019 June 1999 Attenberg
5956067 September 1999 Isono et al.
5956421 September 1999 Tanaka et al.
5970224 October 1999 Salgado et al.
5978106 November 1999 Hayashi
5995654 November 1999 Buhr et al.
5999204 December 1999 Kojima
6005596 December 1999 Yoshida et al.
6028957 February 2000 Katori et al.
6069982 May 2000 Reuman
6104421 August 2000 Iga et al.
6104468 August 2000 Bryniarski et al.
6104502 August 2000 Shiomi
6106173 August 2000 Suzuki et al.
6108105 August 2000 Takeuchi et al.
6128099 October 2000 Delabastita
6128415 October 2000 Hultgren, III et al.
6133983 October 2000 Wheeler
6157459 December 2000 Shiota et al.
6172768 January 2001 Yamada et al.
6186683 February 2001 Shibuki
6204940 March 2001 Lin et al.
6208429 March 2001 Anderson
6226021 May 2001 Kobayashi et al.
6233360 May 2001 Metcalfe et al.
6243133 June 2001 Spaulding et al.
6263091 July 2001 Jain et al.
6282317 August 2001 Luo et al.
6293651 September 2001 Sawano
6402283 June 2002 Schulte
6425699 July 2002 Doval
6447186 September 2002 Oguchi et al.
6456388 September 2002 Inoue et al.
6462835 October 2002 Loushin et al.
6501566 December 2002 Ishiguro et al.
6537410 March 2003 Arnost et al.
6563945 May 2003 Holm
6567111 May 2003 Kojima et al.
6577751 June 2003 Yamamoto
6583852 June 2003 Baum et al.
6608926 August 2003 Suwa
6614459 September 2003 Fujimoto et al.
6628417 September 2003 Naito et al.
6628823 September 2003 Holm
6628826 September 2003 Gilman et al.
6628899 September 2003 Kito
6650771 November 2003 Walker
6661443 December 2003 Bybell et al.
6671063 December 2003 Iida
6690488 February 2004 Reuman
6694051 February 2004 Yamazoe et al.
6711285 March 2004 Noguchi
6760489 July 2004 Kuwata
6762855 July 2004 Goldberg et al.
6771832 August 2004 Naito et al.
6819347 November 2004 Saquib et al.
6826310 November 2004 Trifonov et al.
6842186 January 2005 Bouchard et al.
6906736 June 2005 Bouchard et al.
6937365 August 2005 Gorian et al.
6956967 October 2005 Gindele et al.
6999202 February 2006 Bybell et al.
7050194 May 2006 Someno et al.
7092116 August 2006 Calaway
7127108 October 2006 Kinjo et al.
7129980 October 2006 Ashida
7154621 December 2006 Rodriguez et al.
7154630 December 2006 Nimura et al.
7167597 January 2007 Matsushima
7200265 April 2007 Imai
7224476 May 2007 Yoshida
7260637 August 2007 Kato
7272390 September 2007 Adachi et al.
7283666 October 2007 Saquib
7336775 February 2008 Tanaka et al.
7548260 June 2009 Yamaguchi
7557950 July 2009 Hatta et al.
2003/0021478 January 2003 Yoshida
2003/0038963 February 2003 Yamaguchi
2004/0073783 April 2004 Ritchie
2004/0179226 September 2004 Burkes et al.
2004/0207712 October 2004 Bouchard et al.
2005/0005061 January 2005 Robins
2005/0219344 October 2005 Bouchard
2007/0036457 February 2007 Saquib
2008/0017026 January 2008 Dondlinger
2009/0128613 May 2009 Bouchard et al.
Foreign Patent Documents
0 204 094 Apr 1986 EP
0 454 495 Oct 1991 EP
0 454 495 Oct 1991 EP
0 619 188 Oct 1994 EP
0 625 425 Nov 1994 EP
0 626 611 Nov 1994 EP
0 791 472 Feb 1997 EP
0 762 736 Mar 1997 EP
0 773 470 May 1997 EP
0 939 359 Sep 1999 EP
1 004 442 May 2000 EP
1 056 272 Nov 2000 EP
1 078 750 Feb 2001 EP
1 137 247 Sep 2001 EP
1 201 449 Oct 2001 EP
1 392 514 Sep 2005 EP
0 933 679 Apr 2008 EP
1 393 544 Feb 2010 EP
2 356 375 May 2001 GB
58-164368 Sep 1983 JP
59-127781 Jul 1984 JP
63-209370 Aug 1988 JP
01 040371 Feb 1989 JP
02-248264 Oct 1990 JP
02-289368 Nov 1990 JP
03-024972 Feb 1991 JP
03-222588 Oct 1991 JP
04-008063 Jan 1992 JP
4-119338 Apr 1992 JP
05-136998 Jun 1993 JP
06 183033 Jul 1994 JP
06 266514 Sep 1994 JP
06-292005 Oct 1994 JP
6-308632 Nov 1994 JP
06-350888 Dec 1994 JP
08-3076999 Nov 1996 JP
9-138465 May 1997 JP
09 167129 Jun 1997 JP
10-285390 Oct 1998 JP
11-055515 Feb 1999 JP
11 505357 May 1999 JP
11-275359 Oct 1999 JP
2000-050077 Feb 2000 JP
2000-050080 Feb 2000 JP
2000-184270 Jun 2000 JP
2001-160908 Jun 2001 JP
2001-273112 Oct 2001 JP
2002 199221 Jul 2002 JP
2002 247361 Aug 2002 JP
2003-008986 Jan 2003 JP
2001-0037684 May 2001 KR
WO 9734257 Sep 1997 WO
WO 99 53415 Oct 1999 WO
WO 00/04492 Jan 2000 WO
WO 01/01669 Jan 2001 WO
WO 01/031432 May 2001 WO
WO 02/078320 Oct 2002 WO
WO 02/096651 Dec 2002 WO
WO 02/098124 Dec 2002 WO
WO 03/071780 Aug 2003 WO
WO 04/077816 Sep 2004 WO
WO 05 006200 Jan 2005 WO

Other References

Bhukhanwala et al., "Automated Global Enhancement of Digitalized Photographs," IEEE Transactions on Consumer Electronics, Feb. 1994. cited by other .
Hann, R.A. et al., "Chemical Technology in Printing and Imaging Systems", The Royal Society of Chemistry, Special Publication. 133 (1993), pp. 73-85. cited by other .
Hann, R.A. et al., "Dye Diffusion Thermal Transfer (D2T2) Color Printing", Journal of Imaging Technology., 16 (6). (1990), pp. 238-241. cited by other .
Kearns et al., "Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation," XP-002299710, Jan. 1997, 1-20. cited by other .
Taguchi et al., "New Thermal Offset Printing Employing Dye Transfer Technology (Tandem TOP-D)," NIP17: International Conference on Digital Printing Technologies, Sep. 2001, vol. 17, pp. 499-503. cited by other .
Weston et al., "Adaptive Margin Support Vector Machines," Advances in Large Margin Classifiers, 2000, 281-296. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated Sep. 30, 2003, U.S. Appl. No. 10/078,644, filed Feb. 19, 2002. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated Oct. 2, 2003, U.S. Appl. No. 10/080,883, filed Feb. 2, 2002. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Oct. 2, 2004, U.S. Appl. No. 10/080,833, filed Feb. 22, 2003. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Sep. 22, 2003, U.S. Appl. No. 10/078,644, filed Feb. 19, 2002. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated Sep. 23, 2004, U.S. Appl. No. 10/080,883, filed Feb. 22, 2003. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Nov. 29, 2004, U.S. Appl. No. 09/817,932, filed Mar. 27, 2001. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Nov. 29, 2004, U.S. Appl. No. 09/870,537, filed May 30, 2001. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated Feb. 22, 2005, U.S. Appl. No. 10/078,644, filed Feb. 19, 2002. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated May 9, 2005, U.S. Appl. No. 09/870,537, filed May 30, 2001. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated Aug. 31, 2005, U.S. Appl. No. 09/817,932, filed Mar. 27, 2001. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Jul. 13, 2006, U.S. Appl. No. 10/375,440, filed Feb. 27, 2003. cited by other .
United States Patent and Trademark Office: Final Office Action dated Dec. 4, 2006, U.S. Appl. No. 10/375,440, filed Feb. 27, 2003. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated May 29, 2007, U.S. Appl. No. 10/375,440, filed Feb. 27, 2003. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated Jun. 29, 2007, U.S. Appl. No. 10/611,737, filed Jul. 1, 2003. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated Sep. 4, 2007, U.S. Appl. No. 10/844,286, filed May 12, 2004. cited by other .
United States Patent and Trademark Office: Notice of Allowance dated Sep. 6, 2007, U.S. Appl. No. 10/375,440, filed Feb. 27, 2003. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Oct. 4, 2007, U.S. Appl. No. 10/611,737, filed Jul. 1, 2003. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Nov. 14, 2007, U.S. Appl. No. 10/844,286, filed May 12, 2004. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Mar. 20, 2008, U.S. Appl. No. 10/844,286, filed May 12, 2005. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Jun. 18, 2008, U.S. Appl. No. 10/611,737, filed Jul. 1, 2003. cited by other .
United States Patent and Trademark Office: Final Office Action dated Sep. 12, 2008, U.S. Appl. No. 10/844,286, filed May 12, 2004. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated Oct. 8, 2008, U.S. Appl. No. 11/546,633, filed Oct. 12, 2006. cited by other .
United States Patent and Trademark Office: Final Office Action dated Jan. 28, 2009, U.S. Appl. No. 10/611,737, filed Jul. 1, 2003. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Jan. 30, 2009, U.S. Appl. No. 11/546,633, filed Oct. 12, 2006. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated May 21, 2009, U.S. Appl. No. 10/844,286, filed May 12, 2004. cited by other .
United States Patent and Trademark Office: Restriction Requirement dated May 26, 2009, U.S. Appl. No. 11/546,633, filed Oct. 12, 2006. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Jun. 10, 2009, U.S. Appl. No. 10/611,737, filed Jul. 1, 2003. cited by other .
United States Patent and Trademark Office: Final Office Action dated Jul. 9, 2009, U.S. Appl. No. 11/546,633, filed Oct. 12, 2006. cited by other .
United States Patent and Trademark Office: Non-Final Office Action dated Jul. 31, 2009, U.S. Appl. No. 12/031,151, filed Feb. 14, 2008. cited by other .
United States Patent and Trademark Office: U.S. Appl. No. 12/031,151, filed Feb. 14, 2008, Bybell. cited by other .
International Preliminary Examination Report (IPER) dated Jun. 30, 2003, PCT/US02/015546. cited by other .
EP Communication issued by the Examining Division Apr. 2, 2004, EP1392514. cited by other .
International Preliminary Examination Report (IPER) issued Sep. 2, 2005, PCT/US04/004964. cited by other .
EP Communication issued by the Examining Division Jan. 11, 2006, EP1597911. cited by other .
EP Communication issued by the Examining Division May 23, 2006, EP1597911. cited by other .
International Preliminary Examination Report (IPER) issued Jan. 3, 2006, PCT/US04/020981. cited by other .
International Preliminary Examination Report (IPER) dated Sep. 17, 2003, PCT/US02/015913. cited by other .
EP Communication issued by the Examining Division May 29, 2009, EP1479220. cited by other .
International Preliminary Examination Report (IPER) dated Jan. 29, 2003, PCT/US02/008954. cited by other .
EP Communication issued by the Examining Division Jul. 7, 2009, EP1374557. cited by other .
EPC Application No. 1597911: Communication issued by the Examining Division dated May 26, 2010, 8 pages. cited by other .
EPC Application No. 1393544: Communication issued by the Examining Division dated Jan. 15, 2009, 7 pages. cited by other .
International Application No. PCT/US02/015913: International Search Report mailed Oct. 11, 2002, 2 pages. cited by other .
International Application No. PCT/US02/018528: International Search Report mailed Oct. 31, 2002, 3 pages. cited by other .
International Application No. PCT/US02/18528: International Preliminary Examination Report (IPER) dated Apr. 4, 2003, 2 pages. cited by other .
International Application No. PCT/US04/020981: International Search Report mailed Mar. 15, 2005, 6 pages. cited by other .
Japanese Application No. 2003-501190: Notice of Reasons of Rejection dated Dec. 15, 2006, 5 pages. cited by other .
Japanese Application No. 2008-096460: Notice of Reasons of Rejection dated Jul. 30, 2010, 4 pages. cited by other .
Japanese Application No. 2008-213280: Notice of Reasons of Rejection dated Feb. 5, 2010, 6 pages. cited by other .
Ulichney, R., "Digital Halftoning," MIT Press, 1987, pp. 239-319, 341. cited by other .
Pratt, W.K., "Digital Image Processing," Wiley & Sons, 1978, 311-318. cited by other .
Gonzalez et al., "Digital Image Processing," Addison-Wesley, 1977, 119-126. cited by other .
Wong, P.W., "Adaptive Error Diffusion and Its Application in Multiresolution Rendering," IEEE Trans. On Image Processing, 1996, 5(7), 1184-1196. cited by other .
Damera-Venkata et al., "Adaptive Threshold Modulation for Error Diffusion Halftoning," IEEE Trans. On Image Processing, 2001, 10(1), 104-116. cited by other .
Knox et al., "Threshold Modulation In Error Diffusion," SPIE, 1993, 2(3), 185-192. cited by other.

Primary Examiner: Lee; Thomas D
Attorney, Agent or Firm: Woodcock Washburn LLP

Claims



What is claimed is:

1. A method of generating a halftone image from an input digital image, said .Iadd.input .Iaddend.digital image represented by a multiplicity of pixels, each pixel having a given value, .[.said values being stored in a memory,.]. said method comprising .[.the steps of.].: .[.(A).]. determining .[.the.]. .Iadd.one or more .Iaddend.properties .[.including local properties.]. of the .Iadd.input .Iaddend.digital image; .[.(B).]. filtering the input digital image, said filtering having as output a filtered value at each pixel; .[.(C).]. obtaining the difference between the value at .[.the.]. .Iadd.a .Iaddend.pixel and the filtered value at the pixel, said difference being a threshold input; .[.(D).]. generating .[.the.]. .Iadd.an .Iaddend.output state for the pixel depending upon the relationship of the .[.value of said.]. threshold input relative to a threshold; .[.(E).]. producing an error value, said error value being indicative of the deviation of said threshold input from the output state; .[.(F).]. multiplying said error value by a coefficient, the result of said multiplication being stored; .[.(G).]. combining the stored value with the difference between the next pixel value and the next filtered value to produce a new threshold input; .[.(H).]. repeating .[.steps (D) through (G).]. .Iadd.the generating an output state, the producing an error value, the multiplying said error value, and the combining the stored error value .Iaddend.for each pixel in the .Iadd.input .Iaddend.digital image thereby producing a halftone image; .[.and.]. varying the threshold according to .Iadd.the one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image; and selectively changing the coefficient .[.in step (E).]. according to the .[.local.]. .Iadd.one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image.

2. The method of claim 1 further comprising .[.the step of.].: performing a histogram modification of the image pixels, before .[.step (B).]. .Iadd.filtering the input digital image.Iaddend..

3. The method of claim 1 further comprising the step of: performing a histogram modification of the difference between the value at the pixel and the filtered value at the pixel, before .[.step (D).]. .Iadd.generating the output state.Iaddend..

4. The method of claim 1 wherein the selectively changing of the coefficient comprises: dividing a first function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image by a second function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image; and multiplying the absolute value of the result of said division by a first parameter; and adding a second parameter to the result of the multiplication, thereby obtaining the coefficient.

5. The method of claim 4 wherein said first function is the difference between the value at the pixel and the filtered value at the pixel and said second function is the filtered value at the pixel.

6. The method of claim 4 wherein the threshold is a third function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image.

7. The method of claim 6 wherein said third function is a linear function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image.

.[.8. The method of claim 6 wherein said third function is a linear function of the local values of the digital image..].

9. The method of claim 4 wherein the threshold is the filtered value at the pixel multiplied by a third parameter.

10. The method of claim 9 wherein the .[.filter in step (B) is.]. .Iadd.filtering comprises using .Iaddend.a filter of finite extent, the extent of the filter, the first .Iadd.parameter.Iaddend., .Iadd.the .Iaddend.second .[.parameters.]. .Iadd.parameter .Iaddend.and .Iadd.the .Iaddend.third .[.parameters.]. .Iadd.parameter .Iaddend.being selected to produce .[.the.]. .Iadd.an .Iaddend.image of highest perceptual quality at a specific output device.

11. The method of claim 9 further comprising .[.the step of.].: performing a histogram modification of the difference between the value at the pixel and the filtered value at the pixel, before .[.step (D).]. .Iadd.generating the output state.Iaddend..

12. The method of claim 1 wherein the input digital image is a monochrome image.

13. The method of claim 1 wherein the input digital image is a color image.

14. A system for generating a halftone image from an input digital image, said .Iadd.input .Iaddend.digital image represented by a multiplicity of pixels, each pixel having a given value, .[.said values being stored in a memory,.]. said .[.apparatus.]. .Iadd.system .Iaddend.comprising: means for determining .[.the.]. .Iadd.one or more .Iaddend.properties .[.including local properties.]. of said .Iadd.input .Iaddend.digital image; and means for retrieving the pixel values; and means for filtering the input digital image, said filtering having as output a filtered value at each pixel; and means for obtaining the difference between the value at .[.the.]. .Iadd.a .Iaddend.pixel and the filtered value at the pixel, said difference being a threshold input; and means for producing an error value, said error value being indicative of the deviation of said threshold input from .[.the.]. .Iadd.an .Iaddend.output state; and means for multiplying said error value by an adaptation coefficient to obtain a diffused value and means for storing the diffused value and delaying said stored .Iadd.diffused .Iaddend.value by one pixel; and means for combining the stored delayed diffused value with the difference between the pixel value and the filtered value; and means for varying .[.the.]. .Iadd.a .Iaddend.threshold according to the .Iadd.one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image at the pixel value; and means for selectively changing the adaptation coefficient according to the .[.local.]. .Iadd.one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image.

15. The system of claim 14 further comprising: means performing a histogram modification of the image pixels.

16. The system of claim 14 further comprising: means for performing a histogram modification of the difference between the value at the pixel and the filtered value at the pixel.

17. The system of claim 14 wherein the means for selectively changing of the adaptation coefficient comprise: means for dividing a first function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image by a second function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image; and means for multiplying the absolute value of the result of said division by a first parameter; and adding a second parameter to the result of the multiplication, thereby obtaining the .Iadd.adaptation .Iaddend.coefficient.

18. A computer program product comprising: a computer usable .Iadd.storage .Iaddend.medium having computer readable code embodied therein for generating a halftone image from an input digital image, said .Iadd.input .Iaddend.digital image represented by a multiplicity of pixels, each pixel having a given value, .[.said values being stored in a memory,.]. said code .[.causing.]. .Iadd.comprising instructions for .Iaddend.a computer system .[.to:.]..Iadd., the instructions comprising:.Iaddend. .Iadd.instructions to .Iaddend.determine .[.the.]. .Iadd.one or more .Iaddend.properties .[.including local properties.]. of said .Iadd.input .Iaddend.digital image; and .Iadd.instructions to .Iaddend.retrieve the pixel values; and .Iadd.instructions to .Iaddend.filter the .Iadd.input .Iaddend.digital image, said filtering having as output a filtered value at each pixel; and .Iadd.instructions to .Iaddend.obtain the difference between the value at .[.the.]. .Iadd.a .Iaddend.pixel and the filtered value at the pixel, said difference being a threshold input; and .Iadd.instructions to .Iaddend.produce an error value, said error value being indicative of the deviation of said threshold input from .[.the.]. .Iadd.an .Iaddend.output state; and .Iadd.instructions to .Iaddend.multiply said error value by an adaptation coefficient to obtain a diffused value; and .Iadd.instructions to .Iaddend.store the diffused value and .[.delaying.]. .Iadd.delay .Iaddend.said stored .Iadd.diffused .Iaddend.value by one pixel; and .Iadd.instructions to .Iaddend.combine the stored delayed diffused value with the difference between the pixel value and the filtered value; and .Iadd.instructions to .Iaddend.vary .[.the.]. .Iadd.a .Iaddend.threshold according to the .Iadd.one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image at the pixel value; and .Iadd.instructions to .Iaddend.selectively change the adaptation coefficient according to the .[.local.]. 
.Iadd.one or more .Iaddend.properties of the .Iadd.input .Iaddend.digital image.

19. The computer program product of claim 18 .[.where, the computer readable code further causes the computer system to.]. .Iadd.wherein the instructions further comprise.Iaddend.: .Iadd.instructions to .Iaddend.perform a histogram modification of the image pixels.

20. The computer program product of claim 18 .[.where, the computer readable code further causes the computer system to.]. .Iadd.wherein the instructions further comprise.Iaddend.: .Iadd.instructions to .Iaddend.perform a histogram modification of the difference between the value at the pixel and the filtered value at the pixel.

21. The computer program product of claim 18 .[.where, the computer readable code in causing the computer system.]. .Iadd.wherein the instructions .Iaddend.to selectively change the adaptation coefficient.[., further causes the computer system to.]. .Iadd.comprise.Iaddend.: .Iadd.instructions to .Iaddend.divide a first function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image by a second function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image; and .Iadd.instructions to .Iaddend.multiply the absolute value of the result of said division by a first parameter; and .Iadd.instructions to .Iaddend.add a second parameter to the result of the multiplication, thereby obtaining the .Iadd.adaptation .Iaddend.coefficient.

22. The computer program product of claim 21 wherein said first function is the difference between the value at the pixel and the filtered value at the pixel and said second function is the filtered value at the pixel.

23. The computer program product of claim 22 wherein .[.said.]. the threshold is the filtered value at the pixel multiplied by a third parameter.

24. The computer program product of claim 23 wherein the filter used to filter the .Iadd.input .Iaddend.digital image is a filter of finite extent, the extent of the filter, the first .Iadd.parameter.Iaddend., .Iadd.the .Iaddend.second .[.parameters.]. .Iadd.parameter .Iaddend.and third .[.parameters.]. .Iadd.parameter .Iaddend.being selected to produce .[.the.]. .Iadd.an .Iaddend.image of highest quality at a specific output device.

25. The computer program product of claim .[.25 where, the computer readable code further causes the computer system to.]. .Iadd.18 wherein the instructions further comprise.Iaddend.: .Iadd.instructions to .Iaddend.perform a histogram modification of the difference between the value at the pixel and the filtered value at the pixel.

26. The computer program product of claim 21 wherein the threshold is a third function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image.

27. The computer program product of claim 26 wherein said third function is a linear function of the .[.local.]. .Iadd.pixel .Iaddend.values of the .Iadd.input .Iaddend.digital image.

.[.28. The computer program product of claim 26 wherein said third function is a linear function of the local values of the digital image..].

29. The computer program product of claim 18 wherein the input digital image is a color image.

30. The computer program product of claim 18 wherein the input digital image is a monochrome image.

.Iadd.31. The system of claim 14, further comprising: a rendering device..Iaddend.

.Iadd.32. The system of claim 31, wherein said rendering device is a binary output device..Iaddend.

.Iadd.33. The system of claim 31, wherein said rendering device is a M-ary display or a M-ary rendering device..Iaddend.

.Iadd.34. The system of claim 31, wherein said rendering device is a mobile phone display..Iaddend.

.Iadd.35. A mobile device capable of generating a halftone image from an input digital image, said input digital image represented by a multiplicity of pixels, each pixel having a given value, said mobile device comprising: means for determining one or more properties of said input digital image; means for retrieving the pixel values; means for filtering the input digital image, said filtering having as output a filtered value at each pixel; means for obtaining the difference between the value at a pixel and the filtered value at the pixel, said difference being a threshold input; means for producing an error value, said error value being indicative of the deviation of said threshold input from an output state; means for multiplying said error value by an adaptation coefficient to obtain a diffused value and means for storing the diffused value and delaying said stored diffused value by one pixel; means for combining the stored delayed diffused value with the difference between the pixel value and the filtered value; means for varying a threshold according to the one or more properties of the input digital image at the pixel value; means for selectively changing the adaptation coefficient according to the one or more properties of the input digital image; and a rendering device..Iaddend.
Description



BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to the rendering of digital image data, and in particular, to the binary or multilevel representation of images for printing or display purposes.

2. Background Description

Since images constitute an effective means of communicating information, displaying images should be as convenient as displaying text. However, many display devices, such as laser and ink jet printers, print only in a binary fashion. Furthermore, some image format standards only allow binary images. For example, the WAP 1.1 (Wireless Application Protocol) specification allows for only one graphic format, WBMP, a one (1) bit version of the BMP (bitmap) format. Besides allowing only binary images, some image format standards and some displays only allow images of a limited number of pixels. In the WAP 1.1 standard, a WBMP image should not be larger than 150.times.150 pixels. Some WAP devices have screens that are very limited in terms of the number of pixels. For example, one WAP device has a screen that is 96 pixels wide by 65 pixels high. In order to render a digitized continuous tone input image using a binary output device, the image has to be converted to a binary image.

The process of converting a digitized continuous tone input image to a binary image so that the binary image appears to be a continuous tone image is known as digital halftoning.

In one type of digital halftoning process, ordered dither digital halftoning, the input digitized continuous tone image is compared, on a pixel by pixel basis, to a threshold taken from a threshold array. Many ordered dither digital halftoning methods suffer from low frequency artifacts. Because the human vision system has greater sensitivity at low frequencies (less than 12 cycles/degree), such low frequency artifacts are very noticeable.

The visibility of low frequency artifacts in ordered dither digital halftoning methods has led to the development of methods producing binary images with a power spectrum having mostly higher frequency content, the so called "blue noise methods".

The most frequently used "blue noise method" is the error diffusion method. In an error diffusion halftoning system, an input digital image I.sub.n (the digitized continuous tone input image) is introduced into the system on a pixel by pixel basis, where n represents the input image pixel number. Each input pixel has its corresponding error value E.sub.n-1, where E.sub.n-1 is the error value of the previous pixel (n-1), added to the input value I.sub.n at a summing node, resulting in modified image data. The modified image data, the sum of the input value and the error value of the previous pixel (I.sub.n+E.sub.n-1), is passed to a threshold comparator. The modified image data is compared to the constant threshold value T.sub.0 to determine the appropriate output level O.sub.n. Once the output level O.sub.n is determined, it is subtracted from the modified image value to produce the input to an error filter. The error filter allocates its input, I.sub.n+E.sub.n-1-O.sub.n, to subsequent pixels based upon an appropriate weighting scheme. Various weighting techniques may be used to generate the error level E.sub.n for the subsequent input pixel. The cyclical processing of pixels is continued until the end of the input data is reached. (For a more complete description of error diffusion see, for example, "Digital Halftoning", by Robert Ulichney, MIT Press, Cambridge, Mass. and London, England, 1990, pp. 239-319).
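As a concrete illustration, the basic (non-adaptive) error diffusion loop described above can be sketched in Python. The single-tap error filter (all of the quantization error carried to the next pixel) and the 0/255 output levels are simplifying assumptions made for brevity; practical systems distribute the error over several neighbors with fixed weights.

```python
def error_diffuse_1d(pixels, threshold=128, white=255, black=0):
    """Plain (non-adaptive) error diffusion over a 1-D pixel sequence.

    Each pixel I_n has the previous error E_{n-1} added to it, the sum is
    compared to a constant threshold T_0, and the residual is carried
    forward as the error for the next pixel (single-tap error filter).
    """
    output = []
    error = 0.0                                   # E_{n-1}; zero before pixel 0
    for value in pixels:
        modified = value + error                  # I_n + E_{n-1}
        o = white if modified > threshold else black   # compare to T_0
        error = modified - o                      # residual, all to next pixel
        output.append(o)
    return output
```

For a flat mid-gray input the loop alternates output states so that the local average of the binary output tracks the input level, which is the essential behavior of error diffusion.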

Although the error diffusion method presents an improvement over many ordered dither methods, artifacts are still present. There is an inherent edge enhancement in the error diffusion method. Other known artifacts produced by the error diffusion method include artifacts called "worms" and "snowplowing" which degrade image quality.

In U.S. Pat. No. 5,045,952, Eschbach disclosed selectively modifying the threshold level on a pixel by pixel basis in order to increase or decrease the edge enhancement of the output digital image. The improvements disclosed by Eschbach do not allow the control of the edge enhancement by controlling the high frequency portion of the error. Also, the improvements disclosed by Eschbach do not introduce parameters that can be selected to produce the image of the highest perceptual quality at a specific output device.

In U.S. Pat. No. 5,757,976, Shu disclosed utilizing a set of error filters having different sizes for diffusing the input of the error filter among neighboring pixels in predetermined tonal areas of an image and adding "noise" to the threshold in order to achieve a smooth halftone image quality. The improvements disclosed by Shu do not introduce parameters that can be selected to produce the image of the highest perceptual quality at a specific output device.

SUMMARY OF THE INVENTION

It is the primary object of this invention to provide a method for generating a halftone image from a digitized continuous tone input image that provides adjustment of the local contrast of the resulting halftone image, minimizes artifacts and is easily implemented.

It is also an object of this invention to provide a method for generating a halftone image with parameters that can be selected to produce the image of highest quality at a specific output device.

To achieve the objects of this invention, one aspect of this invention includes an adaptive halftoning method where the difference between a digital image and a filtered digital image is introduced into the system on a pixel by pixel basis; each input difference pixel having a corresponding error value, generated from the previous pixels, added to the input value at a summing node, resulting in modified image difference data; the modified image difference data being passed to a threshold comparator where the modified image difference data is compared to a threshold value, the threshold value varying according to the properties of the digital image, to determine the appropriate output level; the output level is subtracted from the modified image difference value to produce the input to an error filter; the output of the error filter is multiplied by an adaptation coefficient, where the adaptation coefficient varies according to the properties of the digital image, to generate the error level for the subsequent input pixel; and, the cyclical processing of pixels is continued until the end of the input data is reached.

In another aspect of this invention, in the method described above, a histogram modification is performed on the image, and the difference between the histogram modified digital image and the filtered digital image is introduced into the system on a pixel by pixel basis.

In still another aspect of this invention, in the method described above, the histogram modification is performed on the difference between the digital image and the filtered digital image and the histogram modified difference is introduced into the system on a pixel by pixel basis.

In a further aspect of this invention, in the method described above, the selective changing of the adaptation coefficient comprises dividing the difference between the value at the pixel and the filtered value at the pixel by the filtered value at the pixel, multiplying the absolute value of the result of the division by a first parameter, and adding a second parameter to the result of the multiplication, thereby obtaining the coefficient.

In still another aspect of this invention, in the method described above, the threshold calculation comprises multiplying the filtered value at the pixel by a third parameter.

In still another aspect of this invention, in the method described above and including the adaptation coefficient and threshold calculated as in the two preceding paragraphs, where the filter is a filter of finite extent, the extent of the filter and the first, second, and third parameters are selected to produce the image of the highest perceptual quality at a specific output device.

The methods, systems and computer readable code of this invention can be used to generate halftone images in order to obtain images of the highest perceptual quality when rendered on displays and printers. The methods, systems and computer readable code of this invention can also be used for the design of computer generated holograms and for the encoding of the continuous tone input data.

DESCRIPTION OF THE DRAWINGS

The novel features that are considered characteristic of the invention are set forth with particularity in the appended claims. The invention itself, however, both as to its organization and its method of operation, together with other objects and advantages thereof will be best understood from the following description of the illustrated embodiment when read in connection with the accompanying drawings wherein:

FIG. 1a depicts a block diagram of selected components of an embodiment of a system of this invention for generating a halftone image from a digitized continuous tone input image, where the histogram modification block is included after the summing node; and,

FIG. 1b depicts a block diagram of selected components of an embodiment of a system of this invention for generating a halftone image from a digitized continuous tone input image, where the histogram modification block is included before the summing node; and,

FIG. 1c depicts a block diagram of selected components of an embodiment of a system of this invention for generating a halftone image from a digitized continuous tone input image, where the adaptation coefficient multiplies the input to the error filter block; and

FIG. 2 depicts a block diagram of selected components of another embodiment of the system of this invention for generating a halftone image from a digitized continuous tone input image; and

FIG. 2a depicts a block diagram of selected components of another embodiment of the system of this invention for generating a halftone image from a digitized continuous tone input image, where the adaptation coefficient multiplies the input to the error filter block.

DETAILED DESCRIPTION

A method and system for generating a halftone image from a digitized continuous tone input image are disclosed that provide adjustment of the local contrast of the resulting halftone image, minimize artifacts, are easily implemented, and contain parameters that can be selected on the basis of device characteristics, such as brightness, dynamic range, and pixel count, to produce the image of highest perceptual quality at a specific output device.

A block diagram of selected components of an embodiment of a system of this invention for generating a halftone image from a digitized continuous tone input image (also referred to as a digital image) is shown in FIG. 1a. Referring to FIG. 1a, image input block 10 introduces an input digital image I.sub.n into the system on a pixel by pixel basis, where n represents the input image pixel number. The input image is also provided to the filtering block 20. The output of filtering block 20 has the form Av.sub.n=h( . . . ,I.sub.k, . . . , I.sub.n, . . . ) (1) where h is a functional form spanning a number of pixels. It should be apparent that the input digital image 10 can be a two dimensional array of pixel values and that the array can be represented as a linear array by using such approaches as raster or serpentine representations. For a two dimensional array of pixel values, the filter 20 will also be a two dimensional array of filter coefficients and can also be represented as a linear array. The functional forms will be shown in the one dimensional form for ease of interpretation.

In one embodiment, the output of the filtering block 20 has the form Av.sub.n={.SIGMA..sub.n-N.sup.n+NI.sub.j}/(2N+1) (2) where the sum runs over the 2N+1 pixels centered on pixel n. If the filtering block 20 comprises a linear filter, Av.sub.n will be given by a sum of terms, each term comprising the product of an input image pixel value multiplied by a filter coefficient.

It should be apparent that special consideration has to be given to the pixels at the boundaries of the image. For example, the calculations can be started N pixels from the boundary in Equation (2). In that case the calculated and halftone images are smaller than the input image. In another case, the image is continued at the boundaries, the continuation pixels having the same value as the boundary pixel. It should be apparent that other methods of taking into account the effect of the boundaries can be used.
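The moving-average filter of Equation (2), combined with the edge-replication boundary treatment mentioned above, can be sketched as follows (the function name is illustrative):

```python
def moving_average(pixels, N):
    """Av_n of Equation (2): mean of the 2N+1 pixels centered on pixel n.

    Boundaries are handled by replicating the edge pixel, one of the
    boundary treatments described in the text.
    """
    M = len(pixels)
    out = []
    for n in range(M):
        total = 0
        for j in range(n - N, n + N + 1):
            total += pixels[min(max(j, 0), M - 1)]  # clamp index = replicate edge
        out.append(total / (2 * N + 1))
    return out
```

Clamping the index is equivalent to continuing the image past its borders with the boundary pixel value, so the filtered image keeps the same size as the input.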

The output of the filtering block 20, Av.sub.n, is subtracted from the input digital image I.sub.n at node 25, resulting in a difference value, D.sub.n. In the embodiment in which histogram modification is not included, D.sub.n is the input to a summing node 70. At the summing node 70, a corresponding error value E.sub.n-1, where E.sub.n-1 is the error value accumulated from the previous pixels, is added to the input value D.sub.n, resulting in a modified image datum. The modified image datum, D.sub.n+E.sub.n-1, is compared to the output of the threshold calculation block 30 in the threshold comparison block 40 to produce the halftoning output, O.sub.n. (In the case of a binary output device, if the modified image datum is above the threshold, the output level is the white level. Otherwise, the output level is the black level.) Once the output level O.sub.n is determined, it is subtracted from the modified image value to produce the input to an error filter block 50. The error filter block 50 allocates its input, D.sub.n+E.sub.n-1-O.sub.n, to subsequent pixels based upon an appropriate weighting scheme. The weighted contributions of the error filter block 50 input are stored and all the contributions to the next input pixel are summed to produce the output of the error filter block 50, the error value. The output of the error filter block 50, the error value, is multiplied by the adaptation coefficient in block 60 to generate the error level E.sub.n for the subsequent input pixel. The cyclical processing of pixels, as further described below, is continued until the end of the input data is reached.

Referring again to FIG. 1a, the input image is also provided to the threshold calculation block 30. The output of the threshold calculation block 30 has the form t( . . . , I.sub.k, . . . , I.sub.n, . . . ) (3) where t is a functional form spanning a number of pixels. The form in Equation (3) allows the varying of the threshold according to properties of the digital image.

In one embodiment, t( . . . ,I.sub.k, . . . , I.sub.n, . . . )=C.sub.0{.SIGMA..sub.n-N.sup.n+NI.sub.j}/(2N+1) (4) In another embodiment, the output of the threshold calculation block is a linear combination of terms, each term comprising the product of an input image pixel value multiplied by a coefficient. It should be apparent that this embodiment can also be expressed as a function times a parameter. The output of the threshold calculation block 30 is the threshold.

The first pixel value to be processed, I.sub.0, produces a difference value D.sub.0 from summing node 25 and produces a value of D.sub.0 out of summing node 70 (since E.sub.-1 is equal to 0). D.sub.0 is then compared to the threshold, producing an output of O.sub.0. At summing node 45, O.sub.0 is subtracted from D.sub.0 to produce the input to the error filter 50. The error filter 50 allocates its input, D.sub.0-O.sub.0, to subsequent pixels based upon an appropriate weighting scheme which determines how much the current input contributes to each subsequent pixel. Various weighting techniques may be used (see, for example, "Digital Halftoning" by Robert Ulichney, MIT Press, Cambridge, Mass. and London, England, 1990, pp. 239-319). The output of error filter 50 is multiplied by an adaptation coefficient 60. The adaptation coefficient 60 is the output of the coefficient calculation block 80. In one embodiment, the output of the coefficient calculation block 80 has the form C.sub.1+C.sub.2abs{f( . . . ,I.sub.k, . . . , I.sub.n, . . . )/g( . . . ,I.sub.k, . . . , I.sub.n, . . . )} (5) where f and g are functional forms spanning a number of pixels. The form of Equation (5) allows the selective changing of the coefficient according to the local properties of the digital image. C.sub.1 and C.sub.2 and the parameter in the threshold expression can be selected to produce the image of highest perceptual quality at a specific output device.

In another embodiment, the output of the coefficient calculation block 80 has the form C.sub.1+C.sub.2{abs((I.sub.n-({.SIGMA..sub.n-N.sup.n+NI.sub.j}/(2N+1)))/({.SIGMA..sub.n-N.sup.n+NI.sub.j}/(2N+1)))} (6)
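Under the forms of Equations (4) and (6), the threshold and the adaptation coefficient reduce to two small functions of the local average. A minimal sketch (the function names are illustrative; Equation (6) assumes a nonzero local average):

```python
def threshold_value(av_n, C0):
    """Equation (4): the threshold is the local average Av_n scaled by C0."""
    return C0 * av_n

def adaptation_coefficient(i_n, av_n, C1, C2):
    """Equation (6): C1 + C2 * |(I_n - Av_n) / Av_n|, with Av_n != 0."""
    return C1 + C2 * abs((i_n - av_n) / av_n)
```

The ratio in the coefficient is a local-contrast measure, so C.sub.2 scales the sensitivity to contrast while C.sub.1 sets a floor for the diffused error.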

The input of error filter block 50 is multiplied by weighting coefficients and stored. All the contributions from the stored weighted values to the next pixel are summed to produce the output of the error filter block 50. The output of the error filter block 50 is multiplied by the adaptation coefficient 60. The delay block 65 stores the result of the product of the adaptation coefficient 60 and the output of the error filter block 50. (In one embodiment, the Floyd-Steinberg filter, the input to the error filter is distributed according to the filter weights to the next pixel in the processing line and to neighboring pixels in the following line.) The output of delay block 65 is E.sub.n-1 and is delayed by one pixel. (When the first pixel is processed, the output of the delay, E.sub.0, is added to the subsequent difference, D.sub.1.)
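The Floyd-Steinberg distribution mentioned parenthetically above can be sketched as follows. The (row, column) offsets and sixteenth-denominator weights are the standard Floyd-Steinberg values, and the dictionary stands in for the stored weighted contributions that are later summed per pixel:

```python
# Floyd-Steinberg weights: the error at (row, col) is spread to the next
# pixel on the same line and to three neighbors on the following line.
FS_WEIGHTS = [((0, 1), 7 / 16),
              ((1, -1), 3 / 16),
              ((1, 0), 5 / 16),
              ((1, 1), 1 / 16)]

def distribute_error(acc, row, col, err):
    """Accumulate the weighted shares of `err` into the dict `acc`."""
    for (dr, dc), w in FS_WEIGHTS:
        key = (row + dr, col + dc)
        acc[key] = acc.get(key, 0.0) + err * w
```

Because the four weights sum to one, the full error is conserved: the shares stored for the neighbors always add back up to the original error value.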

It should be apparent that the sequence order of error filter block 50 and the adaptation coefficient block 60 can be interchanged with similar results. In the embodiment in which the adaptation coefficient 60 multiplies the difference between the modified image datum and the output level, shown in FIG. 1c, the delay block 65 stores the output of the error filter block.

When the next pixel, I.sub.1, is introduced into the system from the image input block 10, it produces a difference value D.sub.1 from summing node 25 and produces a value of (D.sub.1+E.sub.0) out of summing node 70.

The above steps repeat for each subsequent pixel in the digital image thereby producing a halftone image, the sequence O.sub.0, O.sub.1, . . . , O.sub.n. The modification of the threshold level and the adaptation coefficient allows control of the amount of edge enhancement and provides the opportunity to reduce artifacts.
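The complete per-pixel cycle traced above can be sketched in Python for a one-dimensional signal. This is an illustrative reading, not the patented implementation: the symmetric output states `hi`/`lo` in the difference domain, the single-tap error filter, and the edge-replicating boundary treatment are assumptions made to keep the sketch short, and the default parameter values are those of the sample embodiment described later.

```python
def adaptive_error_diffuse_1d(pixels, N=2, C0=-20.0, C1=0.05, C2=1.0,
                              hi=128.0, lo=-128.0):
    """1-D sketch of the adaptive loop of FIG. 1a (no histogram step)."""
    M = len(pixels)
    # Equation (2) with edge replication at the boundaries
    av = [sum(pixels[min(max(j, 0), M - 1)] for j in range(n - N, n + N + 1))
          / (2 * N + 1) for n in range(M)]
    out = []
    e_prev = 0.0                            # delayed, scaled error E_{n-1}
    for n in range(M):
        d = pixels[n] - av[n]               # D_n, node 25
        modified = d + e_prev               # node 70
        t = C0 * av[n]                      # threshold, block 30 (Equation (4))
        o = hi if modified > t else lo      # comparator, block 40
        coeff = C1 + C2 * abs(d / av[n])    # block 80 (Equation (6); av[n] != 0)
        e_prev = coeff * (modified - o)     # blocks 50/60/65, single-tap filter
        out.append(o)
    return out
```

On a flat region the difference D.sub.n is zero and the output saturates at one state; activity in the output appears only around transitions, which is the local-contrast behavior the adaptive scheme is designed to control.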

In the embodiment in which histogram modification is included after the summing node 25, D.sub.n is the input to the histogram modification block 75 and the output of the histogram modification block 75 is the input to the summing node 70. The above description follows if D.sub.n is replaced by the output of the histogram modification block 75. It should be apparent that histogram modification operates on the entire difference image. (Histogram modification is well known to those skilled in the art. For a discussion of histogram modification, see, for example, Digital Image Processing, by William K. Pratt, John Wiley and Sons, 1978, ISBN 0-471-01888-0, pp. 311-318. For a discussion of histogram equalization, a form of histogram modification, see, for example, Digital Image Processing, by R. C. Gonzalez and P. Wintz, Addison-Wesley Publishing Co., 1977, ISBN 0-201-02596-3, pp. 119-126.)
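For reference, one common form of histogram modification, histogram equalization via the cumulative distribution, can be sketched as below. The sketch assumes nonnegative integer inputs in the range [0, levels); the difference image of this invention can take negative values and would first have to be offset into that range. The patent does not mandate any particular modification method.

```python
def equalize(values, levels=256):
    """Histogram equalization of an integer-valued sequence.

    Maps each value through the normalized cumulative distribution so
    that the output histogram is approximately flat.
    """
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)     # smallest nonzero CDF value
    denom = max(len(values) - cdf_min, 1)
    return [round((cdf[v] - cdf_min) * (levels - 1) / denom) for v in values]
```

A two-level input is stretched to the full output range, illustrating how equalization spreads the difference data before the thresholding stage.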

In the embodiment in which histogram modification is included after the image input block 10, D.sub.n is the difference between the output of the histogram modification block 75 (FIG. 1b) and the filtered image. The above description follows if I.sub.n is replaced by the output of the histogram modification block.

The method described above improves on the error diffusion method by utilizing the difference between the digital image and the filtered digital image as input into the system instead of the digital image, by multiplying the output of the error filter by the adaptation coefficient, where the adaptation coefficient varies according to the properties of the digital image, and by using a threshold value that varies according to the properties of the digital image to determine the appropriate output level.

Sample Embodiment

In a specific embodiment, shown in FIG. 2, the output of the filtering block 20, Av.sub.n, is given by Equation (2). The threshold calculation 30 is a function of the output of the filtering block 20 and is given by t( . . . ,I.sub.k, . . . , I.sub.n, . . . )=C.sub.0Av.sub.n (7) which is the same function as in Equation (4) when the output of the filtering block 20, Av.sub.n, is given by Equation (2). The output of the coefficient calculation block 80 depends on the output of the filtering block 20, Av.sub.n, and the difference D.sub.n, and is given by C.sub.1+C.sub.2{abs(D.sub.n/Av.sub.n)} (8) Since D.sub.n=I.sub.n-Av.sub.n, when the output of the filtering block 20, Av.sub.n, is given by Equation (2), Equation (8) is the same as Equation (6).

Histogram equalization is included after the summing node 25. The processing of the input image pixels 10 occurs as described in the preceding section.

The value of N in Equation (2) (the extent of the filter) and C.sub.0, C.sub.1, and C.sub.2 (the first, second, and third parameters) can be selected to produce the image of highest perceptual quality at a specific output device. For a WBMP image on a specific monochrome mobile phone display, utilizing a Floyd-Steinberg error filter, the following parameters yield images of high perceptual quality: N=7, C.sub.0=-20, C.sub.1=0.05, and C.sub.2=1. In another embodiment, shown in FIG. 2a, the sequence order of error filter block 50 and the adaptation coefficient block 60 is interchanged. In the embodiment of FIG. 2a, in which the adaptation coefficient 60 multiplies the difference between the modified image datum and the output level, the delay block 65 stores the output of the error filter block.

The embodiments described herein can also be expanded to include composite images, such as color images, where each color component might be treated individually by the algorithm. In the case of color input images, the value of N in Equation (2) (the extent of the filter) and C.sub.0, C.sub.1, and C.sub.2 (the first, second, and third parameters) can be selected to control the color difference at a color transition while minimizing any effects on the brightness at that location. Other possible applications of these embodiments include the design of computer generated holograms and the encoding of the continuous tone input data.

Although the embodiments described herein are most easily understood for binary output devices, the embodiments described herein can also be expanded to include rendering an output image when the number of gray levels in the image exceeds that obtainable in the rendering device. It should be apparent how to expand the embodiments described herein to M-ary displays or M-ary rendering devices (see, for example, "Digital Halftoning" by Robert Ulichney, MIT Press, Cambridge, Mass., and London, England, 1990, p. 341).

It should be appreciated that the various embodiments described above are provided merely for purposes of example and do not constitute limitations of the present invention. Rather, various other embodiments are also within the scope of the claims, such as the following. The filter 20 can be selected to impart the desired functional behavior of the difference. The filter 20 can, for example, be a DC preserving filter. The threshold 40 and the adaptation coefficient 60 can also be selected to impart the desired characteristics of the image.

It should be apparent that Equations (4) and (5) are exemplary forms of functional expressions with parameters that can be adjusted. Functional expressions for the threshold and the adaptation coefficient, where the expressions include parameters that can be adjusted, will satisfy the object of this invention.

In general, the techniques described above may be implemented, for example, in hardware, software, firmware, or any combination thereof. The techniques described above may be implemented in one or more computer programs executing on a programmable computer including a processor, a storage medium readable by the processor (including, for example, volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to data entered using the input device to perform the functions described and to generate output information. The output information may be applied to one or more output devices.

Elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.

Each computer program within the scope of the claims below may be implemented in any programming language, such as assembly language, machine language, a high-level procedural programming language, or an object-oriented programming language. The programming language may be a compiled or interpreted programming language. Each computer program may be implemented in a computer program product tangibly embodied in a machine-readable storage device for execution by a computer processor. Method steps of the invention may be performed by a computer processor executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output.

The generation of the halftone image can occur at a location remote from the rendering printer or display. The operations performed in software utilize instructions ("code") that are stored in computer-readable media and store results and intermediate steps in computer-readable media.

Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CDROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. Electrical, electromagnetic or optical signals that carry digital data streams representing various types of information are exemplary forms of carrier waves transporting the information.

Other embodiments of the invention, including combinations, additions, variations and other modifications of the disclosed embodiments will be obvious to those skilled in the art and are within the scope of the following claims.

* * * * *

