United States Patent 8,050,511
Daly, et al. November 1, 2011

High dynamic range images from low dynamic range images

Abstract

A method for displaying an image includes receiving an image having a first luminance dynamic range and modifying the image to a second luminance dynamic range free from being based upon other images, where the second dynamic range is greater than the first dynamic range. The modified image is displayed on a display.


Inventors: Daly; Scott J. (Kalama, WA), Meylan; Laurence (Lausanne, CH)
Assignee: Sharp Laboratories of America, Inc. (Camas, WA)
Family ID: 36386329
Appl. No.: 11/233,551
Filed: September 22, 2005

Prior Publication Data

Document Identifier Publication Date
US 20060104508 A1 May 18, 2006

Related U.S. Patent Documents

Application Number Filing Date Patent Number Issue Date
60628762 Nov 16, 2004
60628794 Nov 16, 2004

Current U.S. Class: 382/274; 382/275; 382/167; 382/254
Current CPC Class: H04N 1/407 (20130101); G06T 5/009 (20130101); G06T 2207/20208 (20130101)
Current International Class: G06K 9/40 (20060101)
Field of Search: ;382/167,254,270,274,275

References Cited

U.S. Patent Documents
3329474 July 1967 Harris et al.
3375052 March 1968 Kosanke et al.
3428743 February 1969 Hanlon
3439348 April 1969 Harris et al.
3499700 March 1970 Harris et al.
3503670 March 1970 Kosanke et al.
3554632 January 1971 Chitayat
3947227 March 1976 Granger et al.
4012116 March 1977 Yerick
4110794 August 1978 Lester et al.
4170771 October 1979 Bly
4187519 February 1980 Vitols et al.
4384336 May 1983 Frankle et al.
4385806 May 1983 Fergason
4410238 October 1983 Hanson
4441791 April 1984 Hornbeck
4516837 May 1985 Soref et al.
4540243 September 1985 Fergason
4562433 December 1985 Biferno
4574364 March 1986 Tabata et al.
4611889 September 1986 Buzak
4648691 March 1987 Oguchi et al.
4649425 March 1987 Pund
4682270 July 1987 Whitehead et al.
RE32521 October 1987 Fergason
4715010 December 1987 Inoue et al.
4719507 January 1988 Bos
4755038 July 1988 Baker
4758818 July 1988 Vatne
4766430 August 1988 Gillette et al.
4834500 May 1989 Hilsum et al.
4845761 July 1989 Cate et al.
4862270 August 1989 Nishio
4862496 August 1989 Kelly et al.
4885783 December 1989 Whitehead et al.
4888690 December 1989 Huber
4910413 March 1990 Tamune
4917452 April 1990 Liebowitz
4918534 April 1990 Lam et al.
4933754 June 1990 Reed et al.
4954789 September 1990 Sampsell
4958915 September 1990 Okada et al.
4969717 November 1990 Mallinson
4981838 January 1991 Whitehead
4991924 February 1991 Shankar et al.
5012274 April 1991 Dolgoff
5013140 May 1991 Healey et al.
5053888 October 1991 Nomura
5074647 December 1991 Fergason et al.
5075789 December 1991 Jones et al.
5083199 January 1992 Borner
5122791 June 1992 Gibbons et al.
5128782 July 1992 Wood
5138449 August 1992 Kerpchar
5144292 September 1992 Shiraishi et al.
5164829 November 1992 Wada
5168183 December 1992 Whitehead
5187603 February 1993 Bos
5202897 April 1993 Whitehead
5206633 April 1993 Zalph
5214758 May 1993 Ohba et al.
5222209 June 1993 Murata et al.
5224178 June 1993 Madden et al.
5247366 September 1993 Ginosar et al.
5256676 October 1993 Hider et al.
5293258 March 1994 Dattilo
5300942 April 1994 Dolgoff
5305146 April 1994 Nakagaki et al.
5311217 May 1994 Guerin et al.
5313225 May 1994 Miyadera
5313454 May 1994 Bustini et al.
5317400 May 1994 Gurley et al.
5337068 August 1994 Stewart et al.
5339382 August 1994 Whitehead
5357369 October 1994 Pilling et al.
5359345 October 1994 Hunter
5369266 November 1994 Nohda et al.
5369432 November 1994 Kennedy
5386253 January 1995 Fielding
5387929 February 1995 Collier
5394195 February 1995 Herman
5395755 March 1995 Thorpe et al.
5416496 May 1995 Wood
5418895 May 1995 Lee
5422680 June 1995 Lagoni et al.
5426312 June 1995 Whitehead
5436755 July 1995 Guerin
5450498 September 1995 Whitehead
5456255 October 1995 Abe et al.
5461397 October 1995 Zhang et al.
5471225 November 1995 Parks
5471228 November 1995 Ilcisin et al.
5477274 December 1995 Akiyoshi et al.
5481637 January 1996 Whitehead
5537128 July 1996 Keene et al.
5563989 October 1996 Billyard
5570210 October 1996 Yoshida et al.
5579134 November 1996 Lengyel
5580791 December 1996 Thorpe et al.
5592193 January 1997 Chen
5617112 April 1997 Yoshida et al.
5642015 June 1997 Whitehead et al.
5642128 June 1997 Inoue
5650880 July 1997 Shuter et al.
5652672 July 1997 Huignard et al.
5661839 August 1997 Whitehead
5682075 October 1997 Bolleman et al.
5684354 November 1997 Gleckman
5689283 November 1997 Shirochi
5715347 February 1998 Whitehead
5717421 February 1998 Katakura et al.
5717422 February 1998 Fergason
5729242 March 1998 Margerum et al.
5748164 May 1998 Handschy et al.
5751264 May 1998 Cavallerano et al.
5754159 May 1998 Wood et al.
5767828 June 1998 McKnight
5767837 June 1998 Hara
5768442 June 1998 Ahn
5774599 June 1998 Muka et al.
5784181 July 1998 Loiseaux et al.
5796382 August 1998 Beeteson
5808697 September 1998 Fujimura et al.
5809169 September 1998 Rezzouk et al.
5828793 October 1998 Mann
5854662 December 1998 Yuyama et al.
5886681 March 1999 Walsh et al.
5889567 March 1999 Swanson et al.
5892325 April 1999 Gleckman
5901266 May 1999 Whitehead
5912651 June 1999 Bitzakidis et al.
5939830 August 1999 Praiswater
5940057 August 1999 Lien et al.
5959777 September 1999 Whitehead
5963665 October 1999 Kim et al.
5969704 October 1999 Green et al.
5978142 November 1999 Blackham et al.
5982926 November 1999 Kuo et al.
5986628 November 1999 Tuenge et al.
5991456 November 1999 Rahman et al.
5995070 November 1999 Kitada
5999307 December 1999 Whitehead et al.
6008929 December 1999 Akimoto et al.
6024462 February 2000 Whitehead
6025583 February 2000 Whitehead
6038576 March 2000 Ulichney et al.
6043591 March 2000 Gleckman
6050704 April 2000 Park
6061091 May 2000 Van de Poel et al.
6064784 May 2000 Whitehead et al.
6067645 May 2000 Yamamoto et al.
6079844 June 2000 Whitehead et al.
6111559 August 2000 Motomura et al.
6111622 August 2000 Abileah
6118820 September 2000 Reitmeier et al.
6120588 September 2000 Jacobsen
6120839 September 2000 Comiskey et al.
6129444 October 2000 Tognoni
6160595 December 2000 Kishimoto
6172798 January 2001 Albert et al.
6211851 April 2001 Lien et al.
6215920 April 2001 Whitehead
6232948 May 2001 Tsuchi
6243068 June 2001 Evanicky et al.
6267850 July 2001 Bailey et al.
6268843 July 2001 Arakawa
6276801 August 2001 Fielding
6292168 September 2001 Venable et al.
6300931 October 2001 Someya et al.
6300932 October 2001 Albert
6304365 October 2001 Whitehead
6323455 November 2001 Bailey et al.
6323989 November 2001 Jacobsen et al.
6327072 December 2001 Comiskey et al.
RE37594 March 2002 Whitehead
6359662 March 2002 Walker
6377383 April 2002 Whitehead et al.
6384979 May 2002 Whitehead et al.
6400436 June 2002 Komatsu
6414664 July 2002 Conover et al.
6418253 July 2002 Whitehead
6424369 July 2002 Adair et al.
6428189 August 2002 Hochstein
6435654 August 2002 Wang et al.
6437921 August 2002 Whitehead
6439731 August 2002 Johnson et al.
6448944 September 2002 Ronzani et al.
6448951 September 2002 Sakaguchi et al.
6448955 September 2002 Evanicky et al.
6452734 September 2002 Whitehead et al.
6483643 November 2002 Zuchowski
6507327 January 2003 Atherton et al.
6507372 January 2003 Kim
6545677 April 2003 Brown
6559827 May 2003 Mangerson
6573928 June 2003 Jones et al.
6574025 June 2003 Whitehead et al.
6590561 July 2003 Kabel et al.
6597339 July 2003 Ogawa
6608614 August 2003 Johnson
6624828 September 2003 Dresevic et al.
6657607 December 2003 Evanickey et al.
6680834 January 2004 Williams
6690383 February 2004 Braudaway et al.
6697110 February 2004 Jaspers et al.
6700559 March 2004 Tanaka et al.
6700628 March 2004 Kim
6707453 March 2004 Rossin et al.
6753876 June 2004 Brooksby et al.
6757442 June 2004 Avinash
6788280 September 2004 Ham
6791520 September 2004 Choi
6803901 October 2004 Numao
6816141 November 2004 Fergason
6816142 November 2004 Oda et al.
6816262 November 2004 Slocum et al.
6828816 December 2004 Ham
6834125 December 2004 Woodell et al.
6836570 December 2004 Young et al.
6846098 January 2005 Bourdelais et al.
6856449 February 2005 Winkler et al.
6862012 March 2005 Funakoshi et al.
6864916 March 2005 Nayar et al.
6885369 April 2005 Tanahashi et al.
6891672 May 2005 Whitehead et al.
6900796 May 2005 Yasunishi et al.
6932477 August 2005 Stanton
6954193 October 2005 Andrade et al.
6963665 November 2005 Imaizumi et al.
6975369 December 2005 Burkholder
7002546 February 2006 Stuppi et al.
7042522 May 2006 Kim
7110046 September 2006 Lin et al.
7113163 September 2006 Nitta et al.
7113164 September 2006 Kurihara
7123222 October 2006 Borel et al.
7127123 October 2006 Wredenhagen et al.
7161577 January 2007 Hirakata et al.
7512269 March 2009 Golan et al.
7783127 August 2010 Wilensky
2001/0005192 June 2001 Walton et al.
2001/0013854 August 2001 Ogoro
2001/0024199 September 2001 Hughes et al.
2001/0035853 November 2001 Hoelen et al.
2001/0038736 November 2001 Whitehead
2001/0048407 December 2001 Yasunishi et al.
2001/0052897 December 2001 Nakano et al.
2002/0003520 January 2002 Aoki
2002/0003522 January 2002 Baba et al.
2002/0008694 January 2002 Miyachi et al.
2002/0033783 March 2002 Koyama
2002/0036650 March 2002 Kasahara et al.
2002/0044116 April 2002 Tagawa et al.
2002/0057238 May 2002 Nitta et al.
2002/0057253 May 2002 Lim et al.
2002/0057845 May 2002 Fossum et al.
2002/0063963 May 2002 Whitehead et al.
2002/0067325 June 2002 Choi
2002/0067332 June 2002 Hirakata et al.
2002/0070914 June 2002 Bruning et al.
2002/0081022 June 2002 Bhaskar
2002/0093521 July 2002 Daly et al.
2002/0105709 August 2002 Whitehead et al.
2002/0135553 September 2002 Nagai et al.
2002/0149574 October 2002 Johnson et al.
2002/0149575 October 2002 Moon
2002/0154088 October 2002 Nishimura
2002/0159002 October 2002 Chang
2002/0159692 October 2002 Whitehead
2002/0162256 November 2002 Wardle et al.
2002/0171617 November 2002 Fuller
2002/0175907 November 2002 Sekiya et al.
2002/0180733 December 2002 Colmenarez et al.
2002/0190940 December 2002 Itoh et al.
2003/0012448 January 2003 Kimmel et al.
2003/0026494 February 2003 Woodell et al.
2003/0043394 March 2003 Kuwata et al.
2003/0048393 March 2003 Sayag
2003/0053689 March 2003 Watanabe et al.
2003/0072496 April 2003 Woodell et al.
2003/0090455 May 2003 Daly
2003/0107538 June 2003 Asao et al.
2003/0108245 June 2003 Gallagher et al.
2003/0112391 June 2003 Jang et al.
2003/0117654 June 2003 Wredenhagen et al.
2003/0128337 July 2003 Jaynes et al.
2003/0132905 July 2003 Lee et al.
2003/0133035 July 2003 Hatano
2003/0142118 July 2003 Funamoto et al.
2003/0169247 September 2003 Kawabe et al.
2003/0179221 September 2003 Nitta et al.
2003/0197709 October 2003 Shimazaki et al.
2003/0202589 October 2003 Reitmeier et al.
2003/0235342 December 2003 Gindele
2004/0001184 January 2004 Gibbons et al.
2004/0012551 January 2004 Ishii
2004/0041782 March 2004 Tachibana
2004/0051724 March 2004 Elliott et al.
2004/0057017 March 2004 Childers et al.
2004/0239587 December 2004 Murata et al.
2004/0263450 December 2004 Lee et al.
2005/0047654 March 2005 Newman et al.
2005/0073495 April 2005 Harbers et al.
2005/0088403 April 2005 Yamazaki
2005/0089239 April 2005 Brajovic
2005/0117799 June 2005 Fuh et al.
2005/0157298 July 2005 Evanicky et al.
2005/0190164 September 2005 Velthoven et al.
2005/0200295 September 2005 Lim et al.
2005/0200921 September 2005 Yuan et al.
2005/0225561 October 2005 Higgins et al.
2005/0225574 October 2005 Brown et al.
2005/0254722 November 2005 Fattal et al.
2005/0259064 November 2005 Sugino et al.
2006/0013449 January 2006 Marschner et al.
2006/0071936 April 2006 Leyvi et al.
2006/0104508 May 2006 Daly et al.
2006/0120598 June 2006 Takahashi et al.
2006/0208998 September 2006 Okishiro et al.
2007/0052636 March 2007 Kalt et al.
2008/0025634 January 2008 Border et al.
2008/0088560 April 2008 Bae et al.
2010/0104176 April 2010 Hayase
Foreign Patent Documents
0 732 669 Sep 1996 EP
0 829 747 Mar 1998 EP
0 606 162 Nov 1998 EP
0 912 047 Apr 1999 EP
0 963 112 Dec 1999 EP
1 168 243 Jan 2002 EP
1 202 244 May 2002 EP
1 206 130 May 2002 EP
1 313 066 May 2003 EP
1 316 919 Jun 2003 EP
1 453 002 Sep 2004 EP
1 453 030 Sep 2004 EP
2 611 389 Feb 1987 FR
2 388 737 Nov 2003 GB
64-10299 Jan 1989 JP
1-98383 Apr 1989 JP
3-71111 Mar 1991 JP
3-198026 Aug 1991 JP
5-66501 Mar 1993 JP
5-80716 Apr 1993 JP
5-273523 Oct 1993 JP
5-289044 Nov 1993 JP
6-247623 Sep 1994 JP
6-313018 Nov 1994 JP
7-121120 May 1995 JP
9-244548 Sep 1997 JP
10-508120 Aug 1998 JP
11-52412 Feb 1999 JP
2002-099250 Apr 2000 JP
2000-206488 Jul 2000 JP
2000-275995 Oct 2000 JP
2000-321571 Nov 2000 JP
2001-142409 May 2001 JP
2002-91385 Mar 2002 JP
2003-204450 Jul 2003 JP
2003-230010 Aug 2003 JP
3523170 Feb 2004 JP
2004-294540 Oct 2004 JP
10-2004-0084777 Oct 2004 KR
406206 Sep 2000 TW
WO 91/15843 Oct 1991 WO
WO 93/20660 Oct 1993 WO
WO 96/33483 Oct 1996 WO
WO 98/08134 Feb 1998 WO
WO 00/75720 Dec 2000 WO
WO 01/69584 Sep 2001 WO
WO 02/03687 Jan 2002 WO
WO 02/079862 Oct 2002 WO
WO 03/077013 Sep 2003 WO
WO 2004/013835 Feb 2004 WO
WO 2005101309 Oct 2005 WO

Other References

Paul E. Debevec and Jitendra Malik. Recovering High Dynamic Range Radiance Maps from Photographs, Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 369-378 (Aug. 1997, Los Angeles, California). Addison Wesley. Edited by Turner Whitted. ISBN 0-89791-896-7. cited by examiner .
DiCarlo, J. M. and Wandell, B. (2000), Rendering high dynamic range images, in Proc. IS&T/SPIE Electronic Imaging 2000. Image Sensors, vol. 3965, San Jose, CA, pp. 392-401. cited by examiner .
Kuang, J., Yamaguchi, H., Johnson, G. M. and Fairchild, M. D. (2004), Testing HDR image rendering algorithms (Abstract), in Proc. IS&T/SID Twelfth Color Imaging Conference: Color Science, Systems, and Application, Scottsdale, AZ, pp. 315-320. cited by examiner.
Durand, F. and Dorsey, J. (2002), Fast bilateral filtering for the display of high dynamic-range images, in Proc. ACM SIGGRAPH 2002, Annual Conference on Computer Graphics, San Antonio, TX, pp. 257-266. cited by examiner.
Kang, S. B., Uyttendaele, M., Winder, S. and Szeliski, R. (2003), High dynamic range video, ACM Transactions on Graphics 22(3), 319-325. cited by examiner.
Meylan, Laurence; Daly, Scott; Susstrunk, Sabine, "The Reproduction of Specular Highlights on High Dynamic Range Displays," 2006, IS&T/SID 14th Color Imaging Conference (CIC), pp. 1-6. cited by examiner.
Meylan, Laurence, "Tone mapping for high dynamic range images," 2006, http://library.epfl.ch/theses/?nr=3588, Institut ISC Institut de systemes de communication, pp. 87-124. cited by examiner.
Farid, Hany and Adelson, Edward, "Separating Reflections and Lighting Using Independent Components Analysis", In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1:262-267, 1999. cited by examiner .
G. Ward, High Dynamic Range Imaging, Proc. IS&T/SID 9th Color Imaging Conference, 9-16, 2001. cited by examiner.
Nayar, Shree et al., "Separation of Reflection Components Using Color and Polarization", International Journal of Computer Vision 21(3), 163-186, 1997. cited by examiner .
Larson, Gregory W., "Overcoming Gamut and Dynamic Range Limitations in Digital Images", Color Imaging Conference, Scottsdale, Arizona, 1998. cited by examiner .
Youngshin Kwak and Lindsay W. Macdonald, "Accurate Prediction of Colours on Liquid Crystal Displays," Colour & Imaging Institute, University of Derby, Derby, United Kingdom, IS&T/SID Ninth Color Imaging Conference, pp. 355-359, Date Unknown. cited by other .
A.A.S. Sluyterman and E.P. Boonekamp, "18.2: Architectural Choices in a Scanning Backlight for Large LCD TVs," Philips Lighting, Bldg. HBX-p, PO Box 80020, 5600 JM Eindhoven, The Netherlands, SID 05 Digest, pp. 996-999. cited by other.
Steven L. Wright, et al., "Measurement and Digital Compensation of Crosstalk and Photoleakage in High-Resolution TFT-LCDs," IBM T.J. Watson Research Center, PO Box 218 MS 10-212, Yorktown Heights, NY 10598, pp. 1-12, date unknown. cited by other.
Fumiaki Yamada and Yoichi Taira, "An LED backlight for color LCD," Proc. SID, International Display Workshop (IDW'00), Nov. 2000, pp. 363-366. cited by other.
Fumiaki Yamada, Hajime Nakamura, Yoshitami Sakaguchi and Yoichi Taira, "52.2: Invited Paper: Color Sequential LCD Based on OCB with an LED Backlight"; SID'00 Digest, 2000, pp. 1180-1183. cited by other .
N. Cheung et al., "Configurable entropy coding scheme for H.26L," ITU Telecommunications Standardization Sector Study Group 16, Elbsee, Germany, Jan. 2001, 11 pages. cited by other.
T. Funamoto, T. Kobayashi, T. Murao, "High-Picture-Quality Technique for LCD televisions: LCD-Al," Proc. SID, International display Workshop (IDW'00), Nov. 2000, pp. 1157-1158. cited by other .
Paul E. Debevec and Jitendra Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 369-378 (Aug. 1997, Los Angeles, California). Addison Wesley, Edited by Turner Whitted. ISBN 0-89791-896-7. cited by other .
DiCarlo, J.M. and Wandell, B. (2000), "Rendering high dynamic range images," in Proc. IS&T/SPIE Electronic Imaging 2000. Image Sensors, vol. 3965, San Jose, CA, pp. 392-401. cited by other .
Kuang, J., Yamaguchi, H., Johnson, G.M. and Fairchild, M.D. (2004), "Testing HDR image rendering algorithms (Abstract)," in Proc. IS&T/SID Twelfth Color Imaging Conference: Color Science, Systems and Application, Scottsdale, AZ, pp. 315-320. cited by other.
Durand, F. and Dorsey, J. (2002), "Fast bilateral filtering for the display of high dynamic-range images," in Proc. ACM SIGGRAPH 2002, Annual Conference on Computer Graphics, San Antonio, TX, pp. 257-266. cited by other.
Kang, S.B., Uyttendaele, M., Winder, S. and Szeliski, R. (2003), "High Dynamic Range Video," ACM Transactions on Graphics 22(3), 319-325. cited by other.
Brian A. Wandell and Louis D. Silverstein, "The Science of Color," 2003, Elsevier Ltd, Ch. 8 Digital Color Reproduction, pp. 281-316. cited by other.

Primary Examiner: Repko; Jason M
Assistant Examiner: Thirugnanam; Gandhi
Attorney, Agent or Firm: Chernoff Vilhauer McClung & Stenzel LLP

Parent Case Text



CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Ser. No. 60/628,762 filed Nov. 16, 2004 entitled Using Spatial Assessment to Increase the Dynamic Range of Imagery and claims the benefit of U.S. Ser. No. 60/628,794 filed Nov. 16, 2004 entitled Generating High Dynamic Range Image Data From Low Dynamic Range Image Data by the use of Spatial Operators.
Claims



We claim:

1. A method for displaying a color image comprising: (a) receiving an image having a first dynamic range; (b) modifying said image with a computing device to a second dynamic range, wherein said second dynamic range is greater than said first dynamic range; wherein said modification includes the steps of: (i) converting a received said image which is in a substantially gamma domain to a substantially linear luminance domain; (ii) processing at least two color channels of said image, in said substantially linear luminance domain to assign a first luminance value as a maximum diffuse luminance of said image; (iii) said modifying is based upon a scaling operation that scales luminance values using a pair of linear functions each linear in a luminance domain of said image and joined together at a second luminance value of said substantially linear luminance domain in the respectively processed said at least two color channels, where said second luminance value is less than said first luminance value by an amount calculated as a function of said first luminance value; and (c) displaying said modified image on a display.

2. The method of claim 1 wherein the lower range of said first dynamic range is mapped to the lower range of said second dynamic range with a first function, wherein the higher range of said first dynamic range is mapped to the higher range of said second dynamic range with a second function, wherein said first function has a denser mapping than said second function.

3. The method of claim 2 wherein said first dynamic range is a first luminance dynamic range, and said second dynamic range is a second luminance dynamic range.

4. The method of claim 1 wherein said at least two color channels are compared to one another.

5. The method of claim 4 wherein said comparing occurs proximate a region of clipped highlights.

6. The method of claim 1 wherein said modifying is based upon said function without discontinuities.

7. The method of claim 1 wherein said first dynamic range is a first luminance dynamic range, and said second dynamic range is a second luminance dynamic range.

8. The method of claim 6 wherein said first dynamic range is a first luminance dynamic range, and said second dynamic range is a second luminance dynamic range.

9. A method for displaying an image having specular highlights, said method comprising: (a) receiving an image having a first dynamic range; (b) modifying said image with a processing device to a second dynamic range using a transformation that allocates a specular portion of said second dynamic range to display said specular highlights and a diffuse portion of said second dynamic range to display diffuse tones of said image, wherein said second dynamic range is greater than said first dynamic range, and wherein said specular portion of said second dynamic range has a size calculated as a function of a maximum diffuse luminance value received from said image; (c) displaying said modified image on a display.

10. The method of claim 9 wherein said transformation uses a plurality of piecewise functions where the slope of at least one said piecewise function is adaptively determined.
Description



BACKGROUND OF THE INVENTION

The present application relates to increasing the dynamic range of images.

Many scenes existing in the real world inherently have extremely high dynamic range. For example, white paper in full sunlight has a luminance level of 30,000 cd/m^2, while white paper under a full moon has a luminance level of 0.01 cd/m^2, giving a dynamic range of 3 million to one (about 6.5 log units). The human eye can see levels even dimmer than 0.01 cd/m^2, so the visible range is greater still. In most situations, the dynamic range of a single scene is not this great, but it is frequently in excess of 5 log units. The human eye can only see 2-3 log units at a given instant, but it is able to adjust the range via light adaptation, which can take less than a few seconds for smaller adjustments, such as going from reading a magazine in the sun to looking into the shadow under a car. More extreme range changes, such as entering a movie theatre from daylight, can take more than a minute.

Since traditional displays (both soft copy and hard copy) are not capable of displaying the full range of luminances of the real world, a luminance mapping transfer is used to map from the dynamic range of the real world to the lower dynamic range of the display. Generally this mapping is performed in the image capture stage; examples include the shoulder of the D-log-E curve for film, saturation for CCD sensors, and clipping in the A/D stages of such capture processes. These mapping functions are generally point processes, that is, 1D functions of luminance that are applied per pixel (in the digital versions).

Computer graphics can generate images in floating point that match the luminances of the real world (generally, radiance approaches). In addition, some digital cameras similarly capture images with 12 to 16 bits per color. These are usually represented in a 16-bit format (examples: Radiance XYZ, OpenEXR, scRGB). But these digital images cannot be traditionally displayed without conversion to the lower dynamic range of the display. Generally the mapping algorithms for conversion from a greater to a lower dynamic range for the display capabilities are referred to as Tone Mapping Operators (TMO).

Tone Mapping Operators can be point processes, as mentioned for film and digital capture, but they can include spatial processes as well. Regardless of the type of TMO, all the approaches have traditionally been designed to go from a high dynamic range (HDR) image to a lower dynamic range (LDR) display (this term encompasses standard dynamic range, SDR).

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 illustrates a comparison of dynamic images for standard and high dynamic range displays.

FIG. 2 illustrates a standard dynamic range luminance versus high dynamic range luminance.

FIG. 3 illustrates a gamma adjusted standard dynamic range luminance versus high dynamic range luminance.

FIG. 4 illustrates standard dynamic range code values versus high dynamic range code values.

FIG. 5 illustrates a mapping of dynamic images for standard and high dynamic range displays.

FIG. 6 illustrates luminance profiles of diffuse and glossy curved surfaces.

FIG. 7 illustrates a low dynamic range glossy surface luminance profile.

FIG. 8 illustrates a low dynamic range image of a glossy surface luminance profile using tone scale highlight compression.

FIG. 9 illustrates standard dynamic range code values versus high dynamic range code values with a modified mapping.

FIG. 10 illustrates low dynamic range image where diffuse maximum is clipped.

FIG. 11 illustrates low pass filtering to estimate specular highlight.

FIG. 12 illustrates a global technique for low dynamic range to high dynamic range mapping.

FIG. 13 illustrates another local technique for low dynamic range to high dynamic range mapping.

FIG. 14 illustrates a mapping of standard dynamic range code values to high dynamic range code values.

FIG. 15 illustrates a linearly scaled low dynamic range image (top left), specular highlight candidate 1 (top right), specular highlight candidate 2 (bottom left), and the image re-scaled with the piecewise linear technique (bottom right).

FIG. 16A illustrates a fixed range allocated to specular highlight region.

FIG. 16B illustrates a fixed slope allocated to the diffuse image.

FIG. 17 illustrates adaptive slope parameters.

FIG. 18 illustrates an adaptive slope technique.

FIG. 19 illustrates tone scaling.

FIG. 20 illustrates a linearly scaled image (left) and a piecewise linearly scaled image (right).

FIG. 21A illustrates mixed layer clipping of specular highlights.

FIG. 21B illustrates a technique for using color ratios if one of the colors is not clipped.

FIG. 22 illustrates range allocation using standard dynamic range white.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Newly developed displays have been made which have substantially higher dynamic range than the traditional state of the art displays. The general difference in dynamic ranges for the newly developed displays 110 and the traditional displays 120 is shown in FIG. 1 for a log luminance scale. Some current state of the art standard dynamic range displays may have a range of 500 cd/m^2 to 0.7 cd/m^2. The newly developed "high dynamic range" displays may have a range from 3000 cd/m^2 to 0.05 cd/m^2, or even lower. In existing display technologies the image data is displayed on the display with its existing dynamic range.

The present inventors came to the realization that the image being presented on the display could be subjectively improved if the dynamic range of the image data is effectively increased. Since most images are already represented in a LDR (low dynamic range) format, a technique is desirable to convert the image from LDR up to HDR (high dynamic range).

One technique for mapping a lower dynamic range image to a higher dynamic range image suitable for display on a higher dynamic range display 130 is shown in FIG. 2. The technique includes a linear stretch from the lower dynamic range, shown on the horizontal axis, to the higher dynamic range 140, shown on the vertical axis. The horizontal axis is shown as shorter than the vertical axis to convey the smaller range. On the left, the axes are in terms of actual luminances.

The technique illustrated in FIG. 2 tends to result in a somewhat "flat" contrast in the modified image. To improve the contrast, referring to FIG. 3, a nonlinear mapping 150 using a gamma function, or another suitable function, is used. The axes are shown in units of luminance.

The technique illustrated in FIG. 4 shows a linear stretch where the axes are in code values. Since the code values are generally nonlinear in luminance, this is equivalent to a nonlinear mapping, such as is shown in FIG. 3. In the illustrations shown in FIGS. 2, 3, and 4, the TMOs may be non-adaptive "point processing" approaches. They do not use spatial processes, nor do they change depending on the contents of the image. It is to be understood that the processes may be spatial processes and change depending on the content of the image, if desired.

For HDR displays that have high dynamic range at the pixel resolution, the linear stretch technique increases the amplitude gray level resolution (i.e., more actual bits, rather than just adding 0s or 1s to the LSBs, which typically occurs in the linear scaling approach). For other HDR displays, such as multiband versions, where two modulating layers with differing resolutions are used, the increase in actual bits is not necessary.

In many cases, the black point and the white point for different image sources are different. In addition, the black point and the white point for different displays are different. It is desirable to adjust the mapping for a lower dynamic range 160, 165 to a higher dynamic range 170 (e.g., luminance and/or digital values) to account for the different image sources and/or the different displays. Referring to FIG. 5, an illustration shows a grey level range for an exemplary high dynamic range 170, as well as the endpoints (light end and dark end) mappings that may be used, as expressed in luminances, for different types of standard 160, 165 (i.e. lower) dynamic ranges. Further, these SDR ranges may result from the storage format of the input digital image (e.g., 8 bits/color sRGB format), or from the image capture process (film has around 5-10 stops or a 1.7-3 log unit range, radiographs can be as high as 4 log units). However, for display purposes they are usually digitized and represented with less than 2 log units of range.

The preferred embodiment may use the properties of the physical specular highlights, as well as the typical ways in which the specular highlight is modified by the image capture (or representation format), to re-create the physical specular highlight properties on the display. FIG. 6 shows an exemplary luminance profile 180 of a diffuse surface. The physical surface may be curved or flat, as both may lead to this general shape. A flat surface will lead to this general shape if it is Lambertian (a type of diffuse) and the light source is not at infinity. A curved surface will likewise lead to this type of general profile even if the light source is at infinity. FIG. 6 also shows an exemplary physical luminance profile 190 if the surface is glossy. The narrow region with high luminance 200 is referred to as the specular highlight, and it primarily has the color of the light source, as opposed to the color of the diffuse object. In many cases there is some mixing of the object color and the light source color.

The amplitude of the specular highlight is very high in the physical world. It can be as high as 100 times the diffuse reflected luminance. This can occur even if the diffuse object is white. This large-amplitude specular highlight is not captured in the low dynamic range image. If it were, then since the LDR image range is usually less than 2.5 log units and the specular range can be as high as 2 log units, most of the image would be concentrated in the lower 0.5 log units and would appear nearly black. So in an LDR image, the specular highlight is reduced in amplitude in several ways.

One of the ways the specular highlight is reduced is by clipping. Values out of range are simply set to the maximum value 210, as shown in FIG. 7. This occurs in a good exposure, where care was taken not to set the image maximum to the diffuse maximum. That is, some shape of the specular highlight will be visible in the image (generally correct in position and shape), but the specular highlight won't be as bright relative to the object as it is in the physical world. The consequence is that the image looks less dazzling, and less realistic.

Referring to FIG. 8, another way the specular highlight can be reduced is via tonescale compression. An example is that resulting from the s-shaped tone response curve of film. Here the position, spatial shape, and even relative luminance profile of the specular highlight 220 is preserved, but the actual amplitude is reduced (as in the clipped case).

As illustrated, the preferred technique includes a linear scaling of the LDR image to the HDR image. The scaling may likewise include a decontouring technique to generate additional bit depth. Other techniques may likewise be used to generate additional bit depth. In addition non-linear scaling may be used.

The characterization of the specular highlight, as previously discussed, may be used at least in part to expand the tone scale of the image. Referring to FIG. 9, the image region 230 where the luminance falls below the diffuse maximum 240 is tone mapped with a lower slope than the image regions 250 that have captured the specular highlight. The specular highlight maximum value 260 found in the image, Specular_max, and the diffuse region maximum value 240, Diffuse_max, appear on the horizontal axis. This axis corresponds to the input image digital code values. It is to be understood that these values may likewise be approximate. In general, the range from zero to Diffuse_max (or another appropriate value) is given a greater amount of the code value range, or otherwise a greater range of luminance, than the range from Diffuse_max to Specular_max. As a general matter, the lower range of luminance values of the input image is mapped with a first function to the luminance values of the high dynamic range image, and the upper range of luminance values of the input image is mapped with a second function, where the first function results in a denser mapping than the second function. One way to characterize the denser mapping is that the first function maps the diffuse region with a lower tonescale slope than the second function applies to the specular region.

The two gray levels extracted from the LDR input image are mapped to luminances on the HDR display, as shown. The Specular_max is mapped to the HDR display's maximum value 270, while the Diffuse_max is mapped to a value referred to as D_HDR 280, the diffuse max point as displayed on the HDR display. A certain amount of the dynamic range of the HDR display is thus allocated to the diffusely reflecting regions, and a certain amount is allocated to the specular highlights. The parameter D_HDR determines this allocation. Allocating more to the specular highlight makes the highlights more dazzling, but results in a darker overall image. The decision is affected by the actual range of the HDR display. For very bright and high-ranging HDR displays, more of the range can be allocated to the specular region without the image appearing dark.

In some images with a poor exposure, even the diffuse maximum value 290 is clipped, as shown in FIG. 10. In these cases there is a complete loss of any specular highlight information. That is, the position, the shape, and the luminance profile of the highlight are substantially missing. In those cases the system may selectively determine that there is no need to attempt to restore the highlight.

In order to most effectively use the tonescale of FIG. 9, the system determines the Specular_max and Diffuse_max values from the image. This may be done by first finding the maximum of the image and assuming it is the specular maximum. This assumption generally fails if there is a large amount of noise or if the image contains no specular highlights.

The system also determines the diffuse maximum from the image. The technique involves removing or otherwise attenuating the specular highlight. In general the specular highlight has anticipated characteristics: it tends to be a small isolated region, relatively narrow with high amplitude in the physical scene, but narrow with small amplitude in the LDR image. The system may use a low-pass filter 300 to reduce the specular highlight 310, as shown in FIG. 11. For example, the low pass filter may be large enough that the result is too blurry 320 to be used for actual viewing. That is, the LPF step is used only to identify the maximum of the diffuse image.

For the case where even the diffuse maximum has been clipped (see FIG. 10), the image maximum and the LPF image maximum will be substantially the same. This is also true in cases where there is no significant specular highlight. The maximum found is then assumed to be the diffuse maximum. In both cases the tone mapper does not place any image regions in the region with increased slope for specular highlights. It can then use the tone mapper from FIG. 9 (with the found image max set as the diffuse max) or the general linear stretch from FIG. 4 (with the found image max setting the image max).

FIG. 12 illustrates another technique for performing this image modification. The input SDR image 400 is used to estimate the diffuse max 402 and specular max 404 using low pass filtering 406 and maximum operators 408. These parameters are input to a process 410 for determining a tone mapping operator (TMO). The TMO from process 410 is applied to the input image 400 to provide an image for the HDR display 414.

In many cases existing high dynamic range data formats are in the linear space, and high dynamic range displays are designed to operate in the linear space. In contrast, many low dynamic range data formats are represented in the gamma domain (e.g., sRGB). While either the linear space (substantially linear) or the gamma space (substantially non-linear) may be used for image processing, the linear space is preferable because the physics of specular highlights is better understood there. If the input image is not in the preferred domain, it may be converted to that domain.

While the system functions well, the techniques sometimes fail to detect some of the specular highlights. After further examination it was determined that some specular highlights are difficult to detect because the specular highlights are not always saturated (e.g., clipped); they can occur in 1, 2, and/or 3 of the color channels (in the case of three color channels); their size is usually small in a scene and varies with how the picture was obtained; and they are often of a regular shape, but not always circular, primarily due to the projection of the scene onto the image plane.

It has been determined that since the specular highlights are not always saturated, a fixed threshold may have a tendency to miss specular highlights. Based upon this observation, the system preferably uses an adaptive threshold. The preferred technique computes a low-pass filtered image and assumes that it corresponds to the diffuse image.

Initially the specular image specI is defined as follows: T1 = max(lowpass(I)); specI = I > T1.

The size of the low-pass filter is preferably based on the assumption that specular highlights are small and bright. An example is 11 taps for an image of vertical size 1024 (e.g., XGA), scaled accordingly for different image sizes. Additional morphological operations may then be used to include pixels that were rejected by the threshold but are likely to be part of the specular highlights. An exemplary technique is shown in FIG. 13.

Specular highlights tend to be very bright; they can be as bright as 2 log units or more over the diffuse maximum value. Even allocating 1 log unit to the specular highlights means that about 1/10th of the dynamic range would be allocated to the diffuse image while 9/10th would be allocated to the specular highlights. That is not generally feasible, with the exception of very bright HDR displays. Accordingly, achieving the actual maximum possible dynamic range of 2 log units for specular highlights may not be desirable in many cases.

Based upon an understanding that the range allocated to the specular highlights will be less than that of the physical world, a study was conducted to estimate what range should be allocated to specular highlights, using images that were segmented by hand so that ideal specular highlight detection could be assumed. With these isolated specular highlights, two different methods for scaling were investigated, namely ratio scaling and piecewise linear scaling.

With respect to ratio scaling, the motivation for this technique is that in some cases not all three color channels have clipped specular highlights (i.e., the image's specular highlights are a mixture of FIGS. 7 and 8 across the color bands). Since the specular highlight generally reflects the color of the light source, this situation occurs with non-white light sources. The principle is to look for the ratio or other relationship between color channels in the region just outside where clipping has occurred, and deduce the value of the clipped channels by generally maintaining the RGB ratios relative to the unclipped channel. However, in the case that all three color channels are saturated (clipped), this technique is not especially suitable. Moreover, it has been determined that the ratio can differ drastically from one pixel to the next along the specular highlight contour. Further, even if one of the specular highlights is not clipped, there is a good chance it has been compressed via an s-shaped tonescale (see FIG. 8).

With respect to piecewise linear scaling, this technique scales the image with a two-slope function whose slopes are determined by the maximum diffuse white. The function parameters are the maximum diffuse white, the specular highlight max (usually the image max), and the range allocated to the specular highlights. It is possible to use a fixed slope and/or a fixed range. However, to reduce visible artifacts between the diffuse and specular parts of an image, it is preferable to use an adaptive function that changes the allocated range from 3/4 to 1/3 depending on the maximum diffuse white.

Referring to FIG. 13, the processing of the image 500 includes the following steps (a condensed code sketch follows after the steps):

1) Low pass filter with filter F1 502; a. determine maximum 506 of the low passed image 502; b. use maximum 506 to determine threshold T1 508; c. use the threshold T1 508 to modify the image 500 with a threshold operation 504;

The process preferably uses a low-pass filter that is about 1/100th of the image dimension (11 pixels for a 1024 image), based upon the luminance.

2) Low pass filter with lowpass filter F2 510, (F2>F1, spatially) a. determine maximum 512 of the low pass image 510; b. use maximum 512 to determine threshold T2 514; c. use the threshold T2 514 to modify the image 500 with a dilation operation 516;

The process preferably uses a low-pass filter that is about 1/50th of the image dimension (21 pixels for a 1024 image), based upon the luminance.

3) The threshold operation 504 of the image 500 with T1 508 determines the first specular highlight candidates 520, which may be in the form of a binary map, if desired.

4) Refine the binary map 520 with an erosion morphological operator 522 to provide SH candidate 2 524; a. the erosion 522 removes single pixels (parameter set as such) and also reduces false SH candidates 520 due to noise that was clipped.

5) Refine the binary map 524 with the dilation morphological operator 516 to provide SH candidate 3 530; a. the dilation 516 is constrained by T2 514; b. if a pixel > T2 and 4 of its neighbors are specular highlight candidates, then the pixel becomes an SH candidate; c. threshold T2 514 serves as a constraint to limit the expansion.

6) Mask 540 the input image 500 with the specular highlight map 530; a. i.e., if a pixel is not SH, ignore it by masking out the pixel value, to provide a masked image 542.

7) Find the maximum diffuse white (MDW) 544 by taking the minimum 546 of the masked image 542; a. this provides the minimum of the image in the specular highlight region; b. due to the constrained morphological operator, the maximum of the diffuse image is likely to be larger than the minimum of the specular image; this reduces the chance that bright areas of the diffuse image are boosted up as well.

8) Generate the tonescale (tone mapping operator, TMO) 550 using the MDW 544 and the range desired for the specular highlights; a. an adaptive slope technique is preferred.

9) Process the input image 500 based upon the TMO 550 by applying the tone mapping 552; a. one approach is to run the entire image through the single TMO; b. another approach is to use different TMOs for each class of pixel (specular highlight and non-SH) using the binary map.

10) Output image 560 is sent to the HDR display.

For best effectiveness, this technique presumes that the specular highlights are small and bright. That means that a large bright light source, such as the sun, is not likely to be detected by this technique.

In FIG. 9, the tone-mapping operator has a sharp transition between the two regions. In practice it is better to have a smooth transition 600 between the two regions, as illustrated in FIG. 14.

In FIG. 15, results are shown comparing the technique of FIG. 13 to linear scaling. The results are best viewed on an HDR display. Nevertheless, one can observe the difference in the images and how the specular highlights are much brighter than their underlying diffuse regions.

Parameters may be set to determine what dynamic range should be allocated to the specular highlights. Because the specular highlights are often very bright (2 log units) and the detection stage determines the maximum diffuse white (MDW), making the result image dependent, the scaling function may include an additional slope-based parameter (fixed slope/fixed range). As a result the system may include an adaptive slope technique.

The present inventors originally considered optimization to consist of determining a range to be allocated to the specular highlights. However, it was determined that the scaling function could change significantly with the different maximum diffuse white (MDW) values resulting from the detection functionality. FIGS. 16A and 16B show two examples of scaling functions with variable MDW. FIG. 16A shows the scaling functions obtained by varying MDW while keeping the range allocated to the specular image (R) constant. FIG. 16B shows the scaling functions obtained by varying MDW while keeping the lower slope constant. It was determined that using an adaptive slope technique reduces the variability of the scaling functions.

With the adaptive slope technique, the allocated range depends on the MDW value, as illustrated in FIG. 17. The motivation is to have less variability across different MDW values:

1. the allocated range depends on the MDW value.

2. the binary map is used for the scaling; a. The specular image is computed with the steepest part of the function; b. The diffuse image is scaled with the first slope (even if the diffuse white is brighter than the maximum diffuse white).

3. With the adaptive slope method: slope = (R_max - R_min) / (Mdw_max - Mdw_min); R = slope*Mdw - (Mdw_min*slope - R_min).

The preferred embodiment values, as illustrated in FIG. 17, are R_max = 170, R_min = 64, Mdw_max = 230, and Mdw_min = 130.

FIG. 18 illustrates an exemplary adaptive slope set of curves.

Since the maximum of the diffuse part is larger than the minimum of the specular image, the specular candidate binary map should be taken into account during the tone scaling operation. The scaling takes spatial information into account, as illustrated in FIG. 19, where the line 510 shows the scaling function applied to the diffuse image and the line 512 shows the scaling function applied to the specular image. This is the two-TMO approach mentioned previously.

One technique to assess the quality of the processed images is to compare them with an image that was scaled using a simple linear method. FIG. 20 compares a linearly scaled image with an image scaled by the piecewise linear technique. Note how the specular highlights look brighter in the right image. Of course, these two images should be compared on a high dynamic range display for most effectiveness.

Some alternatives and modifications include:

1. Selection of D_HDR, the luminance of the image diffuse maximum as displayed on the HDR display.

2. The width of the transition region.

3. The size and shape of the low-pass filter F1, which can affect Diffuse_max.

4. The size and shape of the low-pass filter F2, which can affect Diffuse_max.

5. Use of a noise-removing low-pass 3x3 filter (already applied).

6. Whether nonlinearities, such as gamma correction, are used on the two tone-mapping regions for diffuse and specular highlights.

Referring to FIGS. 21A and 21B, a technique that uses the color ratio method to predict the clipped specular highlights is illustrated. The image profile of partially clipped specular highlights is shown in FIG. 21A and the technique for their reconstruction is shown in FIG. 21B. Note that this technique can be used as a preprocessor for the principal technique (FIG. 13), for both outcomes.

Another technique uses the HDR display so that its white point matches that of SDR displays (i.e., a given benchmark SDR display), and the rest of the brightness capability of the HDR display is allocated to the specular highlights. This approach works with "real" values instead of ratios for the scaling. Instead of using the adaptive slope method to compute the range allocated to the specular highlights, the system could use the white point of a standard LCD display to define the range R. Then, all values brighter than R are specular highlights and will only be visible when displayed on the HDR monitor. This is illustrated in FIG. 22.

Another technique does not use the nonlinear TMO. This idea is based on the fact that if an image is scaled linearly from 8 to 16 bits, contouring artifacts typically appear. In that case, a decontouring algorithm can be used to provide a good HDR image from one LDR image. However, due to the properties of some HDR monitors (such as the low resolution of the LED layer), no contour artifacts appear even after a linear scaling. Moreover, this technique does not generate more realistic specular highlights, but it does extend the dynamic range and provides a linear scaling free of contour artifacts. The technique may adapt the coring function to 16-bit images and compare linearly scaled images against linearly scaled images after decontouring.

The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.

* * * * *
