U.S. patent number 8,086,038 [Application Number 11/776,510] was granted by the patent office on 2011-12-27 for invisible junction features for patch recognition.
This patent grant is currently assigned to Ricoh Co., Ltd. Invention is credited to Jonathan J. Hull and Qifa Ke.
United States Patent 8,086,038
Ke, et al.
December 27, 2011
Invisible junction features for patch recognition
Abstract
The present invention uses invisible junctions, which are a set
of local features unique to every page of the electronic document,
to match the captured image to a part of an electronic document.
The present invention includes: an image capture device, a feature
extraction and recognition system and a database. When an electronic
document is printed, the feature extraction and recognition system
captures an image of the document page. The features in the
captured image are then extracted, indexed and stored in the
database. Given a query image, usually a small patch of some
document page captured by a low resolution image capture device,
the features in the query image are extracted and compared against
those stored in the database to identify the query image. The
present invention also includes methods for feature extraction,
feature indexing, feature retrieval and geometric estimation.
Inventors: Ke; Qifa (Cupertino, CA), Hull; Jonathan J. (San Carlos, CA)
Assignee: Ricoh Co., Ltd. (Tokyo, JP)
Family ID: 39877742
Appl. No.: 11/776,510
Filed: July 11, 2007
Prior Publication Data

Document Identifier    Publication Date
US 20090016604 A1      Jan 15, 2009
Current U.S. Class: 382/176; 382/181; 382/190
Current CPC Class: G06K 9/00463 (20130101); G06K 9/2054 (20130101); G06K 2209/01 (20130101)
Current International Class: G06K 9/34 (20060101); G06K 9/00 (20060101)
Field of Search: 382/190,181,176
References Cited
U.S. Patent Documents
Foreign Patent Documents

1245935           Mar 2000    CN
0706283           Apr 1996    EP
1229496           Aug 2002    EP
1555626           Jul 2005    EP
1662064           May 2006    EP
1783681           May 2007    EP
10-228468         Aug 1998    JP
2000-165645       Jun 2000    JP
2001230916        Aug 2001    JP
WO 99/05658       Feb 1999    WO
WO 2004/072897    Aug 2004    WO
WO 2005/043270    May 2005    WO
WO 2007/073347    Jun 2007    WO
2008129373        Oct 2008    WO
Other References
Cover, T.M. et al., "Nearest Neighbor Pattern Classification," IEEE
Transactions on Information Theory, Jan. 1967, pp. 21-27, vol.
IT-13, No. 1. cited by other .
U.S. Appl. No. 10/813,901, filed Mar. 30, 2004, Erol et al. cited
by other .
Aggarwal, M et al, "On Cosine-fourth and Vignetting Effects in Real
Lenses," ICCV Proceedings, IEEE, 2001, vol. 1, pp. 472-479,
[online] Retrieved from the Internet<URL:
http://www.metaverselab.org/classis/635/reading/aggarwal-iccv.pdf>.
cited by other .
Akenine-Moller, T. et al., "Real-Time Rendering," A.K. Peters,
Natick, MA, 2nd Edition, 2002, pp. 70-84. cited by other .
Archive of "Barcodepedia.com--the online barcode database,"
[online] [Archived by http://archive.org on Jul. 9, 2006; Retrieved
on Aug. 18, 2008] Retrieved from the
Internet<http://web.archive.org/web/20060709101455/http://en.barcodepedia.com/>.
cited by other .
Baba, M. et al., "Shadow Removal from a Real Image Based on Shadow
Density," Poster at SIGGRAPH2004, Updated Aug. 16, 2004, 4 pages,
[online] Retrieved from the
Internet<URL:http://www.cv.its.hiroshima-cu.ac.jp/baba/Shadow/poster04-02.pdf>.
cited by other .
Baird, H.S., "Document Image Defect Models and Their Uses," Proc.,
IAPR 2nd International Conference on Document Analysis and
Recognition, Tsukuba Science City, Japan, Oct. 20-22, 1993, 7
pages. cited by other .
Baird, H., "Document Image Defect Models," In Proc. Of IAPR
Workshop on Syntactic and Structural Pattern Recognition, Murray
Hill, NJ, Jun. 1990, Structured Document Image Analysis,
Springer-Verlag, pp. 546-556. cited by other .
Baird, H., "The State of the Art of Document Image Degradation
Modeling," In Proc. Of the 4.sup.th IAPR International Workshop on
Document Analysis Systems, Rio de Janeiro, Brazil, 2000, pp. 1-16,
[online] Retrieved from the
Internet<URL:http://www2.parc.xerox.com/istl/members/baird/das00.p-
as.gz>. cited by other .
Barney Smith, E.H. et al., "Text Degradations and OCR Training,"
International Conference on Document Analysis and Recognition 2005,
Seoul, Korea, Aug. 2005, 5 pages, [online] Retrieved from the
Internet<URL:http://coen.boisestate.edu/EBarneySmith/Papers/ICDAR05_submit.pdf>.
cited by other .
Bouget, J., "Camera Calibration Toolbox for Matlab," Online Source,
Updated Jul. 24, 2006, 6 pages, [online] Retrieved from the
Internet<URL:http://www.vision.caltech.edu/bougetj/calib_doc/index.html#ref>.
cited by other .
Boukraa, M. et al., "Tag-Based Vision: Assisting 3D Scene Analysis
with Radio-Frequency Tags," Jul. 8, 2002, Proceedings of the Fifth
International Conference on Information Fusion, Piscataway, N.J.,
IEEE, Jul. 8-11, 2002, pp. 412-418. cited by other .
Boyd, S., "EE263: Introduction to Linear Dynamical Systems," Online
Lecture Notes, Stanford University, Spring Quarter, 2006-2007,
Accessed on Sep. 11, 2006, 4 pages, [online] Retrieved from the
Internet<URL:http://www.stanford.edu/class/ee263/#lectures>.
cited by other .
Brassil, J. et al., "Hiding Information in Document Images," Proc.
Conf. Information Sciences and Systems (CISS-95), Mar. 1995, Johns
Hopkins University, Baltimore, MD, pp. 482-489. cited by other .
"Call for Papers: ICAT 2007," 17.sup.th International Conference on
Artificial Reality and Telexistence, 2007, [Online] [Retrieved on
Nov. 4, 2008] Retrieved from the
Internet<URL:http://www.idemployee.id.tue.nl/g.w.m.rauterberg/conferences/ICAT2007-CfP.pdf>.
cited by other .
Constantini, R. et al., "Virtual Sensor Design," Proceedings of the
SPIE, vol. 5301, 2004, pp. 408-419, Retrieved from the
Internet<URL:http://ivrgwww.epfl.ch/publications/cs04.pdf>.
cited by other .
Davis, M. et al., "Towards Context-Aware Face Recognition,"
Proceedings of the 13th Annual ACM International Conference on
Multimedia, Nov. 6-11, 2005, pp. 483-486, vol. 13. cited by other .
Doermann, D. et al., "Progress in Camera-Based Document Image
Analysis," Proceedings of the Seventh International Conference on
Document Analysis and Recognition, ICDAR 2003, 11 pages, [online]
Retrieved from the
Internet<URL:http://www.cse.salford.ac.uk/prima/ICDAR2003/Papers/0111_keynote_III_doermann_d.pdf>.
cited by other .
Erol, B. et al., "Linking Multimedia Presentations with Their
Symbolic Source Documents: Algorithm and Applications," Nov. 2-8,
2003, pp. 498-507, [Online] [Retrieved on Oct. 15, 2008] Retrieved
from the
Internet<URL:http://rii.ricoh.com/~hull/pubs/p225_erol.pdf>.
cited by other .
Esposito, F. et al., "Machine Learning Methods for Automatically
Processing Historical Documents: from Paper Acquisition to XML
Transformation," Proceedings of the First International Workshop on
Document Image Analysis for Libraries (DIAL '04), IEEE, 2004, pp.
1-8. cited by other .
European Partial Search Report, European Application No.
EP07015093.3, Dec. 17, 2007, 7 pages. cited by other .
European Search Report, European Application No. 08160125.4, Oct.
13, 2008, 5 pages. cited by other .
European Search Report, European Application No. 06796845.3, Oct.
30, 2008, 12 pages. cited by other .
European Search Report, European Application No. 06796844.6, Oct.
30, 2008, 12 pages. cited by other .
European Search Report, European Application No. 06796848.7, Oct.
31, 2008, 12 pages. cited by other .
European Search Report, European Application No. 06796846.1, Nov.
5, 2008, 11 pages. cited by other .
European Search Report, European Application No. 08159971.4, Nov.
14, 2008, 6 pages. cited by other .
European Search Report, European Application No. 08160115.5, Nov.
12, 2008, 6 pages. cited by other .
European Search Report, European Application No. 08160130.4, Nov.
12, 2008, 7 pages. cited by other .
European Search Report, European Application No. 08160112.2, Nov.
10, 2008, 7 pages. cited by other .
European Search Report, European Application No. 07252397, Oct. 15,
2007, 7 pages. cited by other .
Ho, T.K. et al., "Decision Combination in Multiple Classifier
Systems," IEEE Transactions on Pattern Analysis and Machine
Intelligence, Jan. 1994, pp. 66-75, vol. 16, No. 1. cited by other .
Ho, T.K. et al., "Evaluation of OCT Accuracy Using Synthetic Data,"
Proceedings of the 4.sup.th Annual Symposium on Document Analysis
and Information Retrieval, Apr. 24-26, 1995, pp. 413-422. [online]
Retrieved from the
Internet<URL:http://citeseer.ist.psu.edu/cache/papers/cs/2303-
/http:zSzzSzcm.bell-labs.comzSzcmzSzcszSzwhozSzhsbzSzeoasd.pdf/ho95evaluat-
ion.pdf>. cited by other .
Hull, J.J., "Document Image Matching and Retrieval with Multiple
Distortion-Invariant Descriptors," International Association for
Pattern Recognition Workshop on Document Analysis Systems, Jan. 1,
1995, pp. 375-396. cited by other .
Hull, J.J. et al., "Document Image Matching Techniques," Apr. 30,
1997, pp. 31-35, [Online] [Retrieved on May 2, 1997] Retrieved from
the
Internet<URL:http://rii.ricoh.com/hull/pubs/hull_sdiut97.pdf>.
cited by other .
Hull, J. J., "Document Image Similarity and Equivalence Detection,"
International Journal on Document Analysis and Recognition, 1998,
pp. 37-42, Springer-Verlag. cited by other .
Hull, J., "Document Image Skew Detection: Survey and Annotated
Bibliography," Document Analysis Systems II, World Scientific,
1998, pp. 40-64. cited by other .
Hull, J.J. et al., "Paper-Based Augmented Reality," 17.sup.th
International Conference on Artificial Reality and Telexistence,
Nov. 1, 2007, pp. 205-209. cited by other .
Kanungo, T. et al., "A Downhill Simplex Algorithm for Estimating
Morphological Degradation Model Parameters," University of Maryland
Technical Report, LAMP-RT-066, Feb. 2001, 15 pages, [online]
Retrieved from the
Internet<URL:http://lampsrv01.umiacs.umd.edu/pubs/TechReports/LAMP_066/LAMP_066.pdf>.
cited by other .
Kanungo, T. et al., "Global and Local Document Degradation Models,"
Document Analysis and Recognition, 1993, Proceedings of the Second
International Conference on Volume, Oct. 20-22, 1993, pp. 730-734.
cited by other .
Khoubyari, S. et al., "Font and Funct on Word Ident ficat on n
Document Recogn t on," Computer Vision and Image Understanding,
Jan. 1996, pp. 66-74, vol. 63, No. 1. cited by other .
Khoubyari, S. et al., "Keyword Location and Noisy Document Images,"
Second Annual Symposium on Document Analysis and Information
Retrieval, Las Vegas, NV, Apr. 26-28, 1993, pp. 217-231. cited by
other .
Kopec, G.E. et al., "Document Image Decoding Using Markov Source
Models," IEEE Transactions on Pattern Analysis and Machine
Intelligence, Jun. 1994, pp. 602-617, vol. 16, No. 6. cited by
other .
Li, Y. et al., "Validation of Image Defect Models for Optical
Character Recognition," IEEE Trans. Pattern Anal. Mach. Intell. 18,
2, Feb. 1996, pp. 99-108, [online] Retrieved from the
Internet<URL:http://www.cs.cmu.edu/afs/cs/usr/andrewt/papers/Validate/journal.ps.gz>.
cited by other .
Liang, J. et al., "Flattening Curved Documents in Images," In Proc.
of the IEEE Conf. on Computer Vision and Pattern Recognition
(CVPR), 2005, 8 pages, [online] Retrieved from the
Internet<URL:http://www.cfar.umd.edu/~daniel/daniel_papersfordownload/liang-j_cpvr2005.pdf>.
cited by other .
Lu, Y. et al., "Document Retrieval from Compressed Images," Pattern
Recognition, 2003, pp. 987-996, vol. 36. cited by other .
McDonald, G., "Third Voice: Invisible Web Graffiti," PC World, May
18, 1999, [online] [Retrieved on Nov. 14, 2006] Retrieved from the
Internet<URL:http://www.pcworld.com/news/article/0,aid,11016,00.asp>.
cited by other .
Microsoft Computer Dictionary (5th ed.), 2002, "Hyperlink"
Definition, pp. 260-261. cited by other .
"Mobile Search Engines," Sonera MediaLab, Nov. 15, 2002, pp. 1-12.
cited by other .
"Mobile Video Managed Service," Vidiator, 2008, [online] [Retrieved
on Aug. 29, 2008] Retrieved from the
Internet<URL:http://www.vidiator.com/services/managed_mobile_video.aspx>.
cited by other .
Mukherjea, S. et al., "AMORE: A World Wide Web Image Retrieval
Engine," C&C Research Laboratories, NEC USA Inc., Baltzer
Science Publishers BV, World Wide Web 2, 1999, pp. 115-132. cited
by other .
Pavlidis, T., "Effects of Distortions on the Recognition Rate of a
Structural OCR System," In Pro. Conf. On Comp. Vision and Pattern
Recog., IEEE, Washington, DC, 1983, pp. 303-309. cited by other
.
PCT International Search Report and Written Opinion,
PCT/JP2006/316810, Oct. 10, 2006, 9 pages. cited by other .
PCT International Search Report and Written Opinion,
PCT/JP2006/316811, Oct. 10, 2006, 9 pages. cited by other .
PCT International Search Report and Written Opinion,
PCT/JP2006/316812, Oct. 10, 2006, 9 pages. cited by other .
PCT International Search Report and Written Opinion,
PCT/JP2006/316814, Oct. 10, 2006, 11 pages. cited by other .
Sato, T. et al., "High Resolution Video Mosaicing for Documents and
Photos by Estimating Camera Motion," Proceedings of the SPIE 5299,
246, 2004, 8 pages, [online] Retrieved from the Internet<URL:
http://yokoya.naist.jp/paper/datas/711/spie2004.pdf>. cited by
other .
Schalkoff, R.J., "Syntactic Pattern Recognition (SYNTPR) Overview,"
Pattern Recognition: Statistical, Structural and Neural Approaches,
Jan. 1, 1992, pp. 127-150, vol. 3, Wiley. cited by other .
Sivic, J. et al., "Video Google: A Text Retrieval Approach to
Object Matching in Videos," Proceedings of the Ninth IEEE
International Conference on Computer Vision (ICCV 2003), 2-Volume
Set, 2003, IEEE, pp. 1-8. cited by other .
Stoyanov, D., "Camera Calibration Tools," Online Source, Updated
Aug. 24, 2006, Accessed Aug. 31, 2006, 12 pages, [online] Retrieved
from the
Internet<URL:http://ubimon.doc.ic.ac.uk/dvs/index.php?m=581>.
cited by other .
Veltkamp, R. et al., "Content-Based Image Retrieval Systems: A
Survey," Department of Computing Science, Utrecht University, Oct.
28, 2002, pp. 1-62. cited by other .
Wikipedia Online Definition, "Optical Character Recognition," Sep.
14, 2008, pp. 1-7, [online] [Retrieved on Sep. 14, 2008] Retrieved
from the
Internet<URL:http://en.wikipedia.org/wiki/Optical_character_recognition>.
cited by other .
Wikipedia Online Encyclopedia,"Automatic Identification and Data
Capture," Jul. 21, 2008, pp. 1-2, [Online] [Retrieved on Sep. 27,
2008] Retrieved from the
Internet<URL:http://en.wikipedia.org/wiki/Automatic_identification_and_data_capture>.
cited by other .
Zhang, Z., "A Flexible New Technique for Camera Calibration," IEEE
Transactions on Pattern Analysis and Machine Intelligence, Nov.
2000, pp. 1330-1334, vol. 22, No. 11. cited by other .
Zheng, Q.-F. et al., "Effective and Efficient Object-Based Image
Retrieval Using Visual Phases," Proceedings of the 14.sup.th Annual
ACM International Conference on Multimedia, MM'06, Oct. 23-27,
2006, Santa Barbara, CA, pp. 77-80. cited by other .
Zi, G., "Groundtruth Generation and Document Image Degradation,"
University of Maryland Language and Media Processing Laboratory
Technical Report (LAMP-TR-121), May 2005, 72 pages, [online]
Retrieved from the
Internet<URL:http://lampsrv01.umiacs.umd.edu/pubs/TechReports/LAMP_121/LAMP_121.pdf>.
cited by other .
European Search Report, European Application No. 09170045.0, Nov.
24, 2009, 4 Pages. cited by other .
Roth, M.T. et al., "The Garlic Project," Proc. of the 1996 ACM
SIGMOD International Conference on Management of Data, Montreal,
Quebec, Canada, Jun. 4, 1996, p. 557. cited by other .
Archive of "Scanbuy Solutions | Optical Intelligence for your Mobile
Devices," Scanbuy® Inc.,
www.scanbuy.com/website/solutions_summary.htm, [Online]
[Archived by http://archive.org on Jun. 19, 2006; Retrieved on Mar.
3, 2009] Retrieved from the
Internet<URL:http://web.archive.org/web/20060619172549/http://www.scanbuy.com/website/solutions_su . . . >.
cited by other .
Canny, J., "A Computational Approach to Edge Detection," IEEE
Transactions on Pattern Analysis and Machine Intelligence, Nov.
1986, pp. 679-714, vol. PAMI-8, No. 6. cited by other .
Di Stefano, L. et al., "A Simple and Efficient Connected Components
Labeling Algorithm," International Conference on Image Analysis and
Processing, 1999, pp. 322-327. cited by other .
Duda, R. O. et al., "Use of the Hough Transformation to Detect
Lines and Curves in Pictures," Communications of the ACM, Jan.
1972, pp. 11-15, vol. 15, No. 1. cited by other .
Erol, B. et al., "Prescient Paper: Multimedia Document Creation
with Document Image Matching," 17.sup.th International Conference
on Pattern Recognition, Aug. 23-26, 2004, Cambridge, UK. cited by
other .
Erol, B. et al., "Retrieval of Presentation Recordings with Digital
Camera Images," IEEE Conference on Computer Vision and Pattern
Recognition, Jun. 27-Jul. 2, 2004. cited by other .
Ezaki, N. et al., "Text Detection from Natural Scene Images:
Towards a System for Visually Impaired Persons," Proc. of 17th
Int. Conf. on Pattern Recognition (ICPR 2004), IEEE Computer
Society, Aug. 23-26, 2004, Cambridge, UK, pp. 683-686, vol. II.
cited by other .
Fadoua, D. et al., "Restoring Ink Bleed-Through Degraded Document
Images Using a Recursive Unsupervised Classification Technique,"
Lecture Notes in Computer Science 3872, Document Analysis Systems
VII, 7th International Workshop, DAS 2006, Feb. 13-15, 2006,
Nelson, New Zealand, Bunke, H. et al. (eds.), pp. 38-49. cited by
other .
Freund, Y. et al., "A Short Introduction to Boosting," Journal of
Japanese Society for Artificial Intelligence, Sep. 1999, pp.
771-780, vol. 14, No. 5. cited by other .
Hjelmas, E. et al., "Face Detection: A Survey," Computer Vision and
Image Understanding, 2001, pp. 236-274, vol. 83. cited by other .
Hull, J.J., "Document Image Matching on CCITT Group 4 Compressed
Images," SPIE Conference on Document Recognition IV, Feb. 8, 1997,
pp. 82-87. cited by other .
Jagannathan, L. et al., "Perspective Correction Methods for Camera
Based Document Analysis," Proc. First Int. Workshop on Camera-based
Document Analysis and Recognition, 2005, pp. 148-154. cited by
other .
Jain, A.K. et al., "An Introduction to Biometric Recognition," IEEE
Transactions on Circuits and Systems for Video Technology, Jan.
2004, pp. 4-20, vol. 14, No. 1. cited by other .
Po, L-M. et al., "A Novel Four-Step Search Algorithm for Fast Block
Motion Estimation," IEEE Transactions on Circuits and Systems for
Video Technology, Jun. 1996, pp. 313-317, vol. 6, Issue 3. cited by
other .
Rangarajan, K. et al. "Optimal Corner Detector," 1988, IEEE, pp.
90-94. cited by other .
Rosin, P.L. et al., "Image Difference Threshold Strategies and
Shadow Detection," Proceedings of the 6.sup.th British Machine
Vision Conference, 1995, 10 pages. cited by other .
Sezgin, M. et al., "Survey Over Image Thresholding Techniques and
Quantitative Performance Evaluation," Journal of Electronic
Imaging, Jan. 2004, pp. 146-165, vol. 13, No. 1. cited by other .
Triantafyllidis, G.A. et al., "Detection of Blocking Artifacts of
Compressed Still Images," Proceedings of the 11th International
Conference on Image Analysis and Processing (ICIAP '01), IEEE,
2001, pp. 1-5. cited by other .
U.S. Appl. No. 10/696,735, filed Oct. 28, 2003, Erol, B. et al.,
"Techniques for Using a Captured Image for the Retrieval of
Recorded Information," 58 pages. cited by other .
Zanibbi, R. et al. "A Survey of Table Recognition," International
Journal on Document Analysis and Recognition, 2004, pp. 1-33. cited
by other .
Zhao, W. et al., "Face Recognition: A Literature Survey," ACM
Computing Surveys (CSUR), 2003, pp. 399-458, vol. 35, No. 4. cited
by other .
Liu, Y. et al., "Automatic Texture Segmentation for Texture-Based
Image Retrieval," IEEE, Jan. 5-7, 2004, pp. 285-288. cited by other
.
Liu, T. et al., "A Fast Image Segmentation Algorithm for
Interactive Video Hotspot Retrieval," IEEE, 2001, pp. 3-8. cited by
other .
Wikipedia Online Encyclopedia, "Image Scanner," Last Modified Feb.
9, 2010, pp. 1-9, [Online] [Retrieved on Feb. 13, 2010] Retrieved
from the
Internet<URL:http://en.wikipedia.org/wiki/Image_scanner>.
cited by other .
Wikipedia Online Encyclopedia, "Waypoint," Last Modified Feb. 13,
2010, pp. 1-4, [Online] [Retrieved on Feb. 13, 2010] Retrieved from
the Internet<URL:http://en.wikipedia.org/wiki/Waypoint>.
cited by other .
Erol, B. et al., "Linking Presentation Documents Using Image
Analysis," IEEE, Nov. 9-12, 2003, pp. 97-101, vol. 1. cited by
other .
Hull, J.J. et al., "Visualizing Multimedia Content on Paper
Documents: Components of Key Frame Selection for Video Paper,"
Proceedings of the Seventh International Conference on Document
Analysis and Recognition (ICDAR'03), IEEE, 2003, 4 pages. cited by
other .
United States Office Action, U.S. Appl. No. 11/624,466, Jun. 8,
2010, 29 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/827,530, Jun. 9,
2010, 35 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,294, Jun. 11,
2010, 19 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,300, Jun. 11,
2010, 20 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,024, Jul. 14,
2010, 29 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,049, Jul. 28,
2010, 27 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,279, Aug. 5,
2010, 37 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,286, Aug. 5,
2010, 28 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/240,596, Aug. 6,
2010, 32 pages. cited by other .
European Search Report, European Application No. 09156089.6, Jun.
19, 2009, 8 pages. cited by other .
Marques, O. et al., "Content-Based Image and Video Retrieval, Video
Content Representation, Indexing, and Retrieval, a Survey of
Content-Based Image Retrieval Systems, CBVQ (Content-Based Visual
Query)," Content-Based Image and Video Retrieval [Multimedia
Systems and Applications Series], Apr. 1, 2002, pp. 15-117, vol.
21, Kluwer Academic Publishers Group, Boston, USA. cited by other .
Japanese Office Action, Japanese Application No. 2004-293962, Aug.
24, 2010, 3 pages. cited by other .
Extended European Search Report, Application No.
09178280.5-2201/2202646, Aug. 31, 2010, 6 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,143, Aug. 18,
2010, 9 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,272, Aug. 23,
2010, 31 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,126, Sep. 3,
2010, 28 pages. cited by other .
United States Notice of Allowance, U.S. Appl. No. 11/461,095, Sep.
27, 2010, 29 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/060,194, Oct. 1,
2010, 29 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,294, Oct. 7,
2010, 17 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,300, Oct. 6,
2010, 20 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/827,530, Oct. 7,
2010, 21 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/624,466, Oct. 14,
2010, 11 pages. cited by other .
United States Notice of Allowance, U.S. Appl. No. 11/461,091, Oct.
18, 2010, 31 pages. cited by other .
United States Notice of Allowance, U.S. Appl. No. 11/461,024, Nov.
15, 2010, 10 pages. cited by other .
United States Notice of Allowance, U.S. Appl. No. 11/461,049, Nov.
16, 2010, 10 pages. cited by other .
United States Notice of Allowability, U.S. Appl. No. 11/461,091,
Nov. 17, 2010, 22 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/777,142, Nov. 10,
2010, 17 pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/776,520, Dec. 7, 2010, 43
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/719,437, Dec. 9, 2010, 38
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,126, Dec. 23, 2010, 30
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,279, Jan. 7, 2011, 44
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/240,596, Jan. 21, 2011, 21
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,286, Jan. 21, 2011, 34
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,143, Feb. 4, 2011, 16
pages. cited by other .
Chinese Office Action, Chinese Patent Application No.
200910138044.X, Jan. 26, 2011, 6 pages. cited by other .
U.S. Notice of Allowance, U.S. Appl. No. 11/624,466, Feb. 22, 2011,
12 pages. cited by other .
U.S. Notice of Allowance, U.S. Appl. No. 11/461,272, Feb. 23, 2011,
28 pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/060,194, Feb. 25, 2011, 18
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/879,933, Mar. 2, 2011, 7
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/210,519, Mar. 14, 2011, 38
pages. cited by other .
U.S. Notice of Allowance, U.S. Appl. No. 11/461,024, Mar. 16, 2011,
12 pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,300, Mar. 18, 2011, 25
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/777,142, Mar. 18, 2011, 21
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,037, Mar. 30, 2011, 29
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/210,511, Apr. 4, 2011, 49
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/247,202, Apr. 6, 2011, 37
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,126, Apr. 12, 2011, 27
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 11/461,294, Apr. 12, 2011, 23
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/210,540, Apr. 15, 2011, 45
pages. cited by other .
U.S. Office Action, U.S. Appl. No. 12/340,124, Apr. 15, 2011, 48
pages. cited by other .
Antonacopoulos et al., "Flexible Page Segmentation Using The
Background", Proceedings of the IAPR International Conference on
Pattern Recognition, Jerusalem, Oct. 9-13, 1994. Conference B:
Pattern Recognition and Neural Networks; [Proceedings of the IAPR
International Conference On Pattern Recognition], Los Alamitos,
IEE, vol. 2, Oct. 9, 1994, pp. 339-344, XP000509905, ISBN:
978-0-8186-6272-0. cited by other .
Reniers et al., "Skeleton-based Hierarchical Shape Segmentation",
IEEE International Conference on Shape Modeling and Applications.
SMI'07, Jun. 1, 2007, Computer Society, pp. 179-188, XP031116745,
ISBN: 978-0-7695-2815-1. cited by other .
Rademacher, "View-Dependent Geometry", Computer Graphics
Proceedings, Annual Conference Series, Siggraph 99, Los Angeles,
California Aug. 8-13, 1999, pp. 439-446, XP001024743. cited by
other .
U.S. Patent Office Action, U.S. Appl. No. 12/121,275, Apr. 20,
2011, 44 pages. cited by other .
U.S. Patent Office Action, U.S. Appl. No. 11/776,520, Apr. 28,
2011, 10 pages. cited by other .
Extended European Search Report, European Patent Application No.
08252377.0, May 2, 2011, 6 pages. cited by other .
U.S. Patent Office Action, U.S. Appl. No. 12/210,519, Jun. 16,
2011, 22 pages. cited by other .
U.S. Patent Office Action, U.S. Appl. No. 12/060,194, Jun. 27,
2011, 18 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,279, Jul. 8,
2011, 46 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,286, Jul. 15,
2011, 37 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,126, Jul. 22,
2011, 35 pages. cited by other .
United States Notice of Allowance, U.S. Appl. No. 12/247,202, Jul.
28, 2011, 16 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/461,143, Aug. 11,
2011, 20 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/776,530, Aug. 19,
2011, 54 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/253,715, Aug. 31,
2011, 58 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/060,198, Sep. 1,
2011, 87 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/060,200, Sep. 2,
2011, 65 pages. cited by other .
United States Office Action, U.S. Appl. No. 11/777,142, Sep. 23,
2011, 22 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/210,511, Sep. 28,
2011, 25 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/247,205, Oct. 6,
2011, 56 pages. cited by other .
United States Office Action, U.S. Appl. No. 12/342,330, Oct. 7,
2011, 47 pages. cited by other.
Primary Examiner: Koziol; Stephen
Attorney, Agent or Firm: Patent Law Works LLP
Claims
The invention claimed is:
1. A computer-implemented method for generating invisible
junctions, the method comprising: receiving, with a processor, an
electronic document; applying, with the processor, a distance
transformation to the electronic document to generate a distance
transform; detecting, with the processor, a skeleton in the
distance transform; determining, with the processor, junction
points in the skeleton as the invisible junctions, the junction
points having a distance transformation value based on the distance
transform; determining, with the processor, a junction size for at
least one junction point, the junction size based at least in part
on the distance transformation value; and creating, with the
processor, a feature descriptor for each invisible junction, the
feature descriptor including the junction size.
2. The method of claim 1 wherein: determining the junction points
includes determining a junction location for each junction point;
and the feature descriptor includes the junction location.
3. The method of claim 1 wherein the junction size is the distance
transformation value multiplied by a factor.
4. The method of claim 1 wherein: determining the junction points
includes determining a junction orientation for each junction
point; and the feature descriptor includes the junction
orientation.
5. The method of claim 1 wherein the junction points are at least
one of a local maxima of a distance transform field and a local
maxima on the skeleton.
6. A computer-implemented method for generating and storing
invisible junctions, the method comprising: receiving, with a
processor, an electronic document; applying, with the processor, a
distance transformation to the electronic document to generate a
distance transform; detecting, with the processor, a skeleton in
the distance transform; determining, with the processor, junction
points in the skeleton as the invisible junctions, the junction
points having a distance transformation value based on the distance
transform; determining, with the processor, a junction size for at
least one junction point, the junction size based at least in part
on the distance transformation value; extracting, with the
processor, an invisible junction feature descriptor from the
electronic document for each invisible junction, the invisible
junction feature descriptor including the junction
size; creating, with the processor, a feature index from the
invisible junction feature descriptors; and storing the feature
index.
7. The method of claim 6, wherein storing the feature index
comprises storing the feature index in a database.
8. The method of claim 6, wherein storing the feature index
comprises storing the feature index and the invisible junction
feature descriptors in a database.
9. The method of claim 6, wherein creating the feature index from
the invisible junction feature descriptors comprises: creating a
quantization tree; and adding a feature point to the quantization
tree for the invisible junction feature descriptors.
10. The method of claim 9, wherein the quantization tree and the
feature point are stored in a database.
11. The method of claim 9, further comprising: extracting an
additional invisible junction feature descriptor; and adding a
second feature point to the quantization tree for the additional
invisible junction feature descriptor.
12. The method of claim 11, wherein the quantization tree is
trained by a set of training features using hierarchical
clustering.
13. The method of claim 11, wherein the feature point is added to
the quantization tree by adding a leaf node including a page ID and
coordinates (x, y) of the feature point.
14. The method of claim 6, wherein creating the feature index from
the invisible junction feature descriptors comprises: organizing a
feature point into a k-d tree; reducing a feature dimension using
principal component analysis; and ordering the feature dimension to
effectively partition the k-d tree.
15. A system for generating invisible junctions, the system
comprising: a processor; a feature determination unit stored on a
memory and executable by the processor, the feature determination
unit having an input and an output for applying a distance
transformation to an electronic document to generate a distance
transform, for detecting a skeleton in the distance transform, for
determining junction points in the skeleton as the invisible
junctions, the junction points having a distance transformation
value based on the distance transform and for determining a
junction size that is based at least in part on the distance
transformation value, the input of the feature determination unit
coupled to receive the electronic document; and a descriptor
creation unit stored on the memory and executable by the processor,
the descriptor creation unit having an input and an output for
creating a feature descriptor for each invisible junction, the
feature descriptor including the junction size, the input of the
descriptor creation unit coupled to the output of the feature
determination unit to receive information about the invisible
junctions, the output of the descriptor creation unit providing the
feature descriptors.
16. The system of claim 15, further comprising a database to store
the feature descriptors, the output of the descriptor creation unit
coupled to the database.
17. The system of claim 15, wherein the feature determination unit
comprises a location determination module having an input and an
output for determining a junction location, the input of the
location determination module coupled to receive the electronic
document, the output of the location determination module coupled
to the input of the descriptor creation unit.
18. The system of claim 17, wherein the feature determination unit
comprises a size determination module having an input and an output
for determining the junction size, the input of the size
determination module coupled to the output of the location
determination module to receive the junction location, the output
of the size determination module coupled to the input of the
descriptor creation unit.
19. The system of claim 18, wherein the feature determination unit
comprises an orientation determination module having an input and
an output for determining a junction orientation, the input of the
orientation determination module coupled to the output of the size
determination module to receive the junction size, the output of
the orientation determination module coupled to the input of the
descriptor creation unit.
20. A system for generating invisible junctions, the system
comprising: a processor; a feature extraction unit stored on a
memory and executable by the processor, the feature extraction unit
having an input and an output for applying a distance
transformation to an electronic document to generate a distance
transform, for detecting a skeleton in the distance transform, for
determining junction points in the skeleton as the invisible
junctions, the junction points having a distance transformation
value based on the distance transform, for determining a junction
size based on the distance transformation value and for producing
an invisible junction feature descriptor for each invisible
junction, the invisible junction feature descriptor including the
junction size, the input of the feature extraction unit coupled to
receive the electronic document; and a feature indexing unit stored
on the memory and executable by the processor, the feature indexing
unit having an input and an output for creating a feature index for
the invisible junctions, the input of the feature indexing unit
coupled to the output of the feature extraction unit to receive the
invisible junction feature descriptors, the output of the feature
indexing unit providing the feature index.
21. The system of claim 20, further comprising a database to store
the feature index, the output of the feature indexing unit coupled
to the database.
22. The system of claim 20, wherein the feature extraction unit
creates the invisible junction feature descriptors for the
invisible junctions.
23. The system of claim 20, wherein the feature extraction unit
comprises: a location determination module having an input and an
output for determining a junction location, the input of the
location determination module coupled to receive the electronic
document; a size determination module having an input and an output
for determining the junction size, the input of the size
determination module coupled to the output of the location
determination module to receive the junction location; an
orientation determination module having an input and an output for
determining a junction orientation, the input of the orientation
determination module coupled to the output of the size
determination module to receive the junction size; and wherein the
feature extraction unit is coupled to the feature indexing unit to
provide the junction location, junction size and junction
orientation.
24. The system of claim 20, wherein the feature indexing unit also
provides the invisible junction feature descriptors.
25. The system of claim 20, wherein the feature indexing unit
creates a quantization tree and adds a feature point to the
quantization tree for the invisible junction feature
descriptors.
26. The system of claim 25, wherein the feature indexing unit adds
the feature point by adding a leaf node including a page ID and
coordinates (x, y) of the feature point.
27. The system of claim 20, wherein the feature descriptor includes
a feature point and the feature indexing unit organizes the feature
point into a k-d tree, reduces a feature dimension of the k-d tree
using principal component analysis; and orders the feature
dimension to effectively partition the k-d tree.
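The two indexing schemes recited in the claims above (a quantization
tree whose leaf nodes carry a page ID and the (x, y) coordinates of a
feature point, per claims 9 and 13, and a k-d tree over PCA-reduced
feature dimensions, per claim 14) can be illustrated with a short
sketch of the k-d tree variant. This is a minimal sketch assuming
NumPy and SciPy; the function names are hypothetical and this is not
the patented implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_feature_index(descriptors, page_ids, coords, n_dims=10):
        """descriptors: (N, D) float array of junction feature vectors."""
        # Reduce the feature dimension with principal component analysis;
        # the principal axes come out ordered by explained variance, which
        # also orders the dimensions that partition the k-d tree (claim 14).
        mean = descriptors.mean(axis=0)
        _, _, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
        projection = vt[:n_dims]
        tree = cKDTree((descriptors - mean) @ projection.T)
        # Each stored point carries a page ID and its (x, y) location,
        # analogous to the leaf-node payload of claim 13.
        payload = list(zip(page_ids, coords))
        return tree, payload, (mean, projection)

    def query_feature_index(tree, payload, transform, descriptor, k=5):
        mean, projection = transform
        dists, idxs = tree.query((descriptor - mean) @ projection.T, k=k)
        return [(payload[i], float(d)) for i, d in zip(idxs, dists)]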
Description
BACKGROUND OF THE INVENTION
The present invention relates to processing techniques for patch
recognition. More particularly, the present invention relates to
systems and methods for using a new feature referred to herein as
invisible junctions for image-based document patch recognition.
Computers and electronic documents were once restricted to use in a
desktop environment where electronic documents were output by the
computer to printers and printed on paper. Printers and copiers are
used in private and commercial office environments, in home
environments with personal computers, and in document printing and
publishing service environments. Once an electronic document is
printed out on paper, manipulations on these two types of documents
are mostly independent. Printing and copying technology has not
been able to bridge the gap between static printed media (i.e.,
paper documents), and the "virtual world" of interactivity that
includes the likes of digital communication, networking,
information provision, advertising, entertainment and electronic
commerce.
The advent and ever-increasing popularity of smaller portable
computing devices and personal electronic devices, such as personal
digital assistant (PDA) devices, cellular telephones (e.g.,
cellular camera phones) and digital cameras over the past few
years, have expanded the concept of documents to include their
electronic version by making them available in an electronically
readable and searchable form and by introducing interactive
multimedia capabilities, which are unparalleled by traditional
printed media.
There continue to be problems in the prior art in bridging between
the world of electronic documents on one hand and the world of
paper documents on the other. A gap exists between the virtual
multimedia-based world that is accessible electronically and the
physical world of print media. In particular, it is still very
difficult and/or computationally expensive to use a printed
document to access or even find the electronic document from which
the paper document was generated. A key obstacle to identifying and
finding an electronic document corresponding to a printed document
is the recognition of an image patch that is being captured by the
camera. While capturing an image of the printed document has become
trivial with the proliferation of cell phones with cameras, there
is no way to use such low quality images for electronic document
retrieval.
In other words, there is not an existing method that can
effectively identify from a database the document page the camera
is looking at, pin-point the exact camera look-at point on the
recognized page, and estimate the frame box of the image on the
recognized document page. This recognition task is made even more
challenging considering that: 1) the input image is a small portion
of the document page being looked at; 2) there is a large number of
document pages in the database that look similar to each other; 3)
the hand-held camera could have very different viewing conditions
including different viewing angles and distances with respect to
paper and camera motion due to hand movement; 4) there are
considerable photometric changes due to lighting changes; and 5)
there may be non-planar geometric deformation if the user is
holding the paper by hand.
While there have been attempts in the prior art, they suffer from a
number of deficiencies. For example, a popular Scale-Invariant
Feature Transform (or SIFT) is not suitable for text documents. The
SIFT key points are chosen from the extrema in scale space. More
specifically, all scales and image locations are scanned for local
maxima in scale space; these local maxima are chosen as key point
candidates. This makes SIFT poor at discriminating text, and SIFT
key points are not stable or repeatable in noisy environments.
Other prior art approaches focus on geometric features of the text
block but they are not suitable for Asian or ideographic
languages.
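For contrast with the approach introduced below, a minimal SIFT
keypoint detection example is shown here. It assumes opencv-python
4.4 or later, where SIFT is available in the main module, and a
hypothetical input file name; run on a text-heavy page it
illustrates the dense, unstable key points on character strokes
noted above.

    # Minimal SIFT detection, for comparison purposes only.
    import cv2

    # Hypothetical input: a grayscale image of a printed document patch.
    image = cv2.imread("document_patch.png", cv2.IMREAD_GRAYSCALE)

    # Key points are chosen from local extrema in scale space.
    sift = cv2.SIFT_create()
    keypoints = sift.detect(image, None)
    print(f"{len(keypoints)} SIFT key points detected")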
SUMMARY OF THE INVENTION
The present invention overcomes the deficiencies of the prior art
with a system for using invisible junctions for image-based
document patch recognition. The system is advantageous because it
uses invisible junctions of a low quality image patch to identify
and retrieve a corresponding electronic document, page, look-at
point and viewing region. The system is also very fast, works with
both western and eastern languages, and works with images that mix
text and image components. The present invention
uses invisible junctions which are a set of local features unique
to every page of the electronic document to match the captured
image to a part of an electronic document. In one embodiment, the
system of the present invention includes: an image capture device,
a feature extraction and recognition system and a database. The
feature extraction and recognition system further comprises a
feature extraction unit, a feature indexing unit, a feature
retrieval unit and a geometric estimation unit. When an electronic
document is printed, the feature extraction and recognition system,
for example in print-capture software, captures an image of the
document page. The features in the captured image are then
extracted, indexed and stored in the database. Given a query image,
usually a small patch of some document page captured by a low
resolution image capture device such as a web-cam, the features in
the query image are extracted and compared against those stored in
the database to identify the query image. The present invention
also includes methods for feature extraction, feature indexing,
feature retrieval and geometric estimation.
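As an illustration of the pipeline just described, the following
sketch extracts invisible-junction-style features from a page image:
binarize, compute the distance transform of the blank space,
skeletonize it, and report skeleton junction points whose size is
the distance-transform value multiplied by a factor. It is a minimal
sketch assuming OpenCV, NumPy and scikit-image on an 8-bit grayscale
input; the names are illustrative and this is not the patented
implementation.

    import cv2
    import numpy as np
    from skimage.morphology import skeletonize

    def extract_invisible_junctions(page_image, size_factor=2.0):
        """page_image: 8-bit grayscale page. Returns (x, y, size) tuples."""
        # Binarize with Otsu: text pixels become 0, blank space 255.
        _, blank = cv2.threshold(page_image, 0, 255,
                                 cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        # Distance transform: each blank pixel's distance to nearest text.
        dist = cv2.distanceTransform(blank, cv2.DIST_L2, 5)
        # Skeleton of the blank space between characters and lines.
        skel = skeletonize(blank > 0)
        # Junction points: skeleton pixels with 3 or more skeleton
        # neighbors in their 8-neighborhood.
        counts = cv2.filter2D(skel.astype(np.uint8), -1,
                              np.ones((3, 3), np.float32))
        junctions = skel & ((counts - skel.astype(np.uint8)) >= 3)
        ys, xs = np.nonzero(junctions)
        return [(int(x), int(y), float(dist[y, x]) * size_factor)
                for x, y in zip(xs, ys)]

A full descriptor would also record a junction orientation at each
point, as claim 4 above recites.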
The features and advantages described herein are not all-inclusive
and, in particular, many additional features and advantages will be
apparent to one of ordinary skill in the art in view of the figures
and description. Moreover, it should be noted that the language
used in the specification has been principally selected for
readability and instructional purposes, and not to limit the scope
of the inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is illustrated by way of example, and not by way of
limitation in the figures of the accompanying drawings in which
like reference numerals are used to refer to similar elements.
FIG. 1 is a functional diagram of an embodiment of a system for
feature extraction and recognition in accordance with the present
invention showing a graphical representation of a piece of paper
with text and images.
FIG. 2 illustrates a block diagram of an embodiment of a system
configured in accordance with an embodiment of the present
invention.
FIG. 3 illustrates a block diagram of a memory of the system of
FIG. 2 configured in accordance with a first embodiment of the
present invention.
FIG. 4 is a block diagram of an embodiment of a Feature Extraction
and Recognition system of FIG. 1 in accordance with the present
invention.
FIGS. 5A to 5E are graphical representations of the steps of
feature extraction being performed on an example image in
accordance with the present invention including an original input
image, a binarization (slightly blurred image) and a distance
transform (significantly blurred image).
FIG. 6 is a flow chart of an embodiment of a method for performing
feature extraction in accordance with the present invention.
FIG. 7A is a flow chart of an embodiment of a method for performing
feature indexing in accordance with the present invention.
FIG. 7B is a flow chart of an alternate embodiment of a method for
performing feature indexing in accordance with the present
invention.
FIG. 8 is a flow chart of an embodiment of a method for feature
retrieval in accordance with the present invention.
FIG. 9 is a flow chart of an embodiment of a method for geometric
estimation in accordance with the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
A system for using a new feature referred to herein as invisible
junctions for image-based document patch recognition is described.
In the following description, for purposes of explanation, numerous
specific details are set forth in order to provide a thorough
understanding of the invention. It will be apparent, however, to
one skilled in the art that the invention can be practiced without
these specific details. In other instances, structures and devices
are shown in block diagram form in order to avoid obscuring the
invention.
Reference in the specification to "one embodiment" or "an
embodiment" means that a particular feature, structure, or
characteristic described in connection with the embodiment is
included in at least one embodiment of the invention. The
appearances of the phrase "in one embodiment" in various places in
the specification are not necessarily all referring to the same
embodiment.
Some portions of the detailed descriptions that follow are
presented in terms of algorithms and symbolic representations of
operations on data bits within a computer memory. These algorithmic
descriptions and representations are the means used by those
skilled in the data processing arts to most effectively convey the
substance of their work to others skilled in the art. An algorithm
is here, and generally, conceived to be a self-consistent sequence
of steps leading to a desired result. The steps are those requiring
physical manipulations of physical quantities. Usually, though not
necessarily, these quantities take the form of electrical or
magnetic signals capable of being stored, transferred, combined,
compared, and otherwise manipulated. It has proven convenient at
times, principally for reasons of common usage, to refer to these
signals as bits, values, elements, symbols, characters, terms,
numbers or the like.
It should be borne in mind, however, that all of these and similar
terms are to be associated with the appropriate physical quantities
and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the following
discussion, it is appreciated that throughout the description,
discussions utilizing terms such as "processing" or "computing" or
"calculating" or "determining" or "displaying" or the like, refer
to the action and processes of a computer system, or similar
electronic computing device, that manipulates and transforms data
represented as physical (electronic) quantities within the computer
system's registers and memories into other data similarly
represented as physical quantities within the computer system
memories or registers or other such information storage,
transmission or display devices.
The present invention also relates to an apparatus for performing
the operations herein. This apparatus may be specially constructed
for the required purposes, or it may comprise a general-purpose
computer selectively activated or reconfigured by a computer
program stored in the computer. Such a computer program may be
stored in a computer readable storage medium, such as, but is not
limited to, any type of disk including floppy disks, optical disks,
CD-ROMs, and magnetic-optical disks, read-only memories (ROMs),
random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical
cards, or any type of media suitable for storing electronic
instructions, each coupled to a computer system bus.
Finally, the algorithms and displays presented herein are not
inherently related to any particular computer or other apparatus.
Various general-purpose systems may be used with programs in
accordance with the teachings herein, or it may prove convenient to
construct more specialized apparatus to perform the required method
steps. The required structure for a variety of these systems will
appear from the description below. In addition, the present
invention is not described with reference to any particular
programming language. It will be appreciated that a variety of
programming languages may be used to implement the teachings of the
invention as described herein.
System Overview
Referring now to FIG. 1, an embodiment of a system 100 of the
present invention for using a new feature, referred to herein as an
invisible junction, for image-based document patch recognition is
shown. The present invention uses invisible junctions, which are a
set of local features as described below with reference to FIGS.
5A-5D, to match the captured image to a portion of an electronic
document. In this embodiment, the system 100 comprises: an image
capture device 106, a feature extraction and recognition system 108
and a database 110 configured to capture an image of a portion 104
of a printed or paper document 102 and retrieve the corresponding
electronic document, page, look-at point and viewing region. The
correspondence between the original area 104 of the paper document
102 and the electronic representation 112 is shown by line 114.
FIG. 1 illustrates an example of the system 100 capturing 116 a
query image such as a small patch 104 of some paper document 102
with the image capture device 106. The feature extraction and
recognition system 108 extracts the features in the query image and
compares them against those stored in the database 110 to identify
the electronic representation 112.
The image capture device 106 is any conventional type known in the
art. For example, in the query portion of the process, the image
capture device 106 may be a web camera, a camera that is part of
a cell phone or a digital camera. The present invention is
particularly advantageous because it is operable with low
resolution image capture devices. The image capture device 106 is
coupled for communication with the feature extraction and
recognition system 108. This communication may be via a wired or
wireless network.
In another embodiment where model images are being indexed, the
image capture device 106 is a scanner of a multifunction
peripheral or software for generating a printed document. For
example, the image capture device 106 could be print capture
software in such an embodiment. These embodiments are used to index
documents being output or printed by a computer (not shown).
The feature extraction and recognition system 108 performs the
functions of feature extraction, indexing and retrieval for
invisible junctions. The feature extraction and recognition system
108 further comprises a feature extraction unit 310, a feature
indexing unit 312, a feature retrieval unit 314 and a geometric
estimation unit 316 as will be described in more detail below with
reference to FIGS. 2-4. The feature extraction and recognition
system 108 is coupled to the image capture device 106 to receive an
image of a paper document, a model image of the paper document or
an image patch. The feature extraction and recognition system 108
is coupled to the database 110 to store feature descriptions of
document pages and electronic document or indices to electronic
documents. The feature extraction and recognition system 108 is
also coupled to the database 110 to retrieve electronic document or
indices to electronic documents during retrieval. The feature
extraction and recognition system 108 is also coupled to output the
retrieved electronic document, page, look-at point and viewing
region to other computing systems for display with user
interfaces.
The database 110 is a conventional type and stores indices,
electronic documents, feature descriptions and other information
used in the indexing and retrieval process. The database may be a
federation of databases. In one embodiment, the database 110 is
stored within a multifunction peripheral device.
The system 100 of the present invention is particularly
advantageous because it provides a number of benefits. First, it
uses a new feature, referred to herein as an invisible junction,
that enables the identification and retrieval of a corresponding
electronic document, page, look-at point and viewing region. The
invisible junction provides stability or independence from noise
and viewing condition; is repeatable in both the query image and
the model image; provides strong discrimination, and is low
dimensional. Second, the present invention provides a very fast
recognition and retrieval at a rate of nearly 4 frames per second
on a database of 10,000 pages. The recognition and retrieval is fast
enough for real-time interactive applications. Third, the present
invention can use noisy low quality input image such as available
with a cell phone camera for recognition and retrieval. In other
words, the retrieval operates accurately despite image rotation,
scale changes and occlusions. Fourth, the present invention can
perform recognition and retrieval on both eastern and western
languages. Fifth, the present invention can perform recognition and
retrieval on input images that include both text and graphic images
in any proportion of text to image.
System Embodiments
Referring now also to FIG. 2, a functional block diagram of the
feature extraction and recognition system 108 configured in
accordance with an embodiment of the present invention is shown.
The feature extraction and recognition system 108 preferably
comprises a control unit 250, a display device 210, an input device
212, cursor control 214, a communication device 216, and one or
more input/output (I/O) devices 218.
The control unit 250 comprises an arithmetic logic unit, a
microprocessor, a general purpose computer or some other
information appliance equipped to provide electronic display
signals to display device 210. In one embodiment, the control unit
250 comprises a general purpose computer having a graphical user
interface, which may be generated by, for example, a program
written in Java running on top of an operating system like
WINDOWS® or UNIX® based operating systems. In one
embodiment, one or more application programs are executed by
control unit 250 including, without limitation, drawing
applications, word processing applications, electronic mail
applications, financial applications and web browser
applications.
Still referring to FIG. 2, the control unit 250 is shown including
processor 202, main memory 204 and data storage device 206, all of
which are communicatively coupled to system bus 208.
Processor 202 processes data signals and may comprise various
computing architectures including a complex instruction set
computer (CISC) architecture, a reduced instruction set computer
(RISC) architecture, or an architecture implementing a combination
of instruction sets. Although only a single processor is shown in
FIG. 2, multiple processors may be included.
Main memory 204 stores instructions and/or data that may be
executed by processor 202. The instructions and/or data may
comprise code for performing any and/or all of the techniques
described herein. Main memory 204 may be a dynamic random access
memory (DRAM) device, a static random access memory (SRAM) device,
or some other memory device known in the art. The memory 204 is
described in more detail below with reference to FIG. 3.
Data storage device 206 stores data and instructions for processor
202 and comprises one or more devices including a hard disk drive,
a floppy disk drive, a CD-ROM device, a DVD-ROM device, a DVD-RAM
device, a DVD-RW device, a flash memory device, or some other mass
storage device known in the art. In one embodiment, data storage
device 206 includes database 110. In an alternate embodiment, the
database is coupled to the feature extraction and recognition
system 108 via communication device 216 and signal line 220
representing a conventional network.
System bus 208 represents a shared bus for communicating
information and data throughout control unit 250. System bus 208
may represent one or more buses including an industry standard
architecture (ISA) bus, a peripheral component interconnect (PCI)
bus, a universal serial bus (USB), or some other bus known in the
art to provide similar functionality. Additional components coupled
to control unit 250 through system bus 208 include the display
device 210, the input device 212, cursor control 214, the
communication device 216 and the I/O device(s) 218.
Display device 210 represents any device equipped to display
electronic images and data as described herein. In one embodiment,
the display device 210 includes a liquid crystal display (LCD) and
light emitting diodes (LEDs) that provide status feedback, operation
settings and other information to the user. In other embodiments,
the display device 210 may be, for example, a cathode ray tube
(CRT) or any other similarly equipped display device, screen or
monitor. In one embodiment, display device 210 is equipped with a
touch screen and/or includes a digitizer in which a
touch-sensitive, transparent panel covers the screen of display
device 210.
In one embodiment, the input device 212 is a keyboard. The keyboard
can be a QWERTY keyboard, a key pad, or representations of such
created on a touch screen. Cursor control 214 represents a user
input device equipped to communicate positional data as well as
command selections to processor 202. Cursor control 214 may include
a mouse, a trackball, a stylus, a pen, a touch screen, cursor
direction keys or other mechanisms to cause movement of a
cursor.
Communication device 216, which in one embodiment is a network
controller, links control unit 250 to a network 220 that may include
multiple processing systems. The network of processing systems may
comprise a local area network (LAN), a wide area network (WAN)
(e.g., the Internet), and/or any other interconnected data path
across which multiple devices may communicate. The control unit 250
also has other conventional connections to other systems such as a
network for distribution of files (media objects) using standard
network protocols such as TCP/IP, http, https, and SMTP as will be
understood by those skilled in the art.
One or more I/O devices 218 are coupled to the bus 208. These I/O
devices may be part of the other systems (not shown). For example,
the I/O device 218 can include an image scanner for capturing an
image of a document. The I/O device 218 may also include a printer
for generating documents. The I/O device 218 may also include an
audio input/output device equipped to receive audio input via a
microphone and transmit audio output via speakers. In one
embodiment, the audio device is a general-purpose audio
add-in/expansion card designed for use within a general-purpose
computer system. Optionally, the audio I/O device may contain one or
more analog-to-digital or digital-to-analog converters, and/or one
or more digital signal processors to facilitate audio
processing.
It should be apparent to one skilled in the art that feature
extraction and recognition system 108 may include more or fewer
components than those shown in FIG. 2 without departing from the
spirit and scope of the present invention. For example, feature
extraction and recognition system 108 may include additional
memory, such as, for example, a first or second level cache, or one
or more application specific integrated circuits (ASICs). Further,
the feature extraction and recognition system 108 may include
multiple data buses rather than the single bus 208. Multiple buses
allow fast transfer of image data from and between the components
of the feature extraction and recognition system 108, and
simultaneous data transfer of user interface information to a
display device, for example. Similarly, additional input/output
devices 218 may be coupled to control unit 250 including, for
example, an RFID tag reader, digital still or video cameras, or
other devices that may or may not be equipped to capture and/or
download electronic data to control unit 250. One or more
components could also be eliminated, such as the keyboard 212 and
cursor control 214.
FIG. 3 is a block diagram of one embodiment of the memory unit 204
for the feature extraction and recognition system 108. The memory
unit 204 preferably comprises: an operating system 302, an
application 304, a control module 306, a browser 308, a feature
extraction unit 310, a feature indexing unit 312, a feature
retrieval unit 314 and a geometric estimation unit 316. Those
skilled in the art will recognize that the memory 204 also includes
buffers for storing data and other information temporarily during
the indexing or retrieval process. As noted above, the memory unit
204 stores instructions and/or data that may be executed by
processor 202. The instructions and/or data comprise code for
performing any and/or all of the techniques described herein. These
modules 302-316 are coupled by bus 208 to the processor 202 for
communication and cooperation with the system 100. Those skilled in
the art will recognize that while the present invention will now be
described as modules or portions of a memory unit 204 of a
computer, the modules or portions thereof may also be stored in
other media such as permanent data storage device 206 and may be
distributed across a network 220 having a plurality of different
computers such as in a client/server environment.
The operating system 302 is preferably a custom operating system
that is accessible to a user via an application interface. In an
alternate embodiment, the operating system 302 is one of a
conventional type such as WINDOWS®, SOLARIS® or LINUX®
based operating systems.
The memory unit 204 also includes one or more application programs
304 including, without limitation, drawing applications, word
processing applications, electronic mail applications, financial
applications and web browser applications. In one embodiment, the
applications 304 specifically utilize the unique capabilities of
the other modules or units of memory 204 to bridge the gap between
electronic documents and paper documents by enabling users to
establish and identify links between a printed paper and its
corresponding electronic document. More specifically, the
applications 304 access electronic data from a paper and
attach/embed electronic data to specific locations on a paper so
that a user can later retrieve that electronic data from the
paper, using cameras as the tool for the above operations. For
example, one application 304 retrieves an electronic file
(Microsoft Word, Adobe PDF, etc.) of a printed paper in response to
the user simply pointing a web-cam and capturing an image of the
printed paper. Another example application 304 is a paper-based
photo-wiki, which lets users attach pictures they have taken to
specific locations on a paper map (and its electronic version) by
simply pointing the camera at the desired locations on the
physical paper map. There are many other applications 304 such as
novel user interfaces to access or browse a document database,
digitized pen and paper, map navigator, security in copiers and
scanners, camera games and duplicate detection for copyright
protection.
The control module 306 is used to control the other modules of the
memory 204. The control module 306 is adapted for control of and
communication with the application 304, the browser 308, the
feature extraction unit 310, the feature indexing unit 312, the
feature retrieval unit 314 and the geometric estimation unit 316.
The operation of the control module 306 will be apparent from the
description of FIGS. 4 and 6-9 below. While the control module 306
is shown as a separate module of the memory 204, those skilled in
the art will recognize that the control module 306 in another
embodiment may be distributed as routines in the other modules
310-316.
The browser 308 is a conventional type such as Internet Explorer by
Microsoft® or Firefox by Mozilla. The Web browser is a software
application that enables a user to display and interact with text,
images, and other information typically located on a Web page at a
website on the World Wide Web or a local area network.
The feature extraction unit 310 is software and routines for
extracting the invisible junction features from original electronic
documents and scanned images of paper documents. In one embodiment,
the feature extraction unit 310 includes an input coupled to
receive original electronic documents and an output to provide a
feature description to the feature indexing unit 312. In another
embodiment, the feature extraction unit 310 includes an input
coupled to receive an image of a paper document and provide a
feature description to the feature retrieval unit 314. In yet
another embodiment, the feature extraction unit 310 includes a
location determination module, a size determination module, an
orientation determination module and a descriptor creator module
coupled in series to produce the feature description. The operation
of the feature extraction unit 310 is described below in detail
with reference to FIGS. 5A-6. The feature extraction unit 310 is
coupled for communication with the feature indexing unit 312 and
the feature retrieval unit 314.
The feature indexing unit 312 is software and routines for
generating one or more indices of invisible junction feature
descriptions and storing them in the database 110. In one
embodiment, the feature indexing unit 312 uses a quantization tree
to index the feature descriptions. The feature indexing unit 312 is
coupled to an output of the feature extraction unit 310 to receive
invisible junction feature descriptions. The feature indexing unit
312 is coupled to the database 110. The operation of the feature
indexing unit 312 is described in detail below with reference to
FIGS. 7A and 7B.
The feature retrieval unit 314 is software and routines for
creating a query from the invisible junction feature descriptions
of a paper document and communicating with the database 110 to
retrieve a document, a point and a viewing region of a matching
electronic document from the database 110. The feature retrieval
unit 314 is coupled to an output of the feature extraction unit 310
to receive invisible junction feature descriptions, and to the
database 110 to perform queries and receive results. The possible
matching documents, points and viewing regions are output by the
feature retrieval unit 314. The operation of the feature retrieval
unit 314 is described in detail below with reference to FIG. 8.
The geometric estimation unit 316 is software and routines for
sorting the candidate documents output by the feature retrieval
unit 314 and selecting a document that includes the largest number
of feature points consistent with the geometric constraint. The
geometric estimation unit 316 is adapted for communication with the
feature retrieval unit 314 and has an output that provides a
document, a point and a viewing region that matches the input image
patch. The operation of the geometric estimation unit 316 is
described in detail below with reference to FIG. 9A.
Referring now to FIG. 4, a functional flow diagram of an embodiment
of a feature extraction and recognition system 108 in accordance
with the present invention is shown. The feature extraction and
recognition system 108 includes two operational modes: a first mode
in which the original electronic document is rendered and input and
a feature index is stored for the invisible junction features from
the input; and a second mode in which an image patch is input, a
feature description is created and used to retrieve a corresponding
document, and the matching document, point and viewing region are
output. For convenience and
ease of understanding like reference numerals are used for like
components having the same or similar functionality as that
described above for other embodiments. This second embodiment of
the system 108 includes the feature extraction unit 310, the
feature indexing unit 312, the feature retrieval unit 314 and the
geometric estimation unit 316.
FIG. 4 illustrates the flow of information between the feature
extraction unit 310, the feature indexing unit 312, the feature
retrieval unit 314 and the geometric estimation unit 316. In
particular, the two modes of operation are shown with distinct
couplings 420, 430. For example, the first mode is shown with the
path of double lines 420 in which the original electronic document
402 is rendered and input via signal line 420 to the feature
extraction unit 310. The feature extraction unit 310 outputs
feature descriptions for the invisible junctions in the document
402 to the feature indexing unit 312 via signal line 420 and the
feature indexing unit 312 creates indexes and stores them in the
database 110. The second mode is shown with the path of signal
lines 430 in which a patch of video data or a video frame 404 is
input via signal line 430 to the feature extraction unit 310. The
feature extraction unit 310 outputs feature descriptions for the
invisible junctions in the video frame 404 to the feature retrieval
unit 314 via signal line 430. The feature retrieval unit 314
queries the database 110 via signal line 430 using the feature
descriptions and outputs a list of documents having matching
features to the geometric estimation unit 316 via signal line 430.
The geometric estimation unit 316 applies geometric constraints to
select a matching document, point and viewing region 406.
Invisible Junction Features
The present invention has created and utilizes a novel set of
features for document patch recognition. The feature points used
for document representation and retrieval are the junctions of the
intrinsic skeleton existing in document pages. Since these
junctions are not visible, i.e., they are in the blank space of
the text document or in regions with homogeneous colors, we refer to
them here as "invisible junctions" or "invisible junction
features". There are four key components in an invisible junction
feature: Location: the coordinate (x, y) of the feature point, with
the top-left corner of the document page as the origin;
Orientation: for setting up a local coordinate system for each
feature point when we compute the description of each feature point
using its neighborhood pixels; Scale: the size of the feature;
Description: the content of the feature.
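For illustration only, these four components could be carried in a
record like the following Python sketch; the class and field names
are assumptions for exposition, not the patent's nomenclature.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class InvisibleJunction:
        x: float                 # Location: x coordinate (top-left origin)
        y: float                 # Location: y coordinate
        orientation: float       # Orientation: local x-axis angle, in radians
        scale: float             # Scale: size of the feature
        descriptor: np.ndarray   # Description: content of the feature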
Since the feature content will be determined by all of these key
factors, it is critical to have them stable and repeatable. For
recognition purposes, the descriptor (feature description) must also
be discriminative.
Feature Extraction
Referring now to both FIGS. 5A through 5D and 6, one embodiment of
a process for performing feature extraction is described. The
process begins by receiving 602 a document. In one embodiment, the
feature extraction unit 310 receives 602 an ideal model of an
electronic document 402 and renders the model as an image. In
another embodiment, the feature extraction unit 310 receives 602 a
captured image of a printed document such as a video frame or web
camera image 404.
Next, the process determines 604 a location for one or more
invisible junctions. In one embodiment, the feature extraction unit
310 detects features at the junctions of the text document
skeleton. As shown in FIGS. 5A and 5B for this embodiment, the
feature extraction unit 310 binarizes the input image, and then
applies a distance transform to the binary image. An example input
image 501, the binarization 502 of the input image 501 and the
distance transform 503 of the binarization 502 are shown in FIG.
5A. Next, a skeleton 504 is detected as the boundary of the label
fields from the distance transformation. The skeleton 504 alternatively
can be the ridge lines of the distance field. In one embodiment,
ridge lines are detected as local maxima in one dimension. For the
example input of FIG. 5A, the distance transform 503 and skeleton
504 are shown in FIG. 5B. The features are detected as the junction
points 506 as shown in FIG. 5B. The junction points 506
alternatively can be the local maxima (in 2D) of the distance
transform fields, or the local maxima on the skeleton 504. The use
of invisible junctions is particularly advantageous because they
are stable and repeatable. The text document contains mostly
high-frequency image content in the text, which is easily
contaminated and hard to filter out without degrading the text. In
contrast,
the white space in the text page is mostly stable and contains
mostly DC components in the frequency domain. Noise in the white
space can be easily filtered out without affecting the feature
locations. While the distance transform field depends on the text
locations, which is desirable since the text content is eventually
used for classification, the locations of the invisible junction
points 506 are robust to noise since the text around each junction
is usually contaminated in similar ways.
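As a rough sketch of this junction-localization step (an assumption
for illustration, not the patent's implementation), OpenCV, SciPy
and scikit-image can stand in for the binarization, distance
transform and skeleton; junctions are taken here as local maxima of
the distance transform on the skeleton.

    # Minimal sketch of junction localization; library choices and
    # the helper name detect_junctions are illustrative assumptions.
    import cv2
    import numpy as np
    from scipy import ndimage
    from skimage.morphology import skeletonize

    def detect_junctions(gray):
        # Binarize with Otsu: text pixels become 0, white space becomes 1.
        _, binary = cv2.threshold(gray, 0, 1,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Distance transform: distance from each white-space pixel to text.
        dist = ndimage.distance_transform_edt(binary)
        # Skeleton of the white space (ridge lines of the distance field).
        skeleton = skeletonize(binary.astype(bool))
        # Junctions: local maxima of the distance field on the skeleton.
        local_max = dist == ndimage.maximum_filter(dist, size=5)
        points = np.argwhere(local_max & skeleton)  # (row, col) pairs
        return points, dist, skeleton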
The process next determines 606 a junction size or scale for the
invisible junction. In one embodiment, the distance transformation
value at each feature point 506 is intrinsic to the feature point
506. The scale of the feature point 506 is thus assigned as such
distance value multiplied by a factor α. As shown in FIG. 5C
in the expanded view 512, α is selected as equal to 2 for
this example. The content inside the circle 510 (centered at the
feature point 506 with radius equal to its scale) is invariant to
camera in-plane rotation and zoom (or distance to the paper
document when the camera optical axis is perpendicular to the
paper). Since the feature is local, i.e., the circle is small, the
feature is also robust to changes in camera viewing angles with
respect to the paper (out-of-plane rotation). In other words, while
global perspective distortion is apparent, locally such perspective
effect is very small, and the effect on the final feature
descriptor is negligible for a large range of camera out-of-plane
rotations. Like the location, the scale of the invisible junction
is stable and repeatable. The scale is computed based on the
skeleton, and is therefore stable. However, compared to location
and orientation, scale is the most subject to changes in the nearby
text. In one embodiment, this effect is reduced by applying
a weight that is inversely-proportional to the distance value when
computing the feature descriptors.
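Continuing the sketch above (same assumed helpers), the scale
assignment is then a lookup of the distance-transform value at each
junction, multiplied by α.

    # Scale = alpha x distance-transform value at the junction
    # (alpha = 2 in the example of FIG. 5C).
    def junction_scales(points, dist, alpha=2.0):
        return alpha * dist[points[:, 0], points[:, 1]]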
Next, the process determines 608 an orientation for the invisible
junction 506. In one embodiment, each feature point 506 is assigned
a local orientation. A 2D local coordinate system for each feature
point 506 is then determined with the feature location as the
origin and its local orientation as the x-axis. The feature
descriptor is computed based on this local coordinate system. To
derive the local orientation, the feature extraction unit 310
calculates a circle 511 centered at the feature point 506, with a
radius determined by the feature scale. This circle 511 intersects
the skeleton at several positions. The two positions that are
furthest apart (from among all pairs of positions on the circle
that intersect the skeleton) determine the local orientation of
this feature. If there are two pairs of positions with similar
distances, the feature extraction unit 310 selects the one that is
closer to a horizontal orientation. FIG. 5D illustrates an example
of estimation of feature orientation. The circle 511 centered at
the feature point 506 with radius equal to feature scale intersects
the skeleton 504 at three points A, B, and C. The two points A and
B have the largest distance, so the feature orientation is
assigned as the direction of the line segment AB. Since local
orientations are sensitive to noise, and some of the orientations
may even be outliers, the feature extraction unit 310 makes a
global orientation estimate. Specifically, feature points with
local orientations vote for a global orientation for the query
image. The voting is done by counting the number of local feature
orientations in 360 bins (one degree for each bin). The dominant
mode of the orientation histogram is then selected as the global
orientation. After voting, every feature point is re-assigned its
orientation to be consistent with the selected global orientation.
In one embodiment, the feature extraction unit 310 uses a soft
split scheme when counting the numbers for the histogram to ensure
that the histogram will smoothly change when disturbed by
noise.
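A rough sketch of the local orientation estimate and the 360-bin
global vote follows; it approximates the circle-skeleton
intersection by sampling the circle, and for brevity omits the
near-horizontal tie-break and the soft-split smoothing. The helper
names are assumptions.

    import numpy as np

    def local_orientation(point, scale, skeleton):
        cy, cx = point
        h, w = skeleton.shape
        # Sample the circle of radius equal to the feature scale.
        thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
        ys = np.clip(np.round(cy + scale * np.sin(thetas)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + scale * np.cos(thetas)).astype(int), 0, w - 1)
        on = skeleton[ys, xs]                     # circle/skeleton intersections
        pts = np.stack([ys[on], xs[on]], axis=1).astype(float)
        if len(pts) < 2:
            return 0.0
        # The farthest-apart pair of intersections defines the orientation.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        dy, dx = pts[i] - pts[j]
        return float(np.arctan2(dy, dx)) % np.pi  # line direction, mod pi

    def global_orientation(orientations):
        # Vote local orientations into 360 one-degree bins; the dominant
        # mode of the histogram is the global orientation.
        deg = np.degrees(np.asarray(orientations)) % 360
        hist, _ = np.histogram(deg, bins=360, range=(0, 360))
        return float(np.radians(hist.argmax()))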
For text patches, the global orientation is stable since there is
indeed a dominant orientation, which is also true for most other
non-text documents. The present invention can also be adjusted to
achieve a tradeoff between pure local orientation and pure global
orientation. Specifically, for each feature point, the feature
extraction unit 310 can aggregate orientations from its neighbors
(as opposed to all feature points in the same image) to
determine its orientation. This will resolve the issue that in some
languages, such as Chinese or Japanese, the document page could
contain both vertical and horizontal text lines.
The orientation of a junction is determined by the skeleton, which
is stable as discussed above, and is therefore also stable. This is
in contrast to the prior art SIFT, where the local orientation is
determined by gradients of pixels around the feature point. It is
well known that the gradient operation amplifies noise, and
therefore makes the feature orientation sensitive to noise. Our local
orientation is derived from the skeleton and is much more stable.
The global voting step makes the feature orientation even more
stable, as it utilizes the fact that the camera is looking at a
planar page, and also the fact that such page has some dominant
orientation in its invisible skeleton.
Next, the process creates 610 a descriptor or feature description
for the invisible junction. Referring now also to FIG. 5E, once the
location, scale and orientation are determined for a feature, the
feature extraction unit 310 assigns a 2D coordinate frame 516 for
each feature point 506, and computes its descriptor 518 inside such
coordinate frame 516. For each feature, the feature extraction unit
310 computes a square with edge length equal to two times the
feature scale. This square is divided into 4×4 cells. In an
alternate embodiment, the feature extraction unit 310 can use
concentric circles to construct the cells. Inside each cell, the
feature extraction unit 310 accumulates a weighted gradient
orientation histogram that has 8 bins (i.e., 8 directions). The
gradient can be calculated by several techniques including the well
known Roberts or Sobel operators. The feature descriptor is the
concatenation of such orientation histograms, resulting in a
128-dimensional feature vector. This feature vector is normalized
to accommodate linear lighting changes.
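The descriptor might be sketched as below; Sobel gradients are used
(one of the two operators named above), and for brevity the patch is
not first rotated into the local coordinate frame 516.

    import cv2
    import numpy as np

    def junction_descriptor(gray, point, scale):
        cy, cx = point
        half = int(round(scale))  # square edge = 2 x scale, so half-edge = scale
        if half < 4 or cy - half < 0 or cx - half < 0:
            return None           # degenerate scale or too close to the border
        patch = gray[cy - half:cy + half, cx - half:cx + half].astype(np.float32)
        if patch.shape != (2 * half, 2 * half):
            return None
        gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % (2 * np.pi)
        desc = np.zeros((4, 4, 8), dtype=np.float32)  # 4x4 cells, 8 directions
        cell = patch.shape[0] // 4
        for i in range(4):
            for j in range(4):
                m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
                a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
                # Weighted gradient orientation histogram for this cell.
                desc[i, j], _ = np.histogram(a, bins=8, range=(0, 2 * np.pi),
                                             weights=m)
        desc = desc.ravel()                           # 4 x 4 x 8 = 128 dims
        norm = np.linalg.norm(desc)
        return desc / norm if norm > 0 else desc      # lighting normalization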
In one embodiment, the feature extraction unit 310 applies a
dimension reduction scheme using Principal Component Analysis
(PCA), and retains the first 14 dimensions. While this might
reduce the discriminative power of the features, it helps reduce
the noise. For this embodiment, since the cells near the center
are mostly blank, the dimension reduction is more sensible. This
advantageously produces features that are intrinsically
low-dimensional, considering that for each class there is only one
prototype in our case. The descriptor of the present invention is
particularly advantageous because it is discriminative and
repeatable. Repeatability is a result of the repeatability of
feature location, orientation and scale. As can be seen, the
content of each feature point contains pixels from three or more
words in the text document. These contents from different words are
put together with some specific layout, making the descriptor very
discriminative, as there is little chance for such a feature point
to be duplicated in other places, with the same nearby words being
put together in the same layout. The quantization (4×4 cells and 8
directions) also makes the descriptor robust to noise.
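A minimal sketch of this PCA step, with scikit-learn as an assumed
stand-in for the patent's unspecified implementation.

    from sklearn.decomposition import PCA

    def reduce_descriptors(descriptors, n_dims=14):
        # descriptors: (num_features, 128) array of raw descriptors.
        pca = PCA(n_components=n_dims)     # keep the first 14 components
        return pca.fit_transform(descriptors), pca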
Finally, the process determines 612 whether there are more
invisible junctions in the document. If so, the method selects an
unprocessed junction and continues in step 604 to process it. If
there are no more invisible junctions, the processing of the
document is complete and ends.
Feature Indexing
As was noted above, the feature indexing unit 312 generates one or
more indices of invisible junction feature descriptors and stores
them in the database 110. For each model image, the feature
indexing unit 312 extracts the feature points and stores them in
the database 110. The feature indexing unit 312 also stores
metadata associated with each feature point, including location (x,
y), orientation, scale, feature descriptor and a unique ID of the
document page that contains this feature point. In order to allow
fast real-time matching and retrieval, the feature indexing unit
312 also indexes this data for efficient storage in and retrieval
from the database 110.
Referring now to FIG. 7A, one embodiment of a method for performing
feature indexing in accordance with the present invention is shown.
The process begins by creating 702 a quantization tree. In one
embodiment, the quantization tree is trained by a set of training
features using hierarchical clustering. The training set is
recursively divided into K clusters. For a quantization tree with K
branches and L levels, there are K^L leaf nodes, i.e., the
vocabulary size is K^L. For clustering, the feature indexing
unit 312 uses the k-means algorithm with multiple random
initializations. At each node of the tree, the feature indexing
unit 312 records the cluster center given by the points belonging
to this node. Each node of the quantization tree is also associated
with a weight, using term frequency-inverse document frequency.
quantization tree partitions the feature space in a hierarchical
way. Once the quantization tree is created, it is stored 704 in the
database 110. Next, a feature point is retrieved or received 706
from the feature extraction unit 310, and is pushed down the tree
and added 708 to the database. At each level of the tree, the
feature point chooses the branch with a cluster center closest to
the feature point. Once the feature point reaches the leaf node, an
entry for this feature point is inserted into the hit list of this
leaf node, recording the page ID and coordinates (x, y) of this
feature point. If the page ID is already present in the hit list,
the feature indexing unit 312 appends the coordinates to that
existing entry, and increments the frequency of the appearance of
that page ID in the hit list. Since the feature point may be at or
near the boundary between two clusters, the feature indexing
unit 312 uses a soft split when pushing a feature point down the
quantization tree. However, this results in a large number of leaf
nodes hit by such a feature point. As a trade-off, the feature
indexing unit 312 only inserts the feature point into the closest k
(k=2 for now) leaf nodes.
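A condensed sketch of the quantization tree follows, with
scikit-learn's k-means doing the hierarchical clustering; the
tf-idf weights and the soft split are omitted for brevity, and the
class and method names are assumptions.

    import numpy as np
    from collections import defaultdict
    from sklearn.cluster import KMeans

    class QuantTree:
        def __init__(self, features, K=10, L=3, level=0):
            self.children, self.centers = None, None
            self.hits = defaultdict(list)  # leaf hit list: page_id -> [(x, y)]
            if level < L and len(features) >= K:
                km = KMeans(n_clusters=K, n_init=5).fit(features)
                self.centers = km.cluster_centers_
                self.children = [QuantTree(features[km.labels_ == k], K, L,
                                           level + 1) for k in range(K)]

        def insert(self, desc, page_id, xy):
            node = self
            while node.children is not None:  # descend to the closest branch
                k = int(np.argmin(np.linalg.norm(node.centers - desc, axis=1)))
                node = node.children[k]
            node.hits[page_id].append(xy)     # record page ID and coordinates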
In an alternate embodiment, the feature indexing unit 312 does not
store the original feature descriptors. Once the feature indexing
unit 312 quantizes the feature space, it can approximate the
feature descriptor by the index of its residing cell in the
quantization space. For each feature, the feature indexing unit 312
only stores the document page ID that contains this feature, as
well as the coordinate of the feature inside the document page. The
storage space required is thereby dramatically reduced.
Referring now to FIG. 7B, another embodiment of a method for
performing feature indexing in accordance with the present
invention is shown. In this embodiment, the feature indexing unit
312 uses the traditional k-d tree to organize 722 the feature
points. In another embodiment, the traditional k-d tree can be
modified for fast disk I/O. The k-d tree
partitions the feature space into cells where each cell contains
one or more feature points (actually pointers to feature points).
During k-nearest neighbor search, it is often required to
back-track the tree. The feature indexing unit 312 uses a
depth-first linearization of the tree to order the feature points
in such a way that memory/disk locality during back-tracking is
preserved. As a result, the cache miss rate is reduced (the cache
could be the CPU cache if the features reside in main memory, or
the memory cache if the features reside on an external hard disk
drive), and the speed performance is increased. Then the feature indexing
unit 312 uses PCA to reduce the feature dimension 724, or more
exactly, to find the useful dimensions. Even though our data is
intrinsically low dimensional, the low-dimensional space needs to
be estimated so that the resulting k-d tree is well-balanced. The
significance of such dimension reduction is not only to meet the
low-dimension requirement, but also to order the feature dimensions
in such a way that the space partition is most effective. In other
words, the space cut is ordered 726 in such a way that the
dimension with the most energy will be cut first, so that the system
108 can achieve a k-d tree that is not only balanced, but also has
cell boundaries that are stable (i.e., small noise will not
disturb which cell a feature belongs to).
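A compact sketch of this alternative indexing: PCA orders the
dimensions by energy so the dimension with the most energy is cut
first, and SciPy's cKDTree stands in for the (unmodified) k-d tree;
the depth-first linearization for disk locality is beyond this
sketch.

    from scipy.spatial import cKDTree
    from sklearn.decomposition import PCA

    def build_kdtree(descriptors, n_dims=14):
        pca = PCA(n_components=n_dims)            # components sorted by variance,
        reduced = pca.fit_transform(descriptors)  # so high-energy dims come first
        return cKDTree(reduced), pca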
Feature Retrieval and Geometric Estimation
Referring now to FIG. 8, an embodiment of a method for feature
retrieval in accordance with the present invention is shown.
Where indexing is done in accordance with the first embodiment
described above with reference to FIG. 7A, the feature retrieval
unit 314 sends the feature points detected in the query image down
the quantization tree, and collects the hit lists at the leaf nodes.
This produces a table of candidate document pages, along with a
list of coordinates (x, y) for each candidate page. This can be
sent to the geometric estimation unit 316 for further processing.
In the alternate embodiment of indexing in FIG. 7B, the retrieval
process begins with the feature retrieval unit 314 receiving 802 a
feature descriptor from the feature extraction unit 310. The
feature retrieval unit 314 performs pattern recognition based on
local features. The feature retrieval unit 314 searches for the
page document whose feature points best correspond to the feature
points of the given query image patch. To establish the
correspondence (in other words to recognize), the feature retrieval
unit 314 recognizes each feature point in the given image. More
specifically, the feature retrieval unit 314 searches 804 the
database 110 for the k nearest neighbors (kNN) of a feature point
from the query image. The collection of all such points from the
database is then divided 806 into sets where points in one set
share the same document page ID (recorded as meta-data with the
feature point in the database). So for each candidate document
page, the feature retrieval unit 314 produces a set of feature
correspondences. Note that many of such correspondences are
outliers, since for each query point there is at most one true
correspondence in its k nearest neighbors.
Those skilled in the art will recognize that various values of k
can be used. In one example embodiment, k=10. Other values of k can
be used in order to reduce the query time. An alternate method also
monitors the distance between the query feature and the current
neighbor. Once there is a big jump in the distance, the feature
retrieval unit 314 returns k<10 neighbors as the following
neighbors are probably outliers (i.e., the true correspondence is
probably already in the current k<10 points).
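Retrieval under this indexing might look like the sketch below,
including the distance-jump cutoff; the meta list mapping each
indexed feature to (page ID, x, y) is an assumed structure, and
build_kdtree is the helper sketched above.

    from collections import defaultdict

    def retrieve(tree, pca, query_descs, meta, k=10, jump=2.0):
        by_page = defaultdict(list)       # page_id -> feature correspondences
        reduced = pca.transform(query_descs)
        for qi, q in enumerate(reduced):
            dists, idxs = tree.query(q, k=k)
            keep = len(dists)
            for j in range(1, len(dists)):
                # Stop at a big jump in distance: the remaining neighbors
                # are probably outliers (at most one true correspondence).
                if dists[j] > jump * max(dists[j - 1], 1e-9):
                    keep = j
                    break
            for i in idxs[:keep]:
                page_id, x, y = meta[i]
                by_page[page_id].append((qi, (x, y)))
        return by_page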
In yet another embodiment, the feature retrieval unit 314 may
include scoring using tree intersection. For each query image, the
feature retrieval unit 314 builds a tree representation given the
paths that the query points have gone through. The feature
retrieval unit 314 constructs a similar tree for each candidate
page. The similarity of the query image and a model image is then
computed based on the intersection of the two trees. Thereafter the
candidates can be ranked by their tree-intersection scores.
FIG. 9 is a flow chart of an embodiment of a method for geometric
estimation in accordance with the present invention. Once all the
candidate page identifications have been identified by the feature
retrieval unit 314, they are sent to the geometric estimation unit
316, which ranks all candidate document page identifications to
generate the final query result. The geometric estimation unit 316
begins by sorting 902 the candidate documents using the number of
feature points that each candidate document page has. Then the
geometric estimation unit 316 ranks the candidates using geometric
constraints. For example, the geometric estimation unit 316 selects
the document that contains the largest number of feature points
consistent with the geometric constraint that the input image is of
a plane and that there is therefore a transformation between the
paper plane and the camera imaging plane. Finally, the geometric estimation
unit 316 outputs 906 the matching document identification and page,
point and viewing region.
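As one assumed concrete realization of this plane-to-plane
constraint, a RANSAC homography (via OpenCV) can count the
geometrically consistent correspondences for each candidate page,
after sorting the candidates by their number of feature points.

    import cv2
    import numpy as np

    def rank_candidates(by_page, query_xy):
        # by_page: page_id -> [(query_index, (x, y) on the model page)],
        # as produced by the retrieval sketch above; query_xy holds the
        # (x, y) location of each query feature point.
        best_page, best_inliers = None, 0
        for page_id, corr in sorted(by_page.items(), key=lambda c: -len(c[1])):
            if len(corr) < 4:
                continue                     # a homography needs 4 points
            src = np.float32([query_xy[qi] for qi, _ in corr])
            dst = np.float32([xy for _, xy in corr])
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            inliers = int(mask.sum()) if mask is not None else 0
            if inliers > best_inliers:
                best_page, best_inliers = page_id, inliers
        return best_page, best_inliers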
The foregoing description of the embodiments of the present
invention has been presented for the purposes of illustration and
description. It is not intended to be exhaustive or to limit the
present invention to the precise form disclosed. Many modifications
and variations are possible in light of the above teaching. It is
intended that the scope of the present invention be limited not by
this detailed description, but rather by the claims of this
application. As will be understood by those familiar with the art,
the present invention may be embodied in other specific forms
without departing from the spirit or essential characteristics
thereof. Likewise, the particular naming and division of the
modules, routines, features, attributes, methodologies and other
aspects are not mandatory or significant, and the mechanisms that
implement the present invention or its features may have different
names, divisions and/or formats. Furthermore, as will be apparent
to one of ordinary skill in the relevant art, the modules,
routines, features, attributes, methodologies and other aspects of
the present invention can be implemented as software, hardware,
firmware or any combination of the three. Also, wherever a
component, an example of which is a module, of the present
invention is implemented as software, the component can be
implemented as a standalone program, as part of a larger program,
as a plurality of separate programs, as a statically or dynamically
linked library, as a kernel loadable module, as a device driver,
and/or in every and any other way known now or in the future to
those of ordinary skill in the art of computer programming.
Additionally, the present invention is in no way limited to
implementation in any specific programming language, or for any
specific operating system or environment. Accordingly, the
disclosure of the present invention is intended to be illustrative,
but not limiting, of the scope of the present invention, which is
set forth in the following claims.
* * * * *