Smart Self Calibrating Camera System

Zheng; Ying; et al.

Patent Application Summary

U.S. patent application number 17/503362 was filed with the patent office on 2021-10-18 for a smart self calibrating camera system and published on 2022-03-24. This patent application is currently assigned to AiFi Corp. The applicant listed for this patent is AiFi Corp. Invention is credited to Steve Gu, Mahmoud Hassan, Stuart Kyle Neubarth, Hector Sanchez, Juan Ramon Terven and Ying Zheng.

Publication Number: 20220092822
Application Number: 17/503362
Family ID: 1000005925518
Filed: 2021-10-18
Published: 2022-03-24

United States Patent Application 20220092822
Kind Code A1
Zheng; Ying; et al. March 24, 2022

Smart Self Calibrating Camera System

Abstract

The present invention describes a system for calibrating a plurality of cameras in an area. The system functions by using certain patterns with visible or invisible properties. In addition, the system implements automatic re-calibration in a specific way to reduce human intervention, cost and time.


Inventors: Zheng; Ying; (Santa Clara, CA); Sanchez; Hector; (Santa Clara, CA); Gu; Steve; (Santa Clara, CA); Neubarth; Stuart Kyle; (Mountain View, CA); Hassan; Mahmoud; (Cambridge, MA); Terven; Juan Ramon; (Queretaro, MX)
Applicant: AiFi Corp, Santa Clara, CA, US
Assignee: AiFi Corp, Santa Clara, CA

Family ID: 1000005925518
Appl. No.: 17/503362
Filed: October 18, 2021

Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
17028388              Sep 22, 2020
17503362              Oct 18, 2021

Current U.S. Class: 1/1
Current CPC Class: G06T 2207/30204 20130101; G06T 2207/10024 20130101; G06T 7/80 20170101; G06T 2207/20084 20130101
International Class: G06T 7/80 20060101 G06T007/80

Claims



1. A method for calibrating a plurality of cameras in an area, comprising: Detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, and wherein dimensions of the specific body areas of the person are measured and recorded; Capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images.

2. The method for calibrating a plurality of cameras in an area of claim 1, wherein at least one of the feature points is not visible to human eyes and RGB cameras but is visible to infrared cameras.

3. The method for calibrating a plurality of cameras in an area of claim 1, wherein the feature points are lines, dots or polygons.

4. The method for calibrating a plurality of cameras in an area of claim 1, wherein a user can be manually involved in the calibrating.
Description



CROSS-REFERENCE TO RELATED APPLICATION

[0001] The present application is a continuation of U.S. Non-Provisional patent application Ser. No. 17/028,388, entitled "SMART SELF CALIBRATING CAMERA SYSTEM" and filed on Sep. 22, 2020. The present application therefore claims the benefit of U.S. application Ser. No. 17/028,388, filed Sep. 22, 2020, which is incorporated by reference herein in its entirety.

BACKGROUND OF THE APPLICATION

[0002] This application relates to systems, methods, devices, and other techniques for video camera self-calibration based on video information received from more than one video camera.

[0003] Methods and apparatus for calibrating cameras in an area are common. Some methods use reference objects and manual procedures to calibrate the cameras. However, these methods require human intervention and cost both time and money.

[0004] Therefore, it is desirable to have systems and methods that enable self-calibration of the cameras to save time and effort.

SUMMARY OF THE INVENTION

[0005] This application relates to systems, methods, devices, and other techniques for video camera self-calibration based on video information received from more than one video camera. In some embodiments, the system uses people as calibration markers. Instead of finding feature matches between cameras, the system matches one or more persons between cameras. It then identifies certain body key points of the one or more persons and matches these key points. In addition, the system implements automatic re-calibration in a specific way to reduce human intervention, cost and time. In some embodiments, the system extracts detections from each camera, synchronizes frames using time stamps, and clusters one or more persons using re-identification (re-id) features. The system then aggregates key points from the one or more persons over time for each camera, finds matching same-time, same-person key points across camera pairs, and runs uncalibrated structure-from-motion on the key-point matches. Finally, the system aligns and upgrades the scale using the persons' head and feet key points or the known camera height.
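Purely as an illustration, the pipeline above might be orchestrated as in the following sketch. Every helper named here (detect_people, synchronize_by_timestamp, cluster_by_reid, camera_pairs, structure_from_motion, align_scale) is a hypothetical placeholder, not an API disclosed by the application:

```python
# Hypothetical orchestration of the people-based calibration pipeline.
# All lower-level helpers are assumed placeholders, not disclosed APIs.
from collections import defaultdict

def calibrate_from_people(cameras, frames_by_camera):
    # 1. Extract per-camera person detections with 2D body key points.
    detections = {cam: [detect_people(f) for f in frames_by_camera[cam]]
                  for cam in cameras}

    # 2. Synchronize frames across cameras using their time stamps.
    synced = synchronize_by_timestamp(detections)

    # 3. Cluster detections into person identities using re-id features.
    tracks = cluster_by_reid(synced)

    # 4. Aggregate key points over time and collect (same time, same
    #    person) correspondences for every camera pair.
    matches = defaultdict(list)
    for timestamp, person_id, keypoints_by_camera in tracks:
        for cam_a, cam_b in camera_pairs(keypoints_by_camera):
            matches[(cam_a, cam_b)].append(
                (keypoints_by_camera[cam_a], keypoints_by_camera[cam_b]))

    # 5. Run uncalibrated structure-from-motion on the correspondences.
    poses = structure_from_motion(matches)

    # 6. Fix the global scale from head/feet key points (known person
    #    height) or from a known camera mounting height.
    return align_scale(poses, tracks)
```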

[0006] In some embodiments, the system implements a self-healing scheme to recalibrate after situations such as (but not limited to) accidental or deliberate camera position changes, changes of focus or aspect ratio, or camera upgrades.

[0007] In some embodiments, when the system uses this self-healing scheme, it uses multi-camera tracking to match people and key points. The system then triangulates key points, projects their coordinates back into the cameras, and monitors the accumulated error over time. If the accumulated error grows large, re-calibration is needed, and the system runs the people-based calibration.
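A minimal sketch of one such monitoring pass follows, assuming a `system` object that exposes tracking, triangulation, and projection; all names and the threshold are illustrative assumptions, not from the application:

```python
import math

def check_and_recalibrate(system, pixel_error_threshold=5.0):
    """One self-healing pass: triangulate matched key points, measure
    re-projection error, and re-calibrate if drift is detected.
    (`system` and its methods are assumed, not a disclosed API.)"""
    accumulated_error, count = 0.0, 0

    # Match people and their key points across cameras.
    for match in system.multi_camera_tracking():
        # Triangulate the matched key point and project it back into
        # every camera that observed it.
        point_3d = system.triangulate(match)
        for cam, (u_obs, v_obs) in match.observations:
            u_proj, v_proj = system.project(cam, point_3d)
            accumulated_error += math.hypot(u_proj - u_obs, v_proj - v_obs)
            count += 1

    # A persistently large mean error suggests a camera moved, changed
    # focus or aspect ratio, or was upgraded; re-run the calibration.
    if count and accumulated_error / count > pixel_error_threshold:
        system.run_people_based_calibration()
```

Run periodically, a check of this kind keeps the rig calibrated without human intervention.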

[0008] In some implementations, the method synchronizes system time for the plurality of cameras and then detects at least one feature point of an object that is within range of the plurality of cameras, wherein the feature points are set up in a pre-determined fashion, are configured to be within range of the plurality of cameras, and are configured to be detected by color or infrared means, and wherein each feature point is encoded with its own location information, which is decoded and recorded during a duration of time. The method then calibrates the plurality of cameras by using the location information of the feature points gathered during that duration of time.
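Assuming the decoded location information yields each point's world coordinates, one camera's pose can then be recovered with a standard perspective-n-point solve. The OpenCV-based sketch below is one possible realization, not a solver specified by the application:

```python
import numpy as np
import cv2

def pose_from_encoded_points(image_points, decoded_world_points,
                             camera_matrix, dist_coeffs=None):
    """Estimate one camera's rotation and translation from feature points
    whose world locations were decoded from the points themselves."""
    world = np.asarray(decoded_world_points, dtype=np.float64)  # Nx3
    image = np.asarray(image_points, dtype=np.float64)          # Nx2
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assumption: negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(world, image, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; need at least 4 non-degenerate points")
    return rvec, tvec  # camera pose in the decoded world frame
```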

[0009] In some embodiments, the feature points are encoded with color or depth information.

[0010] In some embodiments, the method further comprises: Capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images, wherein the color or depth information is used for the matching of the same feature points, and wherein the first one and the second one of the plurality of cameras are configured to pan, tilt and zoom. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.
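For two cameras observing the same matched feature points, the relative pose can be recovered (up to scale) from the essential matrix. The following sketch uses OpenCV as one implementation choice; `matched_points_a` and `matched_points_b` are assumed Nx2 arrays of corresponding pixel coordinates:

```python
import numpy as np
import cv2

def relative_pose_from_matches(matched_points_a, matched_points_b,
                               camera_matrix):
    """Recover the second camera's pose relative to the first from
    matched feature points (scale remains undetermined)."""
    pts_a = np.asarray(matched_points_a, dtype=np.float64)
    pts_b = np.asarray(matched_points_b, dtype=np.float64)

    # Robustly estimate the essential matrix from the correspondences.
    E, _ = cv2.findEssentialMat(pts_a, pts_b, camera_matrix,
                                method=cv2.RANSAC, threshold=1.0)

    # Decompose it into relative rotation and translation. Translation
    # is recovered only up to scale; the scale can later be fixed from
    # recorded body dimensions or a known camera height.
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, camera_matrix)
    return R, t
```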

[0011] In some embodiments, at least one of the feature points is not visible to human eyes and RGB cameras but is visible to infrared cameras.

[0012] In some embodiments, the feature points are lines, dots or polygons.

[0013] In some embodiments, a user can be manually involved in the calibrating. In some embodiments, the object is configured to move. In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the feature points.

[0014] In some embodiments, the invention is related to a method for calibrating a plurality of cameras in an area, comprising: Detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, and wherein dimensions of the specific body areas of the person are measured and recorded; Capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images.

[0015] In some embodiments, at least one of the feature points is not visible to human eyes and RGB cameras. In some embodiments, the at least one feature point is visible to infrared cameras. In some embodiments, the feature points are lines, dots or polygons. In some embodiments, a user can be manually involved in the calibrating.

[0016] In some embodiments, the invention is related to a method for calibrating a plurality of cameras in an area, comprising: Detecting patterns in the area, wherein the locations, shapes and colors of the patterns are pre-determined, wherein the patterns are configured to contain encoded coordinate information, and wherein the patterns are configured to be detected by optical or infrared means; Capturing a first set of one or more images of the patterns by a first one of the plurality of cameras and a second set of one or more images of the patterns by a second one of the plurality of cameras; Decoding the encoded coordinate information; and Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same pattern between the first set of one or more images and the second set of one or more images and utilizing the decoded coordinate information.
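Patterns carrying encoded coordinates behave much like fiducial markers. Purely as an illustration, the sketch below stands in ArUco markers for the patterns, with an assumed `marker_world_coords` lookup table playing the role of the decoded coordinate information; the application does not prescribe ArUco or OpenCV:

```python
import numpy as np
import cv2

# ArUco stands in for the encoded patterns; each marker ID indexes a
# table of pre-surveyed world coordinates (the "decoded" information).
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def pose_from_patterns(gray_image, marker_world_coords,
                       camera_matrix, dist_coeffs):
    # Detect pattern corners and decode their IDs in one pass.
    # (OpenCV >= 4.7 API; older versions use cv2.aruco.detectMarkers.)
    detector = cv2.aruco.ArucoDetector(ARUCO_DICT)
    corners, ids, _ = detector.detectMarkers(gray_image)
    if ids is None:
        return None

    image_pts, world_pts = [], []
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if int(marker_id) in marker_world_coords:
            # Use the pattern center as the matched point.
            image_pts.append(marker_corners.reshape(4, 2).mean(axis=0))
            world_pts.append(marker_world_coords[int(marker_id)])

    if len(image_pts) < 4:
        return None  # a PnP solve needs at least four points
    ok, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, dtype=np.float64),
                                  np.asarray(image_pts, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```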

[0017] In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the feature points. In some embodiments, translucent stickers covered with infrared ink are used to mark the patterns. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

[0018] These and other aspects, their implementations and other features are described in detail in the drawings, the description and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] FIG. 1 shows a method for self-calibrating a plurality of cameras in an area.

[0020] FIG. 2 shows another method for calibrating a plurality of cameras in an area.

[0021] FIG. 3 shows a third method for calibrating a plurality of cameras in an area.

DETAILED DESCRIPTION OF THE INVENTION

[0022] FIG. 1 shows a method 100 for self-calibrating a plurality of cameras in an area. In some implementations, the method comprises a step 105 of synchronizing system time for the plurality of cameras.

[0023] In some embodiments, the method comprises a step 110 of detecting at least one feature point of an object that is within range of the plurality of cameras, wherein the feature points are set up in a pre-determined fashion, are configured to be within range of the plurality of cameras, and are configured to be detected by color or infrared means, and wherein each feature point is encoded with its own location information, which is decoded and recorded during a duration of time.

[0024] In some embodiments, the method comprises a step 115 of calibrating the plurality of cameras by using the location information of the feature points gathered during the duration of time.

[0025] In some embodiments, the method comprises a step 120 of capturing a first set of one or more images of the feature points of the object along its route by a first one of the plurality of cameras and a second set of one or more images of the feature points of the object along its route by a second one of the plurality of cameras, wherein a time stamp is recorded for each capture.

[0026] In some embodiments, the method comprises a step 125 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points on the object between the first set of one or more images of the object and the second set of one or more images of the object at the same time stamp.
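Steps 120 and 125 hinge on pairing captures by time stamp. A minimal sketch, assuming each capture is a `(timestamp_seconds, keypoints)` tuple; the 50 ms pairing tolerance is an illustrative choice, not from the application:

```python
def pair_captures_by_timestamp(captures_a, captures_b, tolerance_s=0.05):
    """Yield (keypoints_a, keypoints_b) for captures taken by two
    cameras at the same (synchronized) time stamp."""
    a = sorted(captures_a, key=lambda c: c[0])
    b = sorted(captures_b, key=lambda c: c[0])
    i = j = 0
    while i < len(a) and j < len(b):
        (t_a, kp_a), (t_b, kp_b) = a[i], b[j]
        if abs(t_a - t_b) <= tolerance_s:
            yield kp_a, kp_b   # same time stamp: usable for matching
            i += 1
            j += 1
        elif t_a < t_b:
            i += 1             # camera A frame has no partner; skip it
        else:
            j += 1
```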

[0027] In some embodiments, the feature points are encoded with color or depth information. In some embodiments, the method further comprises: Capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras; and Calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images, wherein the color or depth information is used for the matching of the same feature points, and wherein the first one and the second one of the plurality of cameras are configured to pan, tilt and zoom. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

[0028] In some embodiments, at least one of the feature points is not visible to human eyes and RGB cameras but is visible to infrared cameras.

[0029] In some embodiments, the feature points are lines, dots or polygons.

[0030] In some embodiments, a user can be manually involved in the calibrating. In some embodiments, the object is configured to move. In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the feature points. In some embodiments, the method comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

[0031] FIG. 2 shows a method 200 for self-calibrating a plurality of cameras in an area. In some implementations, the method comprises a step 205 of detecting feature points of a person, wherein the feature points are specific body areas of the person, wherein the feature points are within range of the plurality of cameras, and wherein dimensions of the specific body areas of the person are measured and recorded.

[0032] In some embodiments, the method comprises a step 210 of capturing a first set of one or more images of the feature points by a first one of the plurality of cameras and a second set of one or more images of the feature points by a second one of the plurality of cameras.

[0033] In some embodiments, the method comprises a step 215 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same feature points between the first set of one or more images and the second set of one or more images.
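Because the dimensions of the person's body areas are measured and recorded (step 205), they can fix the metric scale of an otherwise up-to-scale calibration. A sketch under that assumption, with `head_3d` and `feet_3d` taken to be triangulated positions in arbitrary reconstruction units:

```python
import numpy as np

def metric_scale_from_person(head_3d, feet_3d, measured_height_m):
    """Return the factor converting reconstruction units to meters,
    using the person's recorded head-to-feet dimension."""
    reconstructed = np.linalg.norm(np.asarray(head_3d, dtype=np.float64) -
                                   np.asarray(feet_3d, dtype=np.float64))
    return measured_height_m / reconstructed

# Multiplying every camera translation by this factor upgrades the whole
# rig to metric units; rotations are unaffected by scale.
```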

[0034] In some embodiments, at least one of the feature points is not visible to human eyes and RGB cameras. In some embodiments, the at least one feature point is visible to infrared cameras. In some embodiments, the feature points are lines, dots or polygons. In some embodiments, a user can be manually involved in the calibrating.

[0035] FIG. 3 shows another method 300 for calibrating a plurality of cameras in an area.

[0036] In some embodiments, the method comprises a step 305 of detecting patterns in the area, wherein the locations, shapes and colors of the patterns are pre-determined, wherein the patterns are configured to contain encoded coordinate information, and wherein the patterns are configured to be detected by optical or infrared means.

[0037] In some embodiments, the method comprises a step 310 of capturing a first set of one or more images of the patterns by a first one of the plurality of cameras and a second set of one or more images of the patterns by a second one of the plurality of cameras.

[0038] In some embodiments, the method comprises a step 315 of decoding the encoded coordinate information.

[0039] In some embodiments, the method comprises a step 320 of calibrating the first one of the plurality of cameras and the second one of the plurality of cameras by matching the same pattern between the first set of one or more images and the second set of one or more images and utilizing the decoded coordinate information.

[0040] In some embodiments, the method comprises a step 325 of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

[0041] In some embodiments, the object is a person. In some embodiments, one of the feature points is the person's head. In some embodiments, the position information of the same feature points is given as X and Y coordinates within an image. In some embodiments, the object is configured to move freely.

[0042] In some embodiments, the plurality of cameras are configured to move. In some embodiments, a neural network is configured to match and identify the feature points. In some embodiments, translucent stickers covered with infrared ink are used to mark the patterns. In some embodiments, the method further comprises a step of recalibrating a subset of the plurality of cameras after a time period or when any of the re-projection errors exceeds a certain value.

* * * * *

