Method and System for Creating a View-Angle Dependent 2D and/or 3D Image/Video Utilizing a Monoscopic Video Camera Array

Chen; Xuemin ;   et al.

Patent Application Summary

U.S. patent application number 13/077922 was filed with the patent office on 2011-03-31 and published on 2012-03-01 as publication number 20120050494, for a method and system for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array. The invention is credited to Chris Boross, Xuemin Chen, Jeyhan Karaoguz, and Nambi Seshadri.

Publication Number: 20120050494
Application Number: 13/077922
Family ID: 45696688
Publication Date: 2012-03-01

United States Patent Application 20120050494
Kind Code A1
Chen; Xuemin ;   et al. March 1, 2012

METHOD AND SYSTEM FOR CREATING A VIEW-ANGLE DEPENDENT 2D AND/OR 3D IMAGE/VIDEO UTILIZING A MONOSCOPIC VIDEO CAMERA ARRAY

Abstract

2D images and corresponding depth information are concurrently captured via an array of monoscopic sensing devices such as a monoscopic video camera array. The captured 2D images and the captured corresponding depth information are utilized to determine an image mapping function based on view angles. The captured 2D images and the captured corresponding depth information may be modified or adjusted to a given view angle through the determined image mapping function to compose a corresponding 3D image for the given view angle. Regression analysis may be performed to determine the image mapping function by fitting the captured 2D images and the captured corresponding depth information to known view angles of the monoscopic video camera array. 2D image data and corresponding depth information are determined for the given view angle utilizing the determined image mapping function so as to compose corresponding 2D and/or 3D images/video for the given view angle.


Inventors: Chen; Xuemin; (Rancho Santa Fe, CA) ; Seshadri; Nambi; (Irvine, CA) ; Karaoguz; Jeyhan; (Irvine, CA) ; Boross; Chris; (Sunnyvale, CA)
Family ID: 45696688
Appl. No.: 13/077922
Filed: March 31, 2011

Related U.S. Patent Documents

Application Number    Filing Date
61/377,867            Aug 27, 2010
61/439,283            Feb 3, 2011
61/439,193            Feb 3, 2011
61/439,274            Feb 3, 2011
61/439,130            Feb 3, 2011
61/439,290            Feb 3, 2011
61/439,119            Feb 3, 2011
61/439,297            Feb 3, 2011
61/439,201            Feb 3, 2011
61/439,209            Feb 3, 2011
61/439,113            Feb 3, 2011
61/439,103            Feb 3, 2011
61/439,083            Feb 3, 2011
61/439,301            Feb 3, 2011

Current U.S. Class: 348/48 ; 348/E13.074
Current CPC Class: G06T 19/20 20130101; G06T 15/205 20130101; H04N 13/122 20180501; G06T 2219/2016 20130101
Class at Publication: 348/48 ; 348/E13.074
International Class: H04N 13/02 20060101 H04N013/02

Claims



1. A method, comprising: in an array of monoscopic sensing devices comprising one or more image sensors and one or more depth sensors: concurrently capturing a plurality of two-dimensional images and corresponding depth information; determining a function for two-dimensional image data and for corresponding depth information, based on view angles, utilizing said captured plurality of two-dimensional images and said captured corresponding depth information; modifying said captured plurality of two-dimensional images and said captured corresponding depth information to a given view angle utilizing said determined function; and composing a three-dimensional image for said given view angle utilizing said modified plurality of two-dimensional images and said modified corresponding depth information.

2. The method of claim 1, comprising modeling said captured plurality of two-dimensional images and said captured corresponding depth information to said function in terms of view angles.

3. The method of claim 2, comprising performing regression analysis on said captured plurality of two-dimensional images and said captured corresponding depth information in terms of view angles for said modeling.

4. The method of claim 3, comprising matching said captured plurality of two-dimensional images and said captured corresponding depth information to view angles of a plurality of monoscopic sensing devices of said array of monoscopic sensing devices during said regression analysis.

5. The method according to claim 3, comprising determining said function based on said regression analysis.

6. The method according to claim 5, comprising determining two-dimensional image data and corresponding depth information for said given view angle based on said determined function.

7. The method according to claim 6, comprising composing a two-dimensional image for said given view angle utilizing said determined two-dimensional image data.

8. The method according to claim 7, comprising rendering said composed two-dimensional image for said given view angle.

9. The method according to claim 6, comprising composing said three-dimensional image for said given view angle utilizing said determined two-dimensional image data and said determined corresponding depth information.

10. The method according to claim 9, comprising rendering said composed three-dimensional image for said given view angle.

11. A system for processing signals, the system comprising: one or more processors and/or circuits for use in an array of monoscopic sensing devices comprising one or more image sensors and one or more depth sensors, wherein said one or more processors and/or circuits are operable to: concurrently capture a plurality of two-dimensional images and corresponding depth information; determine a function for two-dimensional image data and for corresponding depth information, based on view angles, utilizing said captured plurality of two-dimensional images and said captured corresponding depth information; modify said captured plurality of two-dimensional images and said captured corresponding depth information to a given view angle utilizing said determined function; and compose a three-dimensional image for said given view angle utilizing said modified plurality of two-dimensional images and said modified corresponding depth information.

12. The system according to claim 11, wherein said one or more circuits are operable to model said captured plurality of two-dimensional images and said captured corresponding depth information to said function in terms of view angles.

13. The system according to claim 12, wherein said one or more circuits are operable to perform regression analysis on said captured plurality of two-dimensional images and said captured corresponding depth information in terms of view angles for said modeling.

14. The system according to claim 13, wherein said one or more circuits are operable to match said captured plurality of two-dimensional images and said captured corresponding depth information to view angles of a plurality of monoscopic sensing devices of said array of monoscopic sensing devices during said regression analysis.

15. The system according to claim 13, wherein said one or more circuits are operable to determine said function based on said regression analysis.

16. The system according to claim 15, wherein said one or more circuits are operable to determine two-dimensional image data and corresponding depth information for said given view angle based on said determined function.

17. The system according to claim 16, wherein said one or more circuits are operable to compose a two-dimensional image for said given view angle utilizing said determined two-dimensional image data.

18. The system according to claim 17, wherein said one or more circuits are operable to render said composed two-dimensional image for said given view angle.

19. The system according to claim 16, wherein said one or more circuits are operable to compose said three-dimensional image for said given view angle utilizing said determined two-dimensional image data and said determined corresponding depth information.

20. The system according to claim 19, wherein said one or more circuits are operable to render said composed three-dimensional image for said given view angle.
Description



CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE

[0001] This patent application makes reference to, claims priority to, and claims benefit from U.S. Provisional Application Ser. No. 61/377,867, which was filed on Aug. 27, 2010.

[0002] This patent application makes reference to, claims priority to, and claims benefit from U.S. Provisional Application Ser. No. 61/439,283, which was filed on Feb. 3, 2011.

[0003] This application also makes reference to:
[0004] U.S. Patent Application Ser. No. 61/439,193 filed on Feb. 3, 2011;
[0005] U.S. patent application Ser. No. ______ (Attorney Docket No. 23461US03) filed on Mar. 31, 2011;
[0006] U.S. Patent Application Ser. No. 61/439,274 filed on Feb. 3, 2011;
[0007] U.S. patent application Ser. No. ______ (Attorney Docket No. 23462US03) filed on Mar. 31, 2011;
[0008] U.S. Patent Application Ser. No. 61/439,130 filed on Feb. 3, 2011;
[0009] U.S. patent application Ser. No. ______ (Attorney Docket No. 23464US03) filed on Mar. 31, 2011;
[0010] U.S. Patent Application Ser. No. 61/439,290 filed on Feb. 3, 2011;
[0011] U.S. patent application Ser. No. ______ (Attorney Docket No. 23465US03) filed on Mar. 31, 2011;
[0012] U.S. Patent Application Ser. No. 61/439,119 filed on Feb. 3, 2011;
[0013] U.S. patent application Ser. No. ______ (Attorney Docket No. 23466US03) filed on Mar. 31, 2011;
[0014] U.S. Patent Application Ser. No. 61/439,297 filed on Feb. 3, 2011;
[0015] U.S. patent application Ser. No. ______ (Attorney Docket No. 23467US03) filed on Mar. 31, 2011;
[0016] U.S. Patent Application Ser. No. 61/439,201 filed on Feb. 3, 2011;
[0017] U.S. Patent Application Ser. No. 61/439,209 filed on Feb. 3, 2011;
[0018] U.S. Patent Application Ser. No. 61/439,113 filed on Feb. 3, 2011;
[0019] U.S. patent application Ser. No. ______ (Attorney Docket No. 23472US03) filed on Mar. 31, 2011;
[0020] U.S. Patent Application Ser. No. 61/439,103 filed on Feb. 3, 2011;
[0021] U.S. patent application Ser. No. ______ (Attorney Docket No. 23473US03) filed on Mar. 31, 2011;
[0022] U.S. Patent Application Ser. No. 61/439,083 filed on Feb. 3, 2011;
[0023] U.S. patent application Ser. No. ______ (Attorney Docket No. 23474US03) filed on Mar. 31, 2011;
[0024] U.S. Patent Application Ser. No. 61/439,301 filed on Feb. 3, 2011; and
[0025] U.S. patent application Ser. No. ______ (Attorney Docket No. 23475US03) filed on Mar. 31, 2011.

[0026] Each of the above stated applications is hereby incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[0027] Certain embodiments of the invention relate to video processing. More specifically, certain embodiments of the invention relate to a method and system for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array.

BACKGROUND OF THE INVENTION

[0028] Digital video capabilities may be incorporated into a wide range of devices such as, for example, digital televisions, digital direct broadcast systems, digital recording devices, and the like. Digital video devices may provide significant improvements over conventional analog video systems in processing and transmitting video sequences with increased bandwidth efficiency.

[0029] Video content may be recorded in two-dimensional (2D) format or in three-dimensional (3D) format. In various applications such as, for example, DVD movies and digital TV, 3D video is often desirable because it appears more realistic to viewers than its 2D counterpart. A 3D video comprises a left view video and a right view video. A 3D video frame may be produced by combining left view video components and right view video components.
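For illustration only, the sketch below packs a left-view and a right-view image into a single half-width side-by-side stereoscopic frame, one common way to combine the two views into a 3D frame; the packing format, function name, and use of NumPy are assumptions, as the application does not prescribe a particular frame format.

```python
import numpy as np

def side_by_side_3d_frame(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack left-view and right-view images into one half-width
    side-by-side 3D frame (hypothetical packing, for illustration)."""
    if left.shape != right.shape:
        raise ValueError("left and right views must have the same shape")
    # Keep every other column of each view so the packed frame retains
    # the original width.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)
```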

[0030] Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings.

BRIEF SUMMARY OF THE INVENTION

[0031] A system and/or method is provided for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array, substantially as illustrated by and/or described in connection with at least one of the figures, as set forth more completely in the claims.

[0032] These and other advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0033] FIG. 1 is a diagram illustrating an exemplary video communication system that is operable to create a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array, in accordance with an embodiment of the invention.

[0034] FIG. 2 is a diagram that illustrates mapping of 2D image data to different image planes depending on view angles and lighting conditions, in accordance with an embodiment of the invention.

[0035] FIG. 3 is a diagram that illustrates producing of a 2D monoscopic image and a corresponding depth image for a given view angle from a plurality of 2D monoscopic images and corresponding depth images captured via a monoscopic video camera array, in accordance with an embodiment of the invention.

[0036] FIG. 4 is a flow chart illustrating exemplary steps that may be performed by a monoscopic video camera array to create a 2D and/or 3D image/video to match a user's view angle and lighting conditions, in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

[0037] Certain embodiments of the invention may be found in a method and system for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array. In various embodiments of the invention, an array of monoscopic sensing devices such as a monoscopic video camera array may be utilized to concurrently capture a plurality of two-dimensional (2D) monoscopic images and corresponding depth information. The captured plurality of 2D monoscopic images and the captured corresponding depth information may be utilized to determine or model an image mapping function for 2D image data and depth information based on view angles and lighting conditions. The determined image mapping function may be utilized to adjust or modify the plurality of captured 2D monoscopic images and the captured corresponding depth information to a given view angle. The plurality of modified 2D monoscopic images and the modified corresponding depth information may be utilized to compose a corresponding 3D image for the given view angle. Numerical analysis such as regression analysis may be performed to model the plurality of captured 2D monoscopic images and the captured corresponding depth information in terms of view angles and lighting conditions. The image mapping function may be determined through the regression analysis by matching or fitting the plurality of captured 2D monoscopic images and the captured corresponding depth information to known view angles and associated lighting conditions of the monoscopic video camera array. 2D image data and corresponding depth information may be determined or calculated for the given view angle utilizing the determined image mapping function. For 2D video rendering and/or playback, the determined 2D image data may be utilized to compose 2D images/video for the given view angle. For 3D video rendering and/or playback, the determined 2D image data and the determined corresponding depth information may be combined to compose 3D images/video for the given view angle.
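The application does not fix a particular form for the image mapping function. As a minimal sketch of the regression step described above, the following fits a per-pixel polynomial in view angle to the captured images and depth maps and evaluates it at a new angle; the polynomial model, function names, and NumPy usage are illustrative assumptions (lighting conditions are omitted for brevity).

```python
import numpy as np

def fit_view_angle_mapping(images, depths, angles, degree=2):
    """Fit, per pixel, a polynomial in view angle to the captured 2D
    images and depth maps (a stand-in for the image mapping function).

    images, depths: (N, H, W) arrays from the N monoscopic cameras.
    angles: (N,) array of the cameras' known view angles (N > degree).
    """
    n, h, w = images.shape
    # np.polyfit accepts a 2D y, fitting each column independently.
    img_coeffs = np.polyfit(angles, images.reshape(n, -1), degree)
    dep_coeffs = np.polyfit(angles, depths.reshape(n, -1), degree)
    return (img_coeffs.reshape(degree + 1, h, w),
            dep_coeffs.reshape(degree + 1, h, w))

def evaluate_mapping(img_coeffs, dep_coeffs, angle):
    """Evaluate the fitted mapping at a given view angle, yielding
    view-angle dependent 2D image data and depth information."""
    degree = img_coeffs.shape[0] - 1
    # Powers of the angle, highest order first, matching polyfit's output.
    powers = np.array([angle ** (degree - k) for k in range(degree + 1)])
    image = np.tensordot(powers, img_coeffs, axes=1)
    depth = np.tensordot(powers, dep_coeffs, axes=1)
    return image, depth
```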

[0038] FIG. 1 is a diagram illustrating an exemplary video communication system that is operable to create a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown a video communication system 100. The video communication system 100 comprises a monoscopic video camera array 110, a video processor 120, a display 132, a memory 134 and a 3D video rendering device 136.

[0039] The monoscopic video camera array 110 may comprise a plurality of single-viewpoint or monoscopic video cameras 110.sub.1-110.sub.N, where the parameter N is the number of monoscopic video cameras. Each of the monoscopic video cameras 110.sub.1-110.sub.N may be placed at a certain view angle with respect to an encountered scene in front of the monoscopic video camera array 110. Each of the monoscopic video cameras 110.sub.1-110.sub.N may operate independently to collect or capture information for the encountered scene. The monoscopic video cameras 110.sub.1-110.sub.N each may be operable to capture 2D image data and corresponding depth information for the encountered scene. A 2D video comprises a collection of 2D sequential images. 2D image data for the 2D video specifies intensity and/or color information in terms of pixel position in the 2D sequential images. Depth information for the 2D video represents the distance to visible objects in terms of pixel position in the 2D sequential images. The monoscopic video camera array 110 may provide or communicate the captured image/video data and the captured corresponding depth information to the video processor 120 for further processing to support 2D and/or 3D image/video rendering and/or playback.

[0040] A monoscopic video camera such as the monoscopic video camera 110.sub.1 may comprise a depth sensor 111, an emitter 112, a lens 114, optics 116, and one or more image sensors 118. The monoscopic video camera 110.sub.1 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to capture a 2D monoscopic image/video via a single viewpoint corresponding to the lens 114. The monoscopic video camera 110.sub.1 may be operable to collect corresponding depth information for the captured 2D image via the depth sensor 111.

[0041] The depth sensor 111 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to detect electromagnetic (EM) waves in the infrared spectrum. The depth sensor 111 may determine or detect depth information for objects in front of the lens 114 based on corresponding infrared EM waves. For example, the depth sensor 111 may sense or capture depth information for the objects based on time-of-flight of infrared EM waves transmitted by the emitter 112 and reflected from the objects back to the depth sensor 111.
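For instance, a time-of-flight depth value follows directly from the round-trip travel time of the reflected infrared pulse: the pulse travels to the object and back, so the depth is half the round-trip path length. A minimal worked example (the function name is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_time_s: float) -> float:
    # The pulse covers the camera-to-object distance twice.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A round trip of 20 nanoseconds corresponds to about 3 meters.
print(tof_depth(20e-9))  # ~2.998 m
```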

[0042] The emitter 112 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to produce and/or transmit electromagnetic waves in the infrared spectrum, for example.

[0043] The lens 114 is an optical component that may be utilized to capture or sense EM waves. The captured EM waves may be focused through the optics 116 on the image sensor(s) 118 to form 2D images for the scene in front of the lens 114.

[0044] The optics 116 may comprise optical devices for conditioning and directing EM waves received via the lens 114. The optics 116 may direct the received EM waves in the visible spectrum to the image sensor(s) 118 and direct the received EM waves in the infrared spectrum to the depth sensor 111, respectively. The optics 116 may comprise one or more lenses, prisms, luminance and/or color filters, and/or mirrors.

[0045] The image sensor(s) 118 may each comprise suitable logic, circuitry, interfaces, and/or code that may be operable to sense optical signals focused by the lens 114. The image sensor(s) 118 may convert the optical signals to electrical signals so as to capture intensity and/or color information for the scene in front of the lens 114. Each image sensor 118 may comprise, for example, a charge coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor.

[0046] The video processor 120 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to handle and control operations of various device components such as the monoscopic video camera array 110, and manage output to the display 132 and/or the 3D video rendering device 136. The video processor 120 may comprise an image engine 122, a video codec 124, a digital signal processor (DSP) 126 and an input/output (I/O) module 128. The video processor 120 may utilize the image sensors 118 to capture 2D monoscopic image/video data and the depth sensor 111 to collect corresponding depth information for the captured 2D monoscopic image/video data. The video processor 120 may process the captured 2D monoscopic image/video data and the captured corresponding depth information via the image engine 122 and the video codec 124, for example. In this regard, the video processor 120 may be operable to compose a 2D and/or 3D image/video from the processed 2D image data and the processed corresponding depth information for 2D and/or 3D video rendering and/or playback. The composed 2D and/or 3D video may be presented or displayed to a user via the display 132 and/or the 3D video rendering device 136. The video processor 120 may also enable or allow a user to interact with the monoscopic video camera array 110, when needed, to support or control video recording and/or playback.

[0047] The image engine 122 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to provide or output view-angle dependent 2D image data and corresponding view-angle dependent depth information, respectively. In this regard, the image engine 122 may model or map 2D monoscopic image/video data and corresponding depth information, captured by the monoscopic video camera array 110, to an image mapping function in terms of view angles. The image mapping function may convert the captured 2D monoscopic image/video data and the captured corresponding depth information to different sets of 2D image data and corresponding depth information depending on view angles. The image mapping function may be determined, for example, by matching or fitting the captured 2D monoscopic image/video data and the captured corresponding depth information to the known view angles of the monoscopic video cameras 110.sub.1-110.sub.N. The image engine 122 may utilize the determined image mapping function to map or convert the captured monoscopic image/video data and the captured corresponding depth information to view-angle dependent 2D image data and view-angle dependent depth information, respectively.

[0048] The video codec 124 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to perform video compression and/or decompression. The video codec 124 may utilize various video compression and/or decompression algorithms, such as those specified in MPEG-2 and/or other video formats, for video coding.

[0049] The DSP 126 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to perform signal processing of image/video data and depth information supplied from the monoscopic video camera array 110.

[0050] The I/O module 128 may comprise suitable logic, circuitry, interfaces, and/or code that may enable the monoscopic video camera array 110 to interface with other devices in accordance with one or more standards such as USB, PCI-X, IEEE 1394, HDMI, DisplayPort, and/or analog audio and/or analog video standards. For example, the I/O module 128 may be operable to communicate with the image engine 122 and the video codec 124 for a 2D and/or 3D image/video for a given user's view angle, output the resulting 2D and/or 3D image/video, read from and write to cassettes, flash cards, or other external memory attached to the video processor 120, and/or output video externally via one or more ports such as an IEEE 1394 port, an HDMI port, and/or a USB port for transmission and/or rendering.

[0051] The display 132 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to display images/video to a user. The display 132 may comprise a liquid crystal display (LCD), a light emitting diode (LED) display and/or other display technologies on which images/video captured via the monoscopic video camera array 110 may be displayed to the user at a given user's view angle.

[0052] The memory 134 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to store information such as executable instructions and data that may be utilized by the monoscopic video camera array 110. The executable instructions may comprise various video compression and/or decompression algorithms utilized by the video codec 124 for video coding. The data may comprise captured images/video and/or coded video. The memory 134 may comprise RAM, ROM, low latency nonvolatile memory such as flash memory and/or other suitable electronic data storage.

[0053] The 3D video rendering device 136 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to render images/video supplied from the monoscopic video camera array 110. The 3D video rendering device 136 may be coupled to the video processor 120 internally or externally. The 3D video rendering device 136 may be adapted to different user view angles to render 3D image/video output from the video processor 120.

[0054] Although the monoscopic video camera array 110 is illustrated in FIG. 1 to support the creation of a view-angle dependent 2D and/or 3D image/video, the invention is not so limited. In this regard, an array of monoscopic video sensing devices, which comprises one or more image sensors and one or more depth sensors, may be utilized to create a view-angle dependent 2D and/or 3D image/video without departing from the spirit and scope of the various embodiments of the invention. An image sensor may comprise one or more light emitters and/or one or more light receivers.

[0055] In an exemplary operation, the monoscopic video camera array 110 may be operable to concurrently or simultaneously capture a plurality of 2D monoscopic images and corresponding depth information. For example, the monoscopic video camera array 110 may capture a 2D monoscopic image/video via the image sensors 118. Corresponding depth information for the captured 2D monoscopic image/video may be collected or captured via the depth sensor 111. The monoscopic video camera array 110 may provide or communicate the captured 2D monoscopic images/video and corresponding depth information to the video processor 120. The video processor 120 may be operable to perform video processing on the captured 2D monoscopic images/video and the captured corresponding depth information via device components such as the image engine 122. In various embodiments of the invention, the image engine 122 may be operable to model the captured 2D monoscopic images/video and the captured corresponding depth information to an image mapping function in terms of view angles and lighting conditions. The video processor 120 may utilize the image mapping function to map or match the captured 2D video data and the captured corresponding depth data to view angles. The image mapping function may be determined by matching or fitting the captured 2D monoscopic images/video and the captured corresponding depth information to the known view angles and associated lighting conditions of the monoscopic video cameras 110.sub.1-110.sub.N. In this regard, different view angles and associated lighting conditions may correspond to different sets of 2D image/video data and corresponding depth information. In other words, the video processor 120 may interpret the captured 2D monoscopic images/video in different image planes depending on view angles and associated lighting conditions. An image plane may be assumed to be parallel to the XY-plane of an XYZ coordinate system at a distance d, where d>0. The video processor 120 may be operable to compose or generate 2D and/or 3D images/video from the captured 2D monoscopic images/video depending on view angles and associated lighting conditions. For example, for a given view angle, the video processor 120 may utilize the determined image mapping function to map or convert the captured 2D monoscopic images/video and the captured corresponding depth information to a specific set of 2D image data and corresponding depth information for the given view angle. The video processor 120 may generate or compose a 2D and/or 3D image/video for the given view angle utilizing the resulting view-angle dependent 2D image data and corresponding view-angle dependent depth information. The generated 2D and/or 3D image/video for the given view angle may be presented or displayed to the user via the display 132 and/or the 3D video rendering device 136.

[0056] FIG. 2 illustrates mapping of 2D image data to different image planes depending on view angles and lighting conditions, in accordance with an embodiment of the invention. Referring to FIG. 2, there is shown an XYZ coordinate system 200. The XYZ coordinate system 200 comprises an XY-plane 201 and a plurality of image planes 202-204. A point Q(x,y) in the XY-plane 201 may represent or correspond to an image pixel in one of a plurality of 2D images captured by the monoscopic video camera array 110. The image engine 122 may be operable to map the point Q(x,y) in the XY-plane 201 into different image planes depending on view angles. For example, for given view angles θ.sub.1 and θ.sub.2, and associated lighting conditions ξ.sub.1 and ξ.sub.2, the image engine 122 may output or provide different depth values z.sub.1(θ.sub.1, ξ.sub.1) and z.sub.2(θ.sub.2, ξ.sub.2) for the point Q(x,y) in the XY-plane 201. In this regard, the point Q(x,y) in the XY-plane 201 may be mapped or projected to the point P.sub.1(x, y, z.sub.1(θ.sub.1, ξ.sub.1)) in the image plane 202 and to the point P.sub.2(x, y, z.sub.2(θ.sub.2, ξ.sub.2)) in the image plane 203, respectively, for video rendering and/or playback.
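A small sketch of this projection, with a hypothetical depth function standing in for the fitted image mapping function (the names and example coefficients are illustrative assumptions):

```python
from typing import Callable, Tuple

Point3D = Tuple[float, float, float]

def project_pixel(x: float, y: float,
                  depth_fn: Callable[[float, float], float],
                  theta: float, xi: float) -> Point3D:
    """Map pixel Q(x, y) in the XY-plane to P(x, y, z(theta, xi)) in the
    image plane selected by view angle theta and lighting condition xi."""
    return (x, y, depth_fn(theta, xi))

# Hypothetical depth function, for illustration only.
example_depth = lambda theta, xi: 1.0 + 0.5 * theta + 0.1 * xi

p1 = project_pixel(3.0, 4.0, example_depth, theta=0.2, xi=1.0)  # e.g. plane 202
p2 = project_pixel(3.0, 4.0, example_depth, theta=0.6, xi=0.5)  # e.g. plane 203
```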

[0057] FIG. 3 illustrates producing of a 2D monoscopic image and a corresponding depth image for a given view angle from a plurality of 2D monoscopic images and corresponding depth images captured via a monoscopic video camera array, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown a plurality of 2D monoscopic images 310.sub.1-310.sub.N and a corresponding plurality of depth images 320.sub.1-320.sub.N that may be captured via the monoscopic video cameras 110.sub.1-110.sub.N, where the parameter N is the number of monoscopic video cameras. The parameters θ.sub.1 . . . θ.sub.N are the corresponding view angles of the monoscopic video cameras 110.sub.1-110.sub.N for the scene in front of the monoscopic video camera array 110.

[0058] The image engine 122 may model the captured 2D monoscopic images 310.sub.1-310.sub.N and the captured corresponding depth images 320.sub.1-320.sub.N to fit or match the known view angles θ.sub.1 . . . θ.sub.N, respectively. For example, the image engine 122 may perform regression analysis on the captured 2D monoscopic images 310.sub.1-310.sub.N and the captured corresponding depth images 320.sub.1-320.sub.N to fit or match the known view angles θ.sub.1 . . . θ.sub.N so as to determine or establish a relation among intensity and/or color information, depth information and view angles for the scene in front of the monoscopic video camera array 110. In this regard, the image engine 122 may convert or map the captured 2D monoscopic images 310.sub.1-310.sub.N and the captured corresponding depth images 320.sub.1-320.sub.N to the view angles θ.sub.1 . . . θ.sub.N. The image engine 122 may provide or output different 2D monoscopic images and corresponding depth images depending on different view angles. For a given view angle β, the image engine 122 may provide the depth image 322 and the 2D monoscopic image 324, respectively. The depth image 322 and the 2D monoscopic image 324 may be combined to compose a 3D image for the given view angle β.
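As a deliberately simple instance of evaluating the fitted relation at a novel view angle β, the sketch below linearly blends the two captured views whose known angles bracket β; the regression model described above generalizes this, and the function name and NumPy usage are assumptions.

```python
import numpy as np

def view_for_angle(images, depths, angles, beta):
    """Produce a 2D monoscopic image and depth image for view angle beta
    by blending the two captured views whose angles bracket beta.

    images, depths: (N, H, W) arrays; angles: sorted (N,) array.
    """
    angles = np.asarray(angles)
    # Index of the first camera angle >= beta, kept in-range so that
    # angles outside the captured span extrapolate linearly.
    i = int(np.clip(np.searchsorted(angles, beta), 1, len(angles) - 1))
    # Blend weight: 0 at angles[i-1], 1 at angles[i].
    t = (beta - angles[i - 1]) / (angles[i] - angles[i - 1])
    image = (1 - t) * images[i - 1] + t * images[i]
    depth = (1 - t) * depths[i - 1] + t * depths[i]
    return image, depth
```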

[0059] FIG. 4 is a flow chart illustrating exemplary steps that may be performed by a monoscopic video camera array to create a 2D and/or 3D image/video to match a user's view angle, in accordance with an embodiment of the invention. Referring to FIG. 4, the exemplary steps may begin with step 402, in which the monoscopic video camera array 110 is powered on. In step 404, the monoscopic video camera array 110 may be operable to concurrently capture a plurality of 2D monoscopic images and corresponding depth information. In step 406, the image engine 122 may be operable to model the captured 2D monoscopic images and the captured corresponding depth information to an image mapping function in terms of view angles. In this regard, the image engine 122 may perform regression analysis on the captured 2D monoscopic images and the captured corresponding depth information for the modeling. For example, the image engine 122 may match or fit the captured 2D monoscopic images and the corresponding depth images to the view angles of the monoscopic video cameras 110.sub.1-110.sub.N to determine the image mapping function. In step 408, it may be determined whether image/video rendering for a given view angle is needed for the captured 2D monoscopic images. In instances where image/video rendering for the given view angle is needed for the captured 2D monoscopic images, then in step 410, the image engine 122 may determine or calculate 2D monoscopic image/video data and corresponding depth information for the given view angle utilizing the determined image mapping function. In step 412, it may be determined whether 3D image/video rendering for the given view angle is needed. In instances where 3D image/video rendering for the given view angle is not needed, then in step 414, the video processor 120 may be operable to compose 2D monoscopic images/video utilizing the determined 2D monoscopic image/video data. In step 416, the composed 2D monoscopic images/video may be rendered to the user.

[0060] In step 408, in instances where image/video rendering for the given view angle is not needed for the captured 2D monoscopic images/video, then the exemplary steps return to step 404.

[0061] In step 412, in instances where 3D image/video rendering for the given view angle is needed, then in step 418, the video processor 120 may be operable to compose 3D images/video utilizing the determined 2D monoscopic image/video data and the determined corresponding depth information. In step 420, the composed 3D images/video may be rendered to the user.

[0062] Various aspects of a method and system for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array are provided. In various exemplary embodiments of the invention, the video processor 120 may be operable to manage and handle operations of various device components of an array of monoscopic sensing devices such as the monoscopic video camera array 110. The monoscopic video camera array 110 comprises a plurality of monoscopic video cameras 110.sub.1-110.sub.N, where the parameter N is the number of monoscopic video cameras.

[0063] The video processor 120 may utilize the monoscopic video camera array 110 to concurrently collect or capture a plurality of 2D monoscopic images and corresponding depth information for image/video rendering and/or playback. The image engine 122 may be operable to determine an image mapping function for 2D image data and depth information in terms of view angles based on the captured 2D monoscopic images and the captured corresponding depth information. The video processor 120 may be operable to utilize the determined image mapping function to adapt or modify the captured 2D monoscopic images and the captured corresponding depth information to a given view angle. The modified 2D monoscopic images and the modified corresponding depth information may be utilized to compose a corresponding 3D image for the given view angle. The image engine 122 may be operable to perform numerical analysis such as regression analysis to model the captured 2D monoscopic images and the captured corresponding depth information in terms of view angles.

[0064] The image mapping function may be determined through the regression analysis by matching or fitting the captured 2D monoscopic images and the captured corresponding depth information to known view angles of the monoscopic video camera array. The image engine 122 may be operable to determine or calculate 2D image data and corresponding depth information for the given view angle based on the determined image mapping function. For 2D video rendering and/or playback, the video processor 120 may be operable to utilize the determined 2D image data to compose a 2D image for the given view angle. For 3D image/video rendering and/or playback, the video processor 120 may combine the determined 2D image data and the determined corresponding depth information to compose a 3D image for the given view angle.
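For the 3D case, one well-known way to combine 2D image data with a depth map is depth-image-based rendering, which shifts pixels horizontally in proportion to nearness to synthesize a stereo pair. The simplified sketch below (an assumption for illustration, not the application's prescribed method) omits the occlusion handling and hole filling a production renderer would need.

```python
import numpy as np

def stereo_from_image_and_depth(image, depth, max_disparity=16):
    """Synthesize left/right views from one 2D image and its depth map
    by disparity shifting (simplified depth-image-based rendering).

    image: (H, W) or (H, W, C) array; depth: (H, W), larger = farther.
    """
    h, w = depth.shape
    # Normalize nearness to [0, 1]: nearer pixels shift more.
    near = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-9)
    disparity = (near * max_disparity).astype(int)
    cols = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for r in range(h):
        # Shift each row's pixels in opposite directions for the two eyes.
        left[r, np.clip(cols + disparity[r], 0, w - 1)] = image[r]
        right[r, np.clip(cols - disparity[r], 0, w - 1)] = image[r]
    return left, right
```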

[0065] Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for creating a view-angle dependent 2D and/or 3D image/video utilizing a monoscopic video camera array.

[0066] Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.

[0067] The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

[0068] While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

* * * * *

