Method And System For Scaling 3d Video

Neuman; Darren ;   et al.

Patent Application Summary

U.S. patent application number 12/963014 was filed with the patent office on 2010-12-08 and published on 2011-06-09 for method and system for scaling 3d video. Invention is credited to Jason Herrick, Darren Neuman, Christopher Payson, Qinghua Zhao.

Application Number: 20110134217 12/963014
Document ID: /
Family ID: 44081627
Filed Date: 2010-12-08

United States Patent Application 20110134217
Kind Code A1
Neuman; Darren ;   et al. June 9, 2011

METHOD AND SYSTEM FOR SCALING 3D VIDEO

Abstract

A method and system are provided in which an integrated circuit (IC) comprises multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be an L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.


Inventors: Neuman; Darren; (Palo Alto, CA) ; Herrick; Jason; (Pleasanton, CA) ; Zhao; Qinghua; (Cupertino, CA) ; Payson; Christopher; (Bolton, MA)
Family ID: 44081627
Appl. No.: 12/963014
Filed: December 8, 2010

Related U.S. Patent Documents

Application Number Filing Date Patent Number
61267729 Dec 8, 2009
61296851 Jan 20, 2010
61330456 May 3, 2010

Current U.S. Class: 348/43 ; 348/E13.064
Current CPC Class: H04N 2213/007 20130101; H04N 13/139 20180501
Class at Publication: 348/43 ; 348/E13.064
International Class: H04N 13/00 20060101 H04N013/00

Claims



1. A method, comprising: in an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing: determining whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and selectively interconnecting one or more of the plurality of devices based on the determination.

2. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.

3. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.

4. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.

5. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.

6. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.

7. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.

8. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined on a picture-by-picture basis.

9. The method of claim 1, comprising scaling the 3D video data in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the scaling comprising a horizontal scaling and a vertical scaling.

10. The method of claim 1, comprising performing one or more operations in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the one or more operations being performed before the 3D video data is scaled, after the 3D video data is scaled, or both.

11. A system, comprising: an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing; the integrated circuit being operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and the integrated circuit being operable to selectively interconnect one or more of the plurality of devices based on the determination.

12. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.

13. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.

14. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.

15. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.

16. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.

17. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.

18. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices on a picture-by-picture basis.

19. The system of claim 11, wherein the selectively interconnected one or more of the plurality of devices are operable to horizontally scale the 3D video data and to vertically scale the 3D video data.

20. The system of claim 19, wherein the selectively interconnected one or more of the plurality of devices are operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
Description



CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE

[0001] This application makes reference to, claims priority to, and claims the benefit of:

U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009; U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010.

[0002] This application also makes reference to:

U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010; U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010; U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23439US02) filed on Dec. 8, 2010; and U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010.

[0003] Each of the above referenced applications is hereby incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[0004] Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.

BACKGROUND OF THE INVENTION

[0005] The availability and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.

[0006] Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.

BRIEF SUMMARY OF THE INVENTION

[0007] A system and/or method is provided for scaling 3D video, substantially as set forth more completely in the claims.

[0008] Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0009] FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention.

[0010] FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention.

[0011] FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention.

[0012] FIGS. 4A and 4B illustrate format-related variables for left-and-right (L/R) format and over-and-under (O/U) format, respectively, in accordance with embodiments of the invention.

[0013] FIGS. 5A and 5B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention.

[0014] FIGS. 6A and 6B illustrate configurations of the processing network when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention.

[0015] FIGS. 7A and 7B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention.

[0016] FIGS. 8A and 8B illustrate configurations of the processing network when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention.

[0017] FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080p O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention.

[0018] FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in a processing network configured for scaling 3D video data, in accordance with embodiments of the invention.

[0019] FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in a configured processing network, in accordance with an embodiment of the invention.

[0020] FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in a configured processing network, in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

[0021] Certain embodiments of the invention may be found in a method and system for scaling 3D video. Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be an L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.

[0022] FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to handle 3D video data scaling, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown an SoC 100, a host processor module 120, and a memory module 130. The SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content. Examples of signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite video, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, YCrCb signals, and red, green, blue (RGB) signals. Such signals may be received by the SoC 100 from one or more video sources communicatively coupled to the SoC 100.

[0023] The SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the SoC 100 may be provided to display devices such as cathode ray tube (CRT) displays, liquid crystal displays (LCDs), thin film transistor LCDs (TFT-LCDs), plasma display panels (PDPs), light emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technologies. The characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided.

[0024] The host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing.

[0025] The SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100.

[0026] The video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100. The video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data. The video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data.

[0027] The core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data. In this regard, the core processor 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104.

[0028] In operation, the SoC 100 may receive one or more signals comprising 3D video data through the interface module 102. When the 3D video data received in those signals is to be scaled, the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104. The memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106, for example.
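By way of illustration, the capture-side versus display-side decision described above can be sketched as follows. The specific rule shown (scale down before capture so that fewer pixels are written to DRAM, and scale up only after retrieval) is a common bandwidth-saving heuristic assumed here for the sketch; the application itself bases the determination on the input format, the output format, and the scaling factor, without prescribing this exact rule.

```python
# Illustrative sketch: choosing where to scale relative to memory capture.
# The heuristic (downscale before capture, upscale after retrieval, so the
# picture written to DRAM is always the smaller of the two sizes) is an
# assumption for illustration, not the application's specified algorithm.

def choose_scaling_point(in_width: int, in_height: int,
                         out_width: int, out_height: int) -> str:
    """Return 'before_capture' or 'after_retrieval' for one picture."""
    in_pixels = in_width * in_height
    out_pixels = out_width * out_height
    # Scaling down: do it before capture so fewer pixels reach DRAM.
    if out_pixels < in_pixels:
        return 'before_capture'
    # Scaling up (or no scaling): capture the smaller source picture first.
    return 'after_retrieval'

# Example: 1080p input scaled down to 720p output, and the reverse.
print(choose_scaling_point(1920, 1080, 1280, 720))   # before_capture
print(choose_scaling_point(1280, 720, 1920, 1080))   # after_retrieval
```

Because the determination may be made on a picture-by-picture basis, a function such as this would be evaluated once per picture in the video sequence.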

[0029] At least a portion of the video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100.

[0030] As indicated above, the SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats. The complexity of the SoC 100, however, may increase significantly as the number of supported input and output formats grows. An approach that may simplify the SoC 100 while still enabling support for a large number of formats is to convert an input format into one of a small subset of formats supported by the SoC 100 for processing, and to have the SoC 100 process the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.

[0031] FIGS. 2A-2E illustrate various input and output packing schemes for 3D video data, in accordance with embodiments of the invention. Referring to FIG. 2A, there is shown a first packing scheme or first format 200 for 3D video data and a second packing scheme or second format 210 for 3D video data. Each of the first format 200 and the second format 210 illustrates the arrangement of the left eye content (L) and the right eye content (R) in a 3D video picture. In this regard, a 3D video picture may refer to a 3D video frame or a 3D video field in a video sequence, whichever is appropriate. The L and R in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format. The L and R in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format. Another arrangement, one not shown in FIG. 2A, may be one in which the L is in a first 3D video picture and the R is in a second 3D video picture. Such arrangement may be referred to as a sequential format because the 3D video pictures are processed sequentially.

[0032] Both the first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100. When 3D video data is received in one of the multiple input formats supported by the SoC 100, the SoC 100 may convert that input format to one of the first format 200 and the second format 210, if such conversion is necessary. The SoC 100 may then process the 3D video data in a native format. Once the 3D video data is processed, the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100, if such conversion is necessary. The SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210.

[0033] Referring to FIG. 2B, there is shown a conversion mapping of certain input formats 202a, 204a, and 206a supported by the SoC 100 to the first format 200. For example, an L/R input format 202a may be converted to the first format 200, which is also an L/R format. In another example, a line interleaved input format 204a may be converted to the first format 200. In yet another example, a checkerboard input format 206a may be converted to the first format 200. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the first format 200.

[0034] Referring to FIG. 2C, there is shown a conversion mapping of the first format 200 to certain output formats 202b, 204b, and 206b supported by the SoC 100. For example, the first format 200 may be converted to an L/R output format 202b. In another example, the first format 200 may be converted to a line interleaved output format 204b. In yet another example, the first format 200 may be converted to a checkerboard output format 206b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the first format 200 is to be converted.

[0035] Referring to FIG. 2D, there is shown a conversion mapping of certain input formats 212a, 214a, and 216a supported by the SoC 100 to the second format 210. For example, an O/U input format 212a may be converted to the second format 210, which is also an O/U format. In another example, an O/U x2 input format 214a may be converted to the second format 210. In yet another example, a multi-decode input format 216a may be converted to the second format 210. In each of these scenarios, the SoC 100 may detect the type of input format associated with the 3D video data and may determine that the appropriate conversion of the detected input format is to the second format 210.

[0036] Referring to FIG. 2E, there is shown a conversion mapping of the second format 210 to certain output formats 212b and 214b supported by the SoC 100. For example, the second format 210 may be converted to an O/U output format 212b. In another example, the second format 210 may be converted to an O/U x2 output format 214b. In each of these scenarios, the SoC 100 may determine the appropriate type of output format to which the second format 210 is to be converted.

[0037] The conversion operations supported by the SoC 100 may also comprise converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200. In this manner, 3D video data may be received in any one of multiple input formats, such as the input formats 202a, 204a, 206a, 212a, 214a, and 216a (FIGS. 2B and 2D). Accordingly, resulting processed 3D video data may be generated in any one of multiple output formats, such as the output formats 202b, 204b, 206b, 212b, and 214b (FIGS. 2C and 2E).

[0038] The various input formats and output formats described above with respect to FIGS. 2A-2E are provided by way of illustration and not of limitation. The SoC 100 may support additional input formats that may be converted to a native format such as the first format 200, the second format 210, and the sequential format. Similarly, the SoC may support additional output formats to which a native format may be converted.
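The conversion mappings of FIGS. 2B-2E can be summarized as a lookup from detected input format to native processing format. The format names below are taken from the text; the table-driven organization itself is an illustrative assumption about how such a mapping might be implemented, not a detail of the application.

```python
# Illustrative sketch of the input-format -> native-format mapping described
# for FIGS. 2B and 2D. The dict-based lookup is an assumption; the format
# names come from the text.

NATIVE_LR = 'L/R'   # first format 200 (side-by-side)
NATIVE_OU = 'O/U'   # second format 210 (top-and-bottom)

INPUT_TO_NATIVE = {
    'L/R':              NATIVE_LR,   # input formats mapped per FIG. 2B
    'line interleaved': NATIVE_LR,
    'checkerboard':     NATIVE_LR,
    'O/U':              NATIVE_OU,   # input formats mapped per FIG. 2D
    'O/U x2':           NATIVE_OU,
    'multi-decode':     NATIVE_OU,
}

def native_format_for(input_format: str) -> str:
    """Pick the native format in which the SoC processes the 3D video data."""
    return INPUT_TO_NATIVE[input_format]

print(native_format_for('checkerboard'))   # L/R
print(native_format_for('multi-decode'))   # O/U
```

Additional input formats, as the text notes, would simply extend this table with further entries mapped to one of the native formats.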

[0039] FIGS. 3A-3C are block diagrams that illustrate a processing network that is operable to scale 3D video data, in accordance with embodiments of the invention. Referring to FIG. 3A, there is shown a processing network 300 that may be part of the video processor module 104 in the SoC 100, for example. The processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data, including 3D video data. In this regard, the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data. The various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120. In this regard, the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data.

[0040] In the embodiment of the invention described in FIG. 3A, the processing network 300 may comprise an MPEG feeder (MFD) module 302, multiple video feeder (VFD) modules 304, an HDMI module 306, crossbar modules 310a and 310b, multiple scaler (SCL) modules 308, a motion-adaptive deinterlacer (MAD) module 312, a digital noise reduction (DNR) module 314, multiple capture (CAP) modules 320, and two compositor (CMP) modules 322. Each of the above-listed components may be operable to handle video data, including 3D video data. The references to a memory (not shown) in FIG. 3A may be associated with a DRAM utilized by the processing network 300 to handle storage of video data during various operations. Such DRAM may be part of the memory module 130 described above with respect to FIG. 1. In some instances, the DRAM may be part of memory embedded in the SoC 100. The references to a video encoder (not shown) in FIG. 3A may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example.

[0041] Each of the crossbar modules 310a and 310b may comprise multiple input ports and multiple output ports. The crossbar modules 310a and 310b may be configured such that any one of the input ports may be connected to one or more of the output ports. The crossbar modules 310a and 310b may enable pass-through connections 316 between one or more output ports of the crossbar module 310a and corresponding input ports of the crossbar module 310b. Moreover, the crossbar modules 310a and 310b may enable feedback connections 318 between one or more output ports of the crossbar module 310b and corresponding input ports of the crossbar module 310a. The configuration of the crossbar modules 310a and/or 310b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
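The crossbar behavior described in paragraph [0041] (any input port routable to one or more output ports) can be sketched minimally as follows. The port names and the dict-based routing table are illustrative assumptions; the application does not specify an implementation.

```python
# Minimal sketch of a crossbar in which any input port may be connected to
# one or more output ports, as described for crossbar modules 310a/310b.
# Port names and the routing-table representation are assumptions.

class Crossbar:
    def __init__(self, inputs, outputs):
        self.inputs = set(inputs)
        self.outputs = set(outputs)
        self.routes = {}            # output port -> input port driving it

    def connect(self, in_port, out_port):
        """Route one input to an output; one input may drive many outputs."""
        assert in_port in self.inputs and out_port in self.outputs
        self.routes[out_port] = in_port

    def source_of(self, out_port):
        """Return the input currently driving an output, or None."""
        return self.routes.get(out_port)

# Example: one input fanned out to two outputs, e.g. an MFD feed driving
# two scaler inputs; reconfiguring per picture would just call connect again.
xbar = Crossbar(inputs=['MFD', 'VFD0', 'HDMI'],
                outputs=['SCL0', 'SCL1', 'MAD'])
xbar.connect('MFD', 'SCL0')
xbar.connect('MFD', 'SCL1')
print(xbar.source_of('SCL0'), xbar.source_of('SCL1'))   # MFD MFD
```

Pass-through connections 316 and feedback connections 318 would correspond to fixed routes between the two crossbar instances, which this single-crossbar sketch omits.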

[0042] The MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG encoder (not shown). Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310a at another data rate.

[0043] Each SCL module 308 may be operable to scale video data received from the crossbar module 310a and provide the scaled video data to the crossbar module 310b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308.

[0044] Each CAP module 320 may be operable to capture video data from the crossbar module 310b and store the captured video data in memory. Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310b with graphics data. For example, FIG. 3A shows one CMP module 322 being provided with a graphics feed Gfxa that is blended by the CMP module 322 with video data received from the crossbar module 310b before the combination is communicated to a video encoder. Similarly, another CMP module 322 is provided with a graphics feed Gfxb that is blended by the CMP module 322 with video data received from the crossbar module 310b before the combination is communicated to a video encoder.

[0045] Referring to FIG. 3B, there is shown the SCL module 308 in a first configuration that may be utilized when the 3D video data scaling comprises scaling down horizontally. In this configuration, the SCL module 308 may comprise a horizontal scaler (HSCL) module 330, which may be configured to operate first and to handle the horizontal scaling (sx) of the video data, and a vertical scaler (VSCL) module 332, which may be configured to operate after the horizontal scaling and to handle the vertical scaling (sy) of the video data. The overall scaling of the SCL module 308 in this configuration may be given by the product sx·sy. The input pixel rate of the SCL module 308 at node "in" is SCL.sub.in, the output pixel rate of the HSCL module 330 at node "H" is SCL.sub.H, and the output pixel rate of the VSCL module 332 at node "V" is SCL.sub.V, which is the same as the output pixel rate of the SCL module 308 at node "out", SCL.sub.out.

[0046] Referring to FIG. 3C, there is shown the SCL module 308 in a second configuration that may be utilized when the 3D video data scaling comprises scaling up horizontally. In this configuration, the VSCL module 332 may be configured to operate first and the HSCL module 330 may be configured to operate after the VSCL module 332. The overall scaling of the SCL module 308 in this configuration may be given by the product sy·sx. The input pixel rate of the SCL module 308 at node "in" is SCL.sub.in, the output pixel rate of the VSCL module 332 at node "V" is SCL.sub.V, and the output pixel rate of the HSCL module 330 at node "H" is SCL.sub.H, which is the same as the output pixel rate of the SCL module 308 at node "out", SCL.sub.out.
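The HSCL/VSCL ordering rule of FIGS. 3B and 3C can be sketched as follows: when scaling down horizontally, running the horizontal scaler first keeps the intermediate pixel rate between the two stages low; when scaling up horizontally, the vertical scaler runs first. The pixel-rate bookkeeping below is an illustrative assumption consistent with the node descriptions, not a specified formula.

```python
# Sketch of the scaler-stage ordering rule from FIGS. 3B and 3C. The rule
# (horizontal downscale -> HSCL first; horizontal upscale -> VSCL first)
# comes from the text; the intermediate pixel-rate factor is an assumption
# intended to show why the ordering keeps the mid-pipeline rate low.

def scaler_order(sx: float, sy: float):
    """Return the stage order and the pixel-rate factor between stages,
    relative to SCL_in, for horizontal factor sx and vertical factor sy."""
    if sx < 1.0:
        # Horizontal downscale: HSCL first, so SCL_H = sx * SCL_in.
        return ('HSCL', 'VSCL'), sx
    # Horizontal upscale (or unity): VSCL first, so SCL_V = sy * SCL_in.
    return ('VSCL', 'HSCL'), sy

order, mid_factor = scaler_order(sx=0.5, sy=0.75)
print(order)        # ('HSCL', 'VSCL')
print(mid_factor)   # 0.5
```

In either configuration the overall scaling is the same product of sx and sy; only the rate at the internal node between the two stages differs.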

[0047] By configuring the processing network 300 and/or one or more of the SCL modules 308, the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100, such as those described above with respect to FIGS. 2B and 2D, for example. Similarly, the scaled and/or processed 3D video data generated by the configured processing network 300 and/or one or more SCL modules 308 may be converted, if necessary, to any one of the multiple output formats supported by the SoC 100, such as those described above with respect to FIGS. 2C and 2E, for example.

[0048] FIGS. 4A and 4B illustrate format-related variables for L/R format and O/U format, respectively, in accordance with embodiments of the invention. Referring to FIG. 4A, there is shown a 3D video data picture 400 that illustrates some of the variables associated with a side-by-side or left-and-right arrangement. FIG. 4B shows a 3D video data picture 410 that illustrates the same variables when associated with a top-and-bottom or over-and-under arrangement. For example, when the picture 400 or the picture 410 is associated with an input format, such as before the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=ixtot is the total width of the picture, ytot=iytot is the total height of the picture, xact=ixact is the active width of the picture, yact=iyact is the active height of the picture, x=ix is the width of the area of the picture that is to be cropped and displayed, and y=iy is the height of the area of the picture that is to be cropped and displayed.

[0049] When the picture 400 or the picture 410 is associated with an output format, such as after the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=oxtot is the total width of the picture, ytot=oytot is the total height of the picture, xact=oxact is the active width of the picture, yact=oyact is the active height of the picture, x=ox is the width of the area on the display in which the input content is to be displayed, and y=oy is the height of the area on the display in which the input content is to be displayed.

[0050] Based on the variables described in FIGS. 4A and 4B, a 3D video picture may be scaled up horizontally when ox>ix, may be scaled down horizontally when ox<ix, may be scaled up vertically when oy>iy, and may be scaled down vertically when oy<iy.
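The comparison in paragraph [0050] reduces to two independent tests, one per axis. A minimal sketch (hypothetical helper, not from the patent):

```python
def scaling_direction(ix: int, iy: int, ox: int, oy: int) -> dict[str, str]:
    """Classify the scaling per axis, following paragraph [0050]."""
    def direction(i: int, o: int) -> str:
        # Scale up when the output dimension exceeds the input dimension,
        # down when it is smaller, and no scaling when they are equal.
        return "up" if o > i else ("down" if o < i else "none")
    return {"horizontal": direction(ix, ox), "vertical": direction(iy, oy)}
```

For instance, a per-eye conversion from 1920x540 to 640x720 is a horizontal scale-down combined with a vertical scale-up.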

[0051] When 3D video data received by the SoC 100 is scaled utilizing the processing network 300, the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300. Below are provided various scenarios that describe the selection of the order or positioning of the scaling operation in a sequence of operations that may be performed on 3D video data by the processing network 300.

[0052] FIGS. 5A and 5B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 5A, there is shown a first configuration 500 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The first configuration 500 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the first configuration 500, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in a memory 502. The memory 502 may be a DRAM memory, for example. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310a and 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.

[0053] In the first configuration 500, the pixel rate at node "A", p_rate.sub.A, is the same as the input pixel rate of the SCL module 308, SCL.sub.in. The output pixel rate of the SCL module 308 is SCL.sub.out=SCL.sub.in·sx·sy=p_rate.sub.A·sx·sy. Moreover, the pixel rate at node "C", p_rate.sub.C, is associated with the output characteristics of the 3D video data.

[0054] With respect to the CAP module 320 in the first configuration 500, the real time scheduling, cap_rts.sub.1, is based on the number of requests for a line of data, n_req, and the time available for all requests, t_n_req. With L and R captured separately, these variables may be determined as follows:

n_req = ox / N.sub.C, (1)

t_n_req = ox / SCL.sub.out = ox / (p_rate.sub.A · sx · sy) = ix / (p_rate.sub.A · sy), (2)

cap_rts.sub.1 = t_n_req / n_req = (ix / (p_rate.sub.A · sy)) / (ox / N.sub.C), (3)

where ox is the width of the area on the display in which the input content is to be displayed, as indicated above with respect to FIGS. 4A and 4B, and N.sub.C is the burst size of the CAP module 320 in number of pixels.

[0055] With respect to the VFD module 304 in the first configuration 500, the real time scheduling, vfd_rts.sub.1, is based on the number of requests for a line of data, n_req, and the time available for all requests, t_n_req. With L and R captured separately, these variables may be determined as follows:

n_req = ox / N.sub.V, (4)

t_n_req = ox / p_rate.sub.C, (5)

vfd_rts.sub.1 = t_n_req / n_req = (ox / p_rate.sub.C) / (ox / N.sub.V), (6)

where N.sub.V is the burst size of the VFD module 304 in number of pixels.

[0056] Referring to FIG. 5B, there is shown a second configuration 510 of the processing network 300 that may be utilized when an L/R input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The second configuration 510 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. For example, in the second configuration 510, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310a and 310b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.

[0057] In the second configuration 510, the pixel rate at node "C", p_rate.sub.C, may be the same as the output pixel rate of the SCL module 308, SCL.sub.out. The input pixel rate of the SCL module 308 may be SCL.sub.in=SCL.sub.out/(sx·sy)=p_rate.sub.C/(sx·sy). Moreover, the pixel rate at node "A", p_rate.sub.A, may be associated with the input characteristics of the 3D video data.

[0058] With respect to the CAP module 320 in the second configuration 510, the real time scheduling, cap_rts.sub.2, is based on the number of requests for a line of data, n_req, and the time available for all requests, t_n_req. With L and R captured separately, these variables may be determined as follows:

n_req = ix / N.sub.C, (7)

t_n_req = ix / p_rate.sub.A, (8)

cap_rts.sub.2 = t_n_req / n_req = (ix / p_rate.sub.A) / (ix / N.sub.C), (9)

where ix is the width of the area of the picture that is to be cropped and displayed as indicated above with respect to FIGS. 4A and 4B.

[0059] With respect to the VFD module 304 in the second configuration 510, the real time scheduling, vfd_rts.sub.2, is based on the number of requests for a line of data, n_req, and the time available for all requests, t_n_req. With L and R captured separately, these variables may be determined as follows:

n_req = ix / N.sub.V, (10)

t_n_req = ix / SCL.sub.in = ix / (p_rate.sub.C / (sx · sy)) = (ox · sy) / p_rate.sub.C, (11)

vfd_rts.sub.2 = t_n_req / n_req = ((ox · sy) / p_rate.sub.C) / (ix / N.sub.V). (12)

[0060] A decision or selection as to whether to perform the scaling operation before capture, as in the first configuration 500, or after the captured data is retrieved from memory, as in the second configuration 510, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N.sub.C=N.sub.V=N), the bandwidth calculations may be determined as follows:

BW1 = cap_rts.sub.1 + vfd_rts.sub.1 = (ix / (p_rate.sub.A · sy) + ox / p_rate.sub.C) / (ox / N), (13)

BW2 = cap_rts.sub.2 + vfd_rts.sub.2 = (ix / p_rate.sub.A + (ox · sy) / p_rate.sub.C) / (ix / N), (14)

λ = BW2 / BW1 = ((ox / N) / (ix / N)) · sy, (15)

where BW1 is the bandwidth associated with the first configuration 500, BW2 is the bandwidth associated with the second configuration 510, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the first configuration 500, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the second configuration 510.
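Under the equal-burst-size assumption of paragraph [0060], equations (13)-(15) and the decision rule can be restated as a small decision function. This is an illustrative sketch; the function and argument names are hypothetical.

```python
def choose_scaler_position(ix, ox, p_rate_a, p_rate_c, sy, n):
    """Pick configuration 500 or 510 per equations (13)-(15), with N_C = N_V = n."""
    bw1 = (ix / (p_rate_a * sy) + ox / p_rate_c) / (ox / n)  # eq (13)
    bw2 = (ix / p_rate_a + ox * sy / p_rate_c) / (ix / n)    # eq (14)
    lam = bw2 / bw1                                          # eq (15)
    return ("scale_before_capture" if lam < 1.0 else "scale_after_feed"), lam
```

Note that the ratio algebraically simplifies to (ox/ix)·sy, so with equal burst sizes the choice depends only on the horizontal width ratio and the vertical scale factor, not on the pixel rates.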

[0061] FIGS. 6A and 6B illustrate configurations of the processing network 300 when scaling 3D video data from an L/R input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 6A, there is shown a third configuration 600 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The third configuration 600 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the third configuration 600, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A. That is, the 3D video data may be provided to one of the SCL modules 308 from the MFD module 302 or from the HDMI module 306 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CAP modules 320 by the appropriate configuration of the crossbar module 310b. The scaled 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the scaled and captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the CMP modules 322 through the pass-through connections 316 between the crossbar modules 310a and 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.

[0062] With respect to the CAP module 320 in the third configuration 600, the real time scheduling, cap_rts.sub.3, may be determined as follows:

cap_rts.sub.3 = (ix / (p_rate.sub.A · sy)) / (ox / N.sub.C). (16)

[0063] With respect to the VFD module 304 in the third configuration 600, the real time scheduling, vfd_rts.sub.3, may be determined as follows:

vfd_rts.sub.3 = (ox / p_rate.sub.D) / (ox / N.sub.V), (17)

where the pixel rate at node "D", p_rate.sub.D, may be associated with the output characteristics of the 3D video data.

[0064] Referring to FIG. 6B, there is shown a fourth configuration 610 of the processing network 300 that may be utilized when an L/R input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The fourth configuration 610 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In the fourth configuration 610, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B. That is, the 3D video data may be provided to one of the CAP modules 320 from the MFD module 302 or from the HDMI module 306 through the pass-through connections 316 between the crossbar modules 310a and 310b. The 3D video data may be captured by the CAP module 320 and may be stored in the memory 502. One of the VFD modules 304 may retrieve the captured 3D video data from the memory 502 and may provide the retrieved 3D video data to one of the SCL modules 308 by the appropriate configuration of the crossbar module 310a. The output of the SCL module 308 may be provided to one of the CMP modules 322 by the appropriate configuration of the crossbar module 310b. The CMP module 322 may subsequently communicate the 3D video data to a video encoder.

[0065] With respect to the CAP module 320 in the fourth configuration 610, the real time scheduling, cap_rts.sub.4, may be determined as follows:

cap_rts.sub.4 = (ix / p_rate.sub.A) / (ix / N.sub.C). (18)

[0066] With respect to the VFD module 304 in the fourth configuration 610, the real time scheduling, vfd_rts.sub.4, may be determined as follows:

vfd_rts.sub.4 = ((ox · sy) / p_rate.sub.D) / (ix / N.sub.V), (19)

where the pixel rate at node "D", p_rate.sub.D, may be the same as the output pixel rate of the SCL module 308, SCL.sub.out.

[0067] A decision or selection as to whether to perform the scaling operation before capture, as in the third configuration 600, or after the captured data is retrieved from memory, as in the fourth configuration 610, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N.sub.C=N.sub.V=N), the following ratio may be determined:

λ = BW2 / BW1 = ((ox / N) / (ix / N)) · sy, (22)

where BW1 is the bandwidth associated with the third configuration 600, BW2 is the bandwidth associated with the fourth configuration 610, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the third configuration 600, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the fourth configuration 610.

[0068] FIGS. 7A and 7B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an L/R output format, in accordance with embodiments of the invention. Referring to FIG. 7A, there is shown a fifth configuration 700 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The fifth configuration 700 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A.

[0069] With respect to the CAP module 320 in the fifth configuration 700, the real time scheduling, cap_rts.sub.5, may be determined as follows:

cap_rts.sub.5 = (ix / (p_rate.sub.B · sy)) / (ox / N.sub.C), (23)

where the pixel rate at node "B", p_rate.sub.B, may be the same as the input pixel rate of the SCL module 308, SCL.sub.in.

[0070] With respect to the VFD module 304 in the fifth configuration 700, the real time scheduling, vfd_rts.sub.5, may be determined as follows:

vfd_rts.sub.5 = (ox / p_rate.sub.C) / (ox / N.sub.V). (24)

[0071] Referring to FIG. 7B, there is shown a sixth configuration 710 of the processing network 300 that may be utilized when an O/U input format and an L/R output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The sixth configuration 710 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B.

[0072] With respect to the CAP module 320 in the sixth configuration 710, the real time scheduling, cap_rts.sub.6, may be determined as follows:

cap_rts.sub.6 = (ix / p_rate.sub.B) / (ix / N.sub.C), (25)

where the pixel rate at node "B", p_rate.sub.B, may be associated with the input characteristics of the 3D video data.

[0073] With respect to the VFD module 304 in the sixth configuration 710, the real time scheduling, vfd_rts.sub.6, may be determined as follows:

vfd_rts.sub.6 = ((ox · sy) / p_rate.sub.C) / (ix / N.sub.V). (26)

[0074] A decision or selection as to whether to perform the scaling operation before capture, as in the fifth configuration 700, or after the captured data is retrieved from memory, as in the sixth configuration 710, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N.sub.C=N.sub.V=N), the following ratio may be determined:

λ = BW2 / BW1 = ((ox / N) / (ix / N)) · sy, (27)

where BW1 is the bandwidth associated with the fifth configuration 700, BW2 is the bandwidth associated with the sixth configuration 710, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the fifth configuration 700, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the sixth configuration 710.

[0075] FIGS. 8A and 8B illustrate configurations of the processing network 300 when scaling 3D video data from an O/U input format to an O/U output format, in accordance with embodiments of the invention. Referring to FIG. 8A, there is shown a seventh configuration 800 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur before the capture operation. The seventh configuration 800 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the first configuration 500 in FIG. 5A.

[0076] With respect to the CAP module 320 in the seventh configuration 800, the real time scheduling, cap_rts.sub.7, may be determined as follows:

cap_rts.sub.7 = (ix / (p_rate.sub.B · sy)) / (ox / N.sub.C). (28)

[0077] With respect to the VFD module 304 in the seventh configuration 800, the real time scheduling, vfd_rts.sub.7, may be determined as follows:

vfd_rts.sub.7 = (ox / p_rate.sub.D) / (ox / N.sub.V). (29)

[0078] Referring to FIG. 8B, there is shown an eighth configuration 810 of the processing network 300 that may be utilized when an O/U input format and an O/U output format are being considered and the 3D video data scaling operation is determined to occur after the captured 3D video data is retrieved from memory. The eighth configuration 810 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, the arrangement of the processing network 300 may be similar to that of the second configuration 510 in FIG. 5B.

[0079] With respect to the CAP module 320 in the eighth configuration 810, the real time scheduling, cap_rts.sub.8, may be determined as follows:

cap_rts.sub.8 = (ix / p_rate.sub.B) / (ix / N.sub.C). (30)

[0080] With respect to the VFD module 304 in the eighth configuration 810, the real time scheduling, vfd_rts.sub.8, may be determined as follows:

vfd_rts.sub.8 = ((ox · sy) / p_rate.sub.D) / (ix / N.sub.V). (31)

[0081] A decision or selection as to whether to perform the scaling operation before capture, as in the seventh configuration 800, or after the captured data is retrieved from memory, as in the eighth configuration 810, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., N.sub.C=N.sub.V=N), the following ratio may be determined:

λ = BW2 / BW1 = ((ox / N) / (ix / N)) · sy, (32)

where BW1 is the bandwidth associated with the seventh configuration 800, BW2 is the bandwidth associated with the eighth configuration 810, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the seventh configuration 800, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the eighth configuration 810.
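It is worth noting that equations (15), (22), (27), and (32) are identical: for every pairing of L/R and O/U input and output formats, the ratio reduces to λ = (ox/ix)·sy when the burst sizes are equal, since the N terms cancel. A one-line check (illustrative only; the function name is hypothetical):

```python
def lam(ix: int, ox: int, sy: float, n: int = 16) -> float:
    """Common ratio of eqs (15), (22), (27), and (32); n cancels out."""
    return (ox / n) / (ix / n) * sy
```

Because n cancels, lam(1920, 640, 4/3, n=8) and lam(1920, 640, 4/3, n=64) agree, so the placement decision is independent of the burst size.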

[0082] FIG. 9 is a diagram that illustrates an example of scaling on the capture side when the 3D video has a 1080 progressive (1080p) O/U input format and a 720p L/R output format, in accordance with an embodiment of the invention. Referring to FIG. 9, the example shown corresponds to the fifth configuration 700 described above with respect to FIG. 7A. In this example, an input picture 900, which is formatted as 1080p O/U 3D video data, is provided to the processing network 300 for scaling and/or processing. The input picture 900 is scaled by a scaling operation 910 that is performed by, for example, one of the SCL modules 308 shown in FIG. 3A. A scaled picture 920 is then captured to memory by a capture operation 930 performed by, for example, one of the CAP modules 320 shown in FIG. 3A. The captured picture is retrieved from memory through a capture retrieval operation 940 performed by, for example, one of the VFD modules 304 shown in FIG. 3A. The retrieval of the captured picture, that is, the manner in which the 3D video data is read from the memory, is performed such that an output picture 950 is generated having a 720p L/R format.
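The geometry of this example can be checked numerically. In a packed 1080p O/U frame each eye occupies 1920x540 pixels, and in a packed 720p L/R frame each eye occupies 640x720; these per-eye dimensions are inferred from the standard 1920x1080 and 1280x720 frame sizes rather than stated in the patent.

```python
# Per-eye dimensions (inferred from standard frame sizes, not from the patent):
ix, iy = 1920, 1080 // 2      # 1080p over-and-under input: full width, half height
ox, oy = 1280 // 2, 720       # 720p left-and-right output: half width, full height

sx = ox / ix                  # 1/3: scale down horizontally
sy = oy / iy                  # 4/3: scale up vertically

lam = (ox / ix) * sy          # simplified eq (27): 4/9
scale_before_capture = lam < 1.0   # matches the capture-side scaling of FIG. 9
```

With λ = 4/9 < 1, the scaling belongs before capture, which is consistent with the fifth configuration 700 used in this example.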

[0083] FIGS. 10A and 10B are block diagrams that illustrate the order in which additional video processing operations may be performed in the processing network 300 when configured for scaling 3D video data, in accordance with embodiments of the invention. Referring to FIG. 10A, there is shown a ninth configuration 1000 of the processing network 300 in which the location of the SCL module 308 is before the CAP module 320. The ninth configuration 1000 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, a first core (P1) module 1002 may be positioned before the SCL module 308, while a second core (P2) module 1004 may be positioned after the SCL module 308. Moreover, a third core (P3) module 1006 may be positioned after the VFD module 304. The various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the ninth configuration 1000.

[0084] Referring to FIG. 10B, there is shown a tenth configuration 1010 of the processing network 300 in which the location of the SCL module 308 is after the VFD module 304. The tenth configuration 1010 may refer to a particular interconnection and/or operation of several of the modules in the processing network 300. In this configuration, additional processing cores or operations may be performed on the 3D video data. For example, the P1 module 1002 may be positioned before the CAP module 320. The P2 module 1004 may be positioned after the VFD module 304 and before the SCL module 308. Moreover, the P3 module 1006 may be positioned after the SCL module 308. As indicated above, the various core modules described herein may refer to processing modules in the processing network 300 such as the MAD module 312 and/or the DNR module 314. Other modules not shown in FIG. 3A, but that may be included in the processing network 300, may also be utilized as core modules in the tenth configuration 1010.

[0085] FIG. 11 is a flow chart that illustrates steps for scaling 3D video data in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 11, there is shown a flow chart 1100 in which, at step 1110, the video processor module 104 in the SoC 100 may receive 3D video data from a source of such data. At step 1120, the video processor module 104 and/or the host processor module 120 may determine whether to scale the 3D video data received before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104.

[0086] At step 1130, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1120 regarding the scaling of the 3D video data. At step 1140, the 3D video data may be scaled by the configured processing network in the video processor module 104.
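The steps of flow chart 1100 can be sketched end to end. Everything below is an illustrative stand-in: the stage callables model the SCL, CAP, and VFD modules, and the decision threshold reuses the simplified ratio of equation (15).

```python
def process_picture(ix, ox, sy, scale, capture, feed):
    """Sketch of flow chart 1100 (steps 1110-1140) for one picture.

    scale, capture, and feed are stand-ins for the SCL, CAP, and VFD
    modules; each takes a picture and returns the processed picture.
    """
    lam = (ox / ix) * sy                       # step 1120: simplified eq (15)
    if lam < 1.0:
        pipeline = [scale, capture, feed]      # step 1130: FIG. 5A ordering
    else:
        pipeline = [capture, feed, scale]      # step 1130: FIG. 5B ordering

    def run(picture):                          # step 1140: apply the stages
        for stage in pipeline:
            picture = stage(picture)
        return picture

    return run
```

In a real device the "configuration" is a crossbar interconnection rather than a Python list, but the ordering decision is the same.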

[0087] FIG. 12 is a flow chart that illustrates steps for scaling 3D video data from multiple sources in the configured processing network 300, in accordance with an embodiment of the invention. Referring to FIG. 12, there is shown a flow chart 1200 in which, at step 1210, the video processor module 104 in the SoC 100 may receive 3D video data from multiple sources of such data. At step 1220, the video processor module 104 and/or the host processor module 120 may determine, for each of the sources, whether to scale the 3D video data received before capture to memory through the video processor module 104 or after capture to memory and subsequent retrieval from memory through the video processor module 104.

[0088] At step 1230, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 shown in FIG. 3A. The configuration may be based on the order or positioning determined in step 1220 regarding the scaling of the 3D video data for each of the sources. In this regard, the processing network may be configured to have multiple paths for processing the 3D video data from the various sources of such data. At step 1240, the 3D video data from each source may be scaled by the configured processing network in the video processor module 104.

[0089] Various embodiments of the invention relate to an integrated circuit, such as the SoC 100 described above with respect to FIG. 1, for example, which may be operable to selectively route and process 3D video data. For example, the processing network 300 described above with respect to FIG. 3A may be utilized in the SoC 100 to route and process 3D video data. The integrated circuit may comprise multiple devices, such as the various modules in the processing network 300, for example, which may be operable to be selectively interconnected to enable the routing and the processing of 3D video data. The integrated circuit may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory. Moreover, the integrated circuit may be operable to selectively interconnect one or more of the multiple devices based on the determination.

[0090] The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor. The input format of the 3D video data may be an L/R input format or an O/U input format, and the output format of the 3D video data may be an L/R output format or an O/U output format. The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data. The integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.

[0091] The selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.

[0092] In another embodiment of the invention, a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.

[0093] Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.

[0094] The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

[0095] While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

* * * * *

