US20110134218A1 - Method and system for utilizing mosaic mode to create 3d video - Google Patents
- Publication number: US20110134218A1 (application US 12/963,035)
- Authority: US (United States)
- Prior art keywords: video data, video, sources, picture, format
- Legal status (an assumption, not a legal conclusion): Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/007—Aspects relating to detection of stereoscopic image format, e.g. for adaptation to the display format
Definitions
- Certain embodiments of the invention relate to the processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for utilizing mosaic mode to create 3D video.
- FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to utilize mosaic mode to create multiple windows in an output video picture, in accordance with an embodiment of the invention.
- FIG. 2 is a diagram that illustrates various packing schemes for 3D video data, in accordance with embodiments of the invention.
- FIG. 3 is a block diagram that illustrates a processing network that is operable to handle 3D video data, in accordance with an embodiment of the invention.
- FIG. 4 is a diagram that illustrates the flow of data in mosaic mode, in accordance with an embodiment of the invention.
- FIG. 5 is a diagram that illustrates an exemplary output video picture with multiple windows that is generated utilizing mosaic mode, in accordance with an embodiment of the invention.
- FIG. 6 is a diagram that illustrates the storage of decoded video data from multiple sources in corresponding buffers in mosaic mode, in accordance with an embodiment of the invention.
- FIG. 7 is a diagram that illustrates the generation of an output video picture having a 2D output format in mosaic mode, in accordance with an embodiment of the invention.
- FIGS. 8A-8C are diagrams that illustrate the generation of an output video picture having a 3D output format in mosaic mode, in accordance with embodiments of the invention.
- FIG. 9 is a flow chart that illustrates steps for generating output video pictures with multiple windows utilizing mosaic mode, in accordance with an embodiment of the invention.
- Certain embodiments of the invention may be found in a method and system for utilizing a mosaic mode to create 3D video.
- Various embodiments of the invention may relate to a processor comprising a video feeder, such as an MPEG video feeder, for example, wherein the processor may receive video data from multiple sources through the video feeder.
- the video data from one or more of those sources may comprise 3D video data.
- the video data from each source may be stored in a corresponding different area in memory during a capture time for a single picture.
- Each of the different areas in memory may correspond to a different window of multiple windows in an output video picture.
- the processor may store the video data from each source in memory in either a 2D format or a 3D format, based on a format of the output video picture.
- left-eye and right-eye information may be stored in different portions of memory.
- the video data may be read from the different areas in memory to a single buffer during a feed time for a single picture before being utilized to generate the output video picture.
- a user may be provided with multiple windows in an output video picture to concurrently display video from different sources, including 3D video.
- the multi-windowed output video picture may have a 2D output format or a 3D output format based on, for example, the characteristics of the device in which the output video picture is to be displayed, reproduced, and/or stored. That is, while the different sources may provide 2D video data and/or 3D video data, the video data in each of the windows in the output video picture may be in the same format.
- FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to utilize mosaic mode to create multiple windows in an output video picture, in accordance with an embodiment of the invention.
- the SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content. Such signals may include, for example, CVBS (composite, blanking, and sync), S-video (separate video), HDMI (high-definition multimedia interface), component, PC (personal computer), SIF (source input format), and RGB (red, green, blue) signals.
- Such signals may be received by the SoC 100 from one or more video sources communicatively coupled to the SoC 100 .
- the SoC 100 may also be operable to receive and/or process graphics content from one or more sources of such content.
- the SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage.
- output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technologies.
- the characteristics of the output signals such as pixel rate, resolution, and/or whether the output format is a 2D output format or a 3D output format, for example, may be based on the type of output device to which those signals are to be provided.
- the output signals may comprise one or more
- the host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 . For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100 .
- the host processor module 120 may be operable to control and/or select a mode of operation for the SoC 100 . For example, the host processor module 120 may enable a mosaic mode for the SoC 100 .
- the memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100 .
- the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with the processing of video data during mosaic mode.
- the memory module 130 may store graphics data that may be retrieved by the SoC 100 for mixing with video data.
- the graphics data may comprise 2D graphics data and/or 3D graphics data for mixing with video data in the SoC 100 .
- the SoC 100 may comprise an interface module 102 , a video processor module 104 , and a core processor module 106 .
- the SoC 100 may be implemented as a single integrated circuit comprising the components listed above.
- the interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content and/or graphics content.
- the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100 .
- the SoC 100 may communicate one or more signals that comprise a sequence of output video pictures comprising multiple windows to concurrently display video from different sources.
- the format of the multi-windowed output video pictures may be based on, for example, the characteristics of the output devices.
- the video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data and/or graphics data.
- the video processor module 104 may be operable to support multiple formats for video data and/or graphics data, including multiple input formats and/or multiple output formats.
- the video processor module 104 may be operable to perform various types of operations on 2D video data and/or 3D video data. For example, when video data from several sources is received by the video processor module 104 , and the video data from any one of those sources may comprise 2D video data or 3D video data, the video processor module 104 may generate output video comprising a sequence of output video pictures having multiple windows, wherein each of the windows in an output video picture corresponds to a particular source of video data.
- the output video pictures that are generated by the video processor module 104 may be in a 2D output format or in a 3D output format in accordance with the device in which the output video pictures are to be displayed, reproduced, and/or stored. That is, even when a portion of the sources provide video data comprising 2D video data and another portion of the sources provide video data comprising 3D video data, the output video pictures generated by the video processor module 104 may be generated in either a 2D output format or a 3D output format.
- when the video content comprises audio data, the video processor module 104 , and/or another module in the SoC 100 , may be operable to handle the audio data.
- the core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100 .
- the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content and/or graphics content.
- the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100 .
- the core processor module 106 may comprise memory that may be utilized during the processing of video data and/or graphics data by the video processor module 104 .
- the core processor module 106 may be operable to control and/or enable mosaic mode in the SoC 100 . Such control and/or enabling may be performed in coordination with the host processor module 120 , for example.
- mosaic mode may be enabled in the SoC 100 by the host processor module 120 and/or the core processor module 106 , for example.
- the SoC 100 may receive video data from more than one source through the interface module 102 .
- the video data from any one of the sources may comprise 2D video data or 3D video data.
- the video processor module 104 may decode the video data and may store the decoded video data from each source in a different buffer in memory.
- the video processor module 104 may comprise a video decoder (not shown), such as an MPEG decoder, for example.
- the memory may be dynamic random access memory (DRAM), which may be part of the memory module 130 , for example, and/or part of the SoC 100 , such as embedded memory in the core processor module 106 , for example.
- a video feeder within the video processor module 104 , such as an MPEG video feeder, for example, may be utilized to obtain the video data from the buffers and feed the video data for capture into memory.
- the video data from each of the buffers may be fed and captured separately into a corresponding area in memory.
- the capture of the video data from the various buffers may occur during a single picture capture time, that is, may occur during the time it takes to capture a single picture into memory by the SoC 100 .
- the capture of the video data allows for the generation of the multiple windows in the output video picture by storing the video data from a particular source in an area of memory that corresponds to a particular window in the output video picture.
- the windowing operation associated with mosaic mode occurs in connection with the capture of the video data to memory.
- the video data may be stored in memory in a 2D format or in a 3D format, based on a format of the output video picture.
- the video data may be stored in memory in a 3D format, left-eye and right-eye information may be stored in different portions of memory.
- the video data may be read from memory to a single buffer during a video feed process for a single picture, that is, during the time it takes to feed a single picture from memory by the SoC 100 . Once the video data has been placed in the single buffer, it may be subsequently processed to generate the output video picture with multiple windows for communication to an output device through the interface module 102 .
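the flow described above can be sketched in pseudocode form. The following is an illustrative model only (the names and data layout are ours, not the patent's): decoded pictures from several sources are "captured" into per-window regions of one canvas in memory during a single picture capture time, after which the whole canvas can be read out as a single picture.

```python
# Frame model: a picture is a list of rows; each row is a list of pixel values.

def blank_canvas(width, height, fill=0):
    # one picture's worth of memory for the multi-window output
    return [[fill] * width for _ in range(height)]

def capture_window(canvas, picture, x, y):
    """Copy one decoded picture into the canvas region whose top-left is (x, y)."""
    for row, line in enumerate(picture):
        canvas[y + row][x:x + len(line)] = line
    return canvas

def mosaic_capture(pictures, layout, width, height):
    """One 'picture capture time': a separate capture per source buffer."""
    canvas = blank_canvas(width, height)
    for picture, (x, y) in zip(pictures, layout):
        capture_window(canvas, picture, x, y)
    return canvas

# Four 2x2 source pictures placed in a 4x4 canvas (a 2x2 grid of windows).
sources = [[[s] * 2] * 2 for s in (1, 2, 3, 4)]
layout = [(0, 0), (2, 0), (0, 2), (2, 2)]
canvas = mosaic_capture(sources, layout, width=4, height=4)
# canvas -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

the key point mirrored here is that the windowing happens at capture time: each source lands in the memory area that corresponds to its window, so the later feed step can treat the canvas as one ordinary picture.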
- FIG. 2 is a diagram that illustrates various packing schemes for 3D video data, in accordance with embodiments of the invention.
- a first packing scheme or first format 200 for 3D video data is shown.
- a second packing scheme or second format 210 for 3D video data illustrates the arrangement of the left-eye content (L) and the right-eye content (R) in a 3D picture.
- the left-eye content or information may also be referred to as a left 3D picture and the right-eye content or information may also be referred to as a right 3D picture.
- a 3D picture may correspond to a 3D frame or a 3D field in a video sequence, whichever is appropriate.
- the L and R portions in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format.
- the L and R portions in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format.
- Another arrangement, not shown in FIG. 2 , may be one in which the L portion is in a first 3D picture and the R portion is in a second 3D picture. Such an arrangement may be referred to as a sequential format because the 3D pictures are processed and/or handled sequentially.
- Both the first format 200 and the second format 210 may be utilized as native formats by the SoC 100 to process 3D video data.
- the SoC 100 may also be operable to utilize the sequential format as a native format, which may be typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210 .
- the SoC 100 may also support converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200 . Such conversion may be associated with various operations performed by the SoC 100 , including but not limited to operations associated with mosaic mode.
- the SoC 100 may support additional native formats other than the first format 200 , the second format 210 , and the sequential format, for example.
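the two native packings and the conversion between them can be illustrated with a small sketch. This is our own toy model (a picture as a list of rows), not the SoC's implementation:

```python
def split_lr(picture):
    """Side-by-side (L/R) packing: the left half of each row is L, the right half is R."""
    half = len(picture[0]) // 2
    left = [row[:half] for row in picture]
    right = [row[half:] for row in picture]
    return left, right

def pack_ou(left, right):
    """Top-and-bottom (O/U) packing: the L picture on top, the R picture underneath."""
    return left + right

def lr_to_ou(picture):
    # conversion from the first format (L/R) to the second format (O/U)
    return pack_ou(*split_lr(picture))

lr = [["L", "L", "R", "R"],
      ["L", "L", "R", "R"]]
ou = lr_to_ou(lr)
# ou -> [["L", "L"], ["L", "L"], ["R", "R"], ["R", "R"]]
```

the reverse conversion (O/U to L/R) would split the picture at the row midpoint and rejoin the halves side by side; the sequential format would simply keep `left` and `right` as two separate pictures handled one after the other.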
- FIG. 3 is a block diagram that illustrates a processing network that is operable to handle 3D video data, in accordance with an embodiment of the invention.
- a processing network 300 that may be part of the video processor module 104 in the SoC 100 , for example.
- the processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data.
- the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data in accordance with various modes of operation, including mosaic mode.
- the various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120 .
- the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data.
- the processing network 300 may comprise an MPEG feeder (MFD) module 302 , multiple video feeder (VFD) modules 304 , an HDMI module 306 , crossbar modules 310 a and 310 b , multiple scaler (SCL) modules 308 , a motion-adaptive deinterlacer (MAD) module 312 , a digital noise reduction (DNR) module 314 , multiple capture (CAP) modules 320 , compositor (CMP) modules 322 a and 322 b , and an MPEG decoder module 324 .
- DRAM may be utilized by the processing network 300 to handle storage of video data and/or graphics data during various operations.
- DRAM may be part of the memory module 130 described above with respect to FIG. 1 .
- the DRAM may be part of memory embedded in the SoC 100 .
- the references to a video encoder (not shown) in FIG. 3 may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example.
- Each of the crossbar modules 310 a and 310 b may comprise multiple input ports and multiple output ports.
- the crossbar modules 310 a and 310 b may be configured such that any one of the input ports may be connected to one or more of the output ports.
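the routing behavior described above (any input port connectable to one or more output ports) can be modeled with a minimal sketch. This is a hypothetical illustration of the routing rule, not the crossbar's actual design:

```python
class Crossbar:
    """Toy model: each output port is driven by at most one input port,
    but one input port may fan out to many output ports."""

    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.route = {}                      # output port -> input port

    def connect(self, in_port, out_ports):
        for out in out_ports:                # fan-out: one input, many outputs
            self.route[out] = in_port

    def drive(self, inputs):
        """Given one value per input port, return the value seen on each routed output."""
        return {out: inputs[self.route[out]] for out in self.route}

xbar = Crossbar(n_in=4, n_out=4)
xbar.connect(0, [0, 2])                      # input 0 drives outputs 0 and 2
xbar.connect(1, [1])
outputs = xbar.drive({0: "mfd", 1: "vfd"})
# outputs -> {0: "mfd", 2: "mfd", 1: "vfd"}
```

reconfiguring `route` on a picture-by-picture basis corresponds to the dynamic interconnection of processing paths described for the processing network 300.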
- the crossbar modules 310 a and 310 b may enable pass-through connections 316 between one or more output ports of the crossbar module 310 a and corresponding input ports of the crossbar module 310 b .
- the crossbar modules 310 a and 310 b may enable feedback connections 318 between one or more output ports of the crossbar module 310 b and corresponding input ports of the crossbar module 310 a .
- the configuration of the crossbar modules 310 a and/or 310 b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
- one or more processing paths may be configured in accordance with a mode of operation, such as mosaic mode, for example.
- the MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
- the video data read by the MFD module 302 may have been stored in memory after being generated by the MPEG decoder module 324 .
- the MFD module 302 may be utilized to feed video data from multiple sources to one of the CAP modules 320 , for example.
- the MPEG decoder module 324 may be operable to decode video data received through one or more bit streams.
- the MPEG decoder module 324 may receive up to N bit streams, BS 1 , . . . , BSN, corresponding to N different sources of video data.
- Each bit stream may comprise 2D video data or 3D video data, depending on the source.
- when the MPEG decoder module 324 decodes a single bit stream, the decoded video data may be stored in a single buffer or area in memory.
- when the MPEG decoder module 324 decodes more than one bit stream, the decoded video data from each bit stream may be stored in a separate buffer or area in memory.
- the MPEG decoder module 324 may be utilized during mosaic mode to decode video data from several sources and to store the decoded video data in separate buffers.
- the MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats supported by the processing network 300 .
- the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, and a sequential format.
- the decoded video data may be stored in memory in accordance with the format in which it is provided by the MPEG decoder module 324 .
- each buffer utilized in mosaic mode may be smaller than the size of a single buffer utilized to store decoded video data from a single source during a different mode of operation.
- Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310 a .
- the video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300 .
- the HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310 a .
- the HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310 a at another data rate.
- Each SCL module 308 may be operable to scale video data received from the crossbar module 310 a and provide the scaled video data to the crossbar module 310 b .
- the MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310 a , including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310 b .
- the DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310 a , including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310 b .
- the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308 .
- Each CAP module 320 may be operable to capture video data from the crossbar module 310 b and store the captured video data in memory.
- One of the CAP modules 320 may be utilized during mosaic mode to capture video data stored in one or more buffers and fed to the CAP module 320 by the MFD module 302 .
- the video data in one of the buffers is fed and captured separately from the video data in another buffer. That is, instead of a single capture being turned on for all buffers, a separate capture is turned on for each buffer, with all the captures occurring during a single picture capture time.
- the video data stored in each buffer may be captured to different areas in memory that correspond to different windows in the output video picture.
- Each of the CMP modules 322 a and 322 b may be operable to combine or mix video data received from the crossbar module 310 b . Moreover, each of the CMP modules 322 a and 322 b may be operable to combine or mix video data received from the crossbar module 310 b with graphics data. For example, the CMP module 322 a may be provided with a graphics feed, Gfxa, for mixing with video data received from the crossbar module 310 b . Similarly, the CMP module 322 b may be provided with a graphics feed, Gfxb, for mixing with video data received from the crossbar module 310 b.
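the mixing performed by a CMP module can be sketched as a per-pixel blend of a graphics feed over video. The "source over" alpha blend below is an assumption made for illustration; the text does not specify the mixing equation:

```python
def blend_pixel(video, graphic, alpha):
    """alpha in [0, 1]: 0 keeps the video pixel, 1 shows only the graphic."""
    return round(alpha * graphic + (1.0 - alpha) * video)

def composite(video_row, graphics_row, alpha_row):
    # mix one row of a graphics feed (e.g. Gfxa) over one row of video
    return [blend_pixel(v, g, a)
            for v, g, a in zip(video_row, graphics_row, alpha_row)]

row = composite([100, 100, 100], [200, 200, 200], [0.0, 0.5, 1.0])
# row -> [100, 150, 200]
```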
- FIG. 4 is a diagram that illustrates the flow of data in mosaic mode, in accordance with an embodiment of the invention.
- a processing path 400 that may be configured by interconnecting several modules in the processing network 300 described above with respect to FIG. 3 .
- the processing path 400 may be utilized to process 2D video data and/or 3D video data to generate output video pictures with multiple windows to concurrently display video data from different sources.
- the processing path 400 illustrates a flow of data within the processing network 300 when operating in mosaic mode.
- bit streams BS 1 , . . . , BSN
- Each bit stream may correspond to a different source of video data and to a different window in the output video picture.
- the video data in the bit streams may be decoded by the MPEG decoder module 324 and stored in separate buffers in a memory 410 .
- the decoded video data associated with each bit stream may be in one of the multiple formats supported by the processing network 300 .
- the memory 410 may correspond to the memory described above with respect to FIG. 3 .
- the MFD module 302 may feed the video data stored in the buffers to the CAP module 320 for capture in the memory 410 .
- the feed and capture process may comprise feeding and capturing the video data in each of the buffers separately.
- the feed and capture of the video data in all of the buffers may occur during a single picture capture time by the CAP module 320 .
- the feed and capture of the video data may occur in a different portion of the memory 410 than the portion utilized for buffering the output from the MPEG decoder module 324 .
- different memories may be utilized for video data buffering and for video data capture.
- the video data may be captured in such a manner that the captured video data from each source is stored within the memory 410 in an area of memory that corresponds to a window in the output video picture.
- the captured video data may be stored in the memory 410 in a 2D format.
- the captured video data may be stored in the memory 410 in a 3D format.
- the left-eye information and the right-eye information in the video data may be stored in different portions of the memory 410 , for example.
- the VFD module 304 may read and feed the captured video data to a single buffer.
- the operation of the VFD module 304 during mosaic mode may be substantially the same as in other modes of operation.
- the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 2D output format.
- the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 3D output format.
- the 3D output format may be a 3D L/R output format or a 3D O/U output format.
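the feed step can be sketched as follows. We assume, as the text describes, that for a 3D output the left-eye and right-eye canvases sit in separate areas of memory; the function names are ours:

```python
def feed_2d(canvas):
    # 2D output: the captured canvas is fed as a single picture, as-is
    return [row[:] for row in canvas]

def feed_3d_ou(left_canvas, right_canvas):
    # 3D over/under output: left-eye rows first, right-eye rows underneath,
    # so the single buffer holds one O/U-packed output picture
    return [row[:] for row in left_canvas] + [row[:] for row in right_canvas]

left = [["L1", "L2"]]
right = [["R1", "R2"]]
picture = feed_3d_ou(left, right)
# picture -> [["L1", "L2"], ["R1", "R2"]]
```

a 3D L/R output would instead join corresponding rows of the two canvases side by side; in each case the read order at feed time, not any later rearrangement, determines the output packing.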
- the video data in the single buffer may be communicated to a compositor module, such as the CMP module 322 a , for example, which in turn provides an input to a video encoder to generate the output video picture in the appropriate output format.
- the processing path 400 is provided by way of illustration and not of limitation. Other data flow paths may be implemented during mosaic mode that may comprise more or fewer of the various devices, components, modules, blocks, circuits, or the like, of the processing network 300 to enable the generation of output video pictures comprising multiple windows for concurrently displaying video data from different sources.
- FIG. 5 is a diagram that illustrates an exemplary output video picture with multiple windows that is generated utilizing mosaic mode, in accordance with an embodiment of the invention.
- the output video picture 500 may be representative of the type of output video picture that may be generated by the SoC 100 when operating in mosaic mode.
- the output video picture 500 may comprise 12 windows associated with 12 different sources of video data. That is, the video data that is to be displayed in each of the windows in the output video picture 500 is generated by the SoC 100 from video data received from a different source. For example, the video data for “Window 1 ” is received by the SoC 100 from a source S 1 . Similarly, the video data for “Window 2 ,” . . . , “Window 12 ,” is received by the SoC 100 from sources S 2 , . . . , S 12 , respectively.
- the output video picture 500 is provided by way of illustration and not of limitation.
- the SoC 100 may generate output video pictures that may comprise more or fewer windows than those shown in the output video picture 500 .
- the size, layout, and/or arrangement of the video windows need not follow the size, layout, and/or arrangement of the output video picture 500 .
- the windows in an output video picture may be of different sizes.
- the windows in an output video picture need not be in a grid pattern.
- the SoC 100 may be operable to dynamically change the characteristics and/or the video data associated with any one window in the output video picture 500 .
- FIG. 6 is a diagram that illustrates the storage of decoded video data from multiple sources in corresponding buffers in mosaic mode, in accordance with an embodiment of the invention.
- the MPEG decoder module 324 may receive bit streams from sources 1 , . . . , N.
- the MPEG decoder module 324 may decode the video data in the N bit streams and may respectively generate decoded video data 1 , . . . , decoded video data N.
- the decoded video data 1 may be stored in a buffer 610 in the memory 410 , for example.
- the buffer 610 may correspond to an area of the memory 410 in which to store the decoded video data from source 1 .
- each of the decoded video data 2 , . . . , decoded video data N may be stored in corresponding buffers 610 in the memory 410 .
- the characteristics of each of the buffers 610 may vary based on the number, size, layout, and/or configuration of the windows in the output video picture that is to be generated by the SoC 100 while operating in mosaic mode.
- the MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats.
- the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, and a sequential format.
- the decoded video data may be stored in the buffers 610 in the memory 410 in accordance with the format in which it is provided by the MPEG decoder module 324 .
- FIG. 7 is a diagram that illustrates the generation of an output video picture having a 2D output format in mosaic mode, in accordance with an embodiment of the invention.
- in FIG. 7 there are shown four sources of video data 710 , which are arranged in the figure in the manner in which the video data from each of the sources is to be displayed in an output video picture 730 .
- the video data associated with a first source, S 1 , which comprises 2D video data, is to be displayed at the top-left window in the output video picture 730 .
- the video data associated with a second source, S 2 , which comprises 3D video data, is to be displayed at the top-right window in the output video picture 730 .
- the video data associated with a third source, S 3 which also comprises 3D video data, is to be displayed at the bottom-left window in the output video picture 730 .
- the video data associated with a fourth source, S 4 which also comprises 2D video data, is to be displayed at the bottom-right window in the output video picture 730 .
- the video data from the sources S 1 , S 2 , S 3 , and S 4 is provided to the MPEG decoder module 324 through bit streams BS 1 , BS 2 , BS 3 , and BS 4 , respectively.
- the MPEG decoder module 324 may decode the video data and provide the decoded video data to the buffers 610 in the memory 410 for storage.
- a first of the buffers 610 , which is labeled B 1 , may store 2D video data, 2 D 1 , from the first source, S 1 .
- a second of the buffers 610 which is labeled B 2 , may store 3D video data from the second source, S 2 .
- the 3D video data from the second source, S 2 may comprise L 2 video data and R 2 video data, which correspond to the left-eye and the right-eye information, respectively.
- the third of the buffers 610 which is labeled B 3 , may store 3D video data from the third source, S 3 .
- the 3D video data from the third source, S 3 may comprise L 3 video data and R 3 video data, which correspond to the left-eye information and right-eye information, respectively.
- a fourth of the buffers 610 which is labeled B 4 , may store 2D video data, 2 D 4 , from the fourth source, S 4 .
- the feed and capture of the video data in the buffers 610 may be performed by, for example, the MFD module 302 and one of the CAP modules 320 .
- the capture may be turned on four times, one time for each of the buffers 610 .
- the capture of the video data in all four buffers 610 to the memory 410 may be performed in a single picture capture time.
- the windowing, that is, the arrangement of the video data in the memory 410 to construct or lay out the windows in the output video picture 730 , may be carried out by capturing the video data from a particular source in an area of the memory 410 that corresponds to the window in the output video picture 730 in which the video data from that particular source is to be displayed.
- FIG. 7 shows a canvas 720 that corresponds to the area in the memory 410 in which the contents from the buffers 610 are to be stored.
- the canvas 720 has a 2D format and comprises a top-left memory area, a top-right memory area, a bottom-left memory area, and a bottom-right memory area, each of which corresponds to one of the windows in the output video picture 730 .
- the video data from each of the buffers 610 may be stored in a corresponding memory area in the canvas 720 .
- the 2D video data, 2 D 1 , associated with the first source, S 1 , may be stored in the top-left memory area of the canvas 720 .
- the L 2 video data and the R 2 video data associated with the second source, S 2 , are not in a 2D format.
- the L 2 video data may be used and stored in the top-right memory area of the canvas 720 while the R 2 video data may be discarded.
- the L 3 video data and the R 3 video data associated with the third source, S 3 , are not in a 2D format.
- the L 3 video data may be used and stored in the bottom-left memory area of the canvas 720 while the R 3 video data may be discarded.
- the 2D video data, 2 D 4 associated with the fourth source, S 4 , may be stored in the bottom-right memory area of the canvas 720 .
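The quadrant-based windowing just described can be modeled in a few lines of Python. This is an illustrative sketch of the FIG. 7 scheme only, not the patent's hardware implementation; the `make_picture` and `capture_2d` names and the dictionary layout of a source are assumptions made here for clarity:

```python
# Illustrative model of mosaic-mode capture into a 2D canvas (FIG. 7).
# Pictures are modeled as small 2D lists whose cells carry a source tag.

def make_picture(tag, h, w):
    """Create an h x w 'picture' filled with a tag identifying its source."""
    return [[tag] * w for _ in range(h)]

def capture_2d(sources, win_h, win_w):
    """Capture four sources into a 2D canvas of 2x2 windows.

    For a 3D source only the left-eye picture is used; the right-eye
    picture is discarded, as described for sources S2 and S3.
    """
    canvas = [[None] * (2 * win_w) for _ in range(2 * win_h)]
    # Window layout: 0 -> top-left, 1 -> top-right, 2 -> bottom-left, 3 -> bottom-right.
    for idx, src in enumerate(sources):
        pic = src["left"] if src["is_3d"] else src["pic"]  # R data of 3D sources is dropped
        row0 = (idx // 2) * win_h
        col0 = (idx % 2) * win_w
        for r in range(win_h):
            for c in range(win_w):
                canvas[row0 + r][col0 + c] = pic[r][c]
    return canvas

# Four sources: S1 (2D), S2 (3D), S3 (3D), S4 (2D), each decoded into its own buffer.
sources = [
    {"is_3d": False, "pic": make_picture("2D1", 2, 2)},
    {"is_3d": True, "left": make_picture("L2", 2, 2), "right": make_picture("R2", 2, 2)},
    {"is_3d": True, "left": make_picture("L3", 2, 2), "right": make_picture("R3", 2, 2)},
    {"is_3d": False, "pic": make_picture("2D4", 2, 2)},
]
canvas = capture_2d(sources, 2, 2)
```

Only the left-eye pictures of the 3D sources reach the canvas, mirroring the discarding of the R 2 and R 3 data described above.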
- the VFD module 304 may be utilized to read and feed the entire contents of the canvas 720 to a single buffer for further processing and to subsequently generate the output video picture 730 .
- a compositor module such as the CMP module 322 a or the CMP module 322 b , for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 730 in mosaic mode.
- FIGS. 8A-8C are diagrams that illustrate the generation of an output video picture having a 3D output format in mosaic mode, in accordance with embodiments of the invention.
- referring to FIG. 8A, there are shown the sources 710 , the MPEG decoder module 324 , and the buffers 610 as described above with respect to FIG. 7 .
- a left picture canvas 810 and a right picture canvas 820 in the memory 410 are also shown.
- the left picture canvas 810 may correspond to an area in the memory 410 in which the left-eye information from the buffers 610 may be stored.
- the right picture canvas 820 may correspond to an area in the memory 410 in which the right-eye information from the buffers 610 may be stored.
- Both the left picture canvas 810 and the right picture canvas 820 comprise a top-left memory area, a top-right memory area, a bottom-left memory area, and a bottom-right memory area, each of which corresponds to one of the windows in an output video picture 830 .
- the output video picture 830 may have a 3D L/R output format.
- the video data from the buffers 610 may be stored in the left picture canvas 810 and the right picture canvas 820 as appropriate.
- the 2D video data, 2 D 1 associated with the first source, S 1 , which is not in a 3D format, may be stored in the top-left memory area of both the left picture canvas 810 and the right picture canvas 820 .
- the L 2 video data and the R 2 video data associated with the second source, S 2 may be stored in the top-right memory area of the left picture canvas 810 and in the top-right memory area of the right picture canvas 820 , respectively.
- the L 3 video data and the R 3 video data associated with the third source, S 3 may be stored in the bottom-left memory area of the left picture canvas 810 and in the bottom-left memory area of the right picture canvas 820 , respectively.
- the 2D video data, 2 D 4 associated with the fourth source, S 4 , which is not in a 3D format, may be stored in the bottom-right memory area of both the left picture canvas 810 and the right picture canvas 820 .
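The double-canvas capture of FIG. 8A can be sketched similarly. Again, this is an illustrative Python model under assumed data structures (`blit`, `capture_3d`, and the source dictionaries are names invented here), not the actual hardware behavior:

```python
# Illustrative model of capture into left/right picture canvases for a 3D output (FIG. 8A).

def make_picture(tag, h, w):
    return [[tag] * w for _ in range(h)]

def blit(canvas, pic, row0, col0):
    """Copy a picture into a canvas at the given window origin."""
    for r, row in enumerate(pic):
        for c, px in enumerate(row):
            canvas[row0 + r][col0 + c] = px

def capture_3d(sources, win_h, win_w):
    """Fill a left canvas and a right canvas, one window per source.

    3D sources split into the left-eye and right-eye canvases; 2D sources
    are duplicated into the same window of both canvases.
    """
    left = [[None] * (2 * win_w) for _ in range(2 * win_h)]
    right = [[None] * (2 * win_w) for _ in range(2 * win_h)]
    for idx, src in enumerate(sources):
        row0, col0 = (idx // 2) * win_h, (idx % 2) * win_w
        if src["is_3d"]:
            blit(left, src["left"], row0, col0)
            blit(right, src["right"], row0, col0)
        else:  # 2D data goes into both canvases
            blit(left, src["pic"], row0, col0)
            blit(right, src["pic"], row0, col0)
    return left, right

sources = [
    {"is_3d": False, "pic": make_picture("2D1", 2, 2)},
    {"is_3d": True, "left": make_picture("L2", 2, 2), "right": make_picture("R2", 2, 2)},
    {"is_3d": True, "left": make_picture("L3", 2, 2), "right": make_picture("R3", 2, 2)},
    {"is_3d": False, "pic": make_picture("2D4", 2, 2)},
]
left_canvas, right_canvas = capture_3d(sources, 2, 2)
```

Because a 2D source is written to the same window of both canvases, its window carries identical left-eye and right-eye information and therefore appears flat in the 3D output.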
- the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and of the right picture canvas 820 to a single buffer for further processing and to subsequently generate the output video picture 830 .
- the output video picture 830 comprises a left picture 832 L and a right picture 832 R in a side-by-side configuration, each of which comprises four windows corresponding to the four sources of the video data.
- a compositor module such as the CMP module 322 a or the CMP module 322 b , for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 830 in mosaic mode.
- the feeding and capturing of video data into the left picture canvas 810 may occur before feeding and capturing of video data into the right picture canvas 820 .
- the video data in the buffers 610 may be fed and captured sequentially to the memory 410 .
- the video data from the first of the buffers 610 , B 1 may be fed and captured to the memory 410 first
- the video data from the second of the buffers 610 , B 2 may be fed and captured to the memory 410 second
- the video data from the third of the buffers 610 , B 3 may be fed and captured to the memory 410 next
- the video data from the fourth of the buffers 610 , B 4 may be fed and captured to the memory 410 last.
- the left-eye information stored in one of the buffers 610 may be fed and captured to the memory 410 before the right-eye information stored in that same buffer is fed and captured to the memory 410 .
- referring to FIG. 8B, there are shown the sources 710 , the MPEG decoder module 324 , the buffers 610 , the left picture canvas 810 , and the right picture canvas 820 as described above with respect to FIG. 8A .
- the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and of the right picture canvas 820 to a single buffer for further processing and to subsequently generate an output video picture 840 .
- the output video picture 840 may have a 3D O/U output format.
- the output video picture 840 may comprise a left picture 842 L and a right picture 842 R in a top-and-bottom configuration, each of which comprises four windows corresponding to the four sources of the video data.
- a compositor module such as the CMP module 322 a or the CMP module 322 b , for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 840 in mosaic mode.
- referring to FIG. 8C, the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and of the right picture canvas 820 to a single buffer for further processing and to subsequently generate a first output video picture 850 and a second output video picture 855 .
- the first output video picture 850 and the second output video picture 855 may correspond to a 3D sequential output format.
- the first output video picture 850 may correspond to a left picture in the 3D sequential output format and may be generated based on the contents of the left picture canvas 810 .
- the second output video picture 855 may correspond to a right picture in the 3D sequential output format and may be generated based on the contents of the right picture canvas 820 .
- the second output video picture 855 may be generated before the first output video picture 850 .
- a compositor module such as the CMP module 322 a or the CMP module 322 b , for example, may be utilized to perform a masking operation in connection with the generation of the first output video picture 850 and the second output video picture 855 in mosaic mode.
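The three 3D output packings described for FIGS. 8A-8C differ only in how the two canvases are combined into the final picture(s). A minimal sketch, in which the `pack_output` helper is hypothetical and canvases are modeled as row lists:

```python
# Illustrative combination of a left canvas and a right canvas into the
# L/R (side-by-side), O/U (over-under), or sequential output of FIGS. 8A-8C.

def pack_output(left, right, mode):
    """Combine a left canvas and a right canvas into output picture(s)."""
    if mode == "L/R":   # side-by-side: rows concatenated horizontally
        return [lrow + rrow for lrow, rrow in zip(left, right)]
    if mode == "O/U":   # over-under: left picture on top, right underneath
        return left + right
    if mode == "sequential":  # two separate pictures, left first then right
        return [left, right]
    raise ValueError(mode)

left = [["L"] * 4 for _ in range(4)]
right = [["R"] * 4 for _ in range(4)]
side_by_side = pack_output(left, right, "L/R")
over_under = pack_output(left, right, "O/U")
seq = pack_output(left, right, "sequential")
```

The side-by-side result doubles the width, the over-under result doubles the height, and the sequential result simply emits the two canvases as consecutive pictures.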
- FIGS. 7 , 8 A, 8 B, and 8 C are provided by way of illustration and not of limitation.
- One or more embodiments of the invention may be implemented in which the number of sources and/or the characteristics of the windows in the output video picture may be different from those described above.
- the various embodiments of the invention described herein may also be utilized when the video data from all of the sources may be in a 2D format or when the video data from all of the sources may be in a 3D format, for example.
- FIG. 9 is a flow chart that illustrates steps for generating output video pictures with multiple windows utilizing mosaic mode, in accordance with an embodiment of the invention.
- the MPEG decoder module 324 may receive video data from multiple sources.
- the video data after being decoded by the MPEG decoder module 324 , may be stored in separate buffers based on the source of the video data.
- the video data stored in the various buffers may be fed and captured utilizing the MFD module 302 and one of the CAP modules 320 . Multiple captures may be turned on based on the number of buffers from which video data is being captured. Moreover, the capture of the video data in all of the buffers may occur during a single picture capture time.
- the video data may be captured to memory according to the output format of the output video picture. For example, when the output video picture is to be in a 2D output format, the video data associated with a source, whether it is 2D video data or 3D video data, may be captured into memory utilizing a 2D canvas such as the 2D canvas 720 .
- similarly, when the output video picture is to be in a 3D output format, the video data associated with a source may be captured into memory utilizing a 3D canvas such as the left picture canvas 810 and the right picture canvas 820 .
- the left-eye information and the right-eye information may be stored in different portions of the memory.
- the windowing associated with the output video picture may be achieved through the capturing of the video data into portions of memory that correspond to a particular window in the output video picture.
- the VFD module 304 may be utilized to read and feed the captured video data stored in memory to a single buffer for further processing.
- the video feed may be performed in a single picture feed time.
- the output video picture may be generated in the appropriate output format from the video data in the single buffer.
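The steps above can be summarized in a short end-to-end sketch. Every stage here is a software stand-in for the hardware modules named in the text (the decoder, the MFD/CAP capture path, the VFD feed, and the output generation), and all function and key names are illustrative:

```python
# High-level, illustrative model of the FIG. 9 flow.

def mosaic_pipeline(bitstreams, output_format):
    # Decode each stream into its own buffer (per-source buffering).
    buffers = [f"decoded({bs})" for bs in bitstreams]
    # Feed/capture every buffer into its window area of memory; in the
    # described system all captures fit within a single picture capture time.
    memory = {f"window{i}": buf for i, buf in enumerate(buffers)}
    # Read the captured windows into a single buffer in one picture feed time.
    single_buffer = [memory[k] for k in sorted(memory)]
    # Generate the output picture in the requested output format.
    return {"format": output_format, "windows": single_buffer}

picture = mosaic_pipeline(["BS1", "BS2", "BS3", "BS4"], "L/R")
```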
- in various embodiments of the invention, there is provided a processor, such as the SoC 100 described above with respect to FIG. 1 , for example, which comprises a video feeder that is operable to receive video data from a plurality of sources.
- the video feeder may be an MPEG video feeder such as the MFD module 302 described above with respect to FIG. 3 , for example.
- the video data from one or more of the plurality of sources may comprise 3D video data.
- the SoC 100 may be operable to store the received video data from each of the plurality of sources into a corresponding area in a memory, wherein the storing of the received video data occurs during a capture time for a single picture.
- the SoC 100 may be operable to store the received video data from each source of the plurality of sources at different instants during the capture time for the single picture.
- the memory may be, for example, the memory module 130 described above with respect to FIG. 1 or the memory module 400 described above with respect to FIG. 4 .
- the area in the memory in which the received video data from a source of the plurality of sources is stored may correspond to a window of a plurality of windows in an output video picture, such as the output video pictures 500 , 730 , 830 , and 840 described above.
- the SoC 100 may be operable to store the received video data from each source of the plurality of sources in a 2D format or in a 3D format.
- the SoC 100 may be operable to store left-eye information associated with a current source of the plurality of sources before storage of right-eye information associated with the current source.
- the SoC 100 may subsequently store left-eye information associated with a next source of the plurality of sources before storage of right-eye information associated with the next source.
- the SoC 100 may be operable to store left-eye information associated with two or more sources of the plurality of sources before storage of right-eye information associated with the two or more sources.
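The two capture orderings described above, interleaving left-eye and right-eye information per source versus storing all left-eye information before any right-eye information, can be illustrated as follows (helper names are invented for this sketch):

```python
# Illustrative comparison of the two storage orderings for 3D capture.

def per_source_order(sources):
    """Left then right for each source before moving to the next source."""
    order = []
    for s in sources:
        order += [f"L{s}", f"R{s}"]
    return order

def batched_order(sources):
    """All left-eye information first, then all right-eye information."""
    return [f"L{s}" for s in sources] + [f"R{s}" for s in sources]

seq_a = per_source_order([2, 3])  # e.g. sources S2 and S3
seq_b = batched_order([2, 3])
```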
- the SoC 100 may be operable to read the stored video data from the memory to a single buffer during a feed time for a single picture.
- the single buffer may be comprised within a device, component, module, block, circuit, or the like, that follows or is subsequent in operation to one of the VFD modules 304 , such as the CMP modules 322 a and 322 b , and/or a video encoder, for example.
- the processor may be operable to read the stored video data from the memory to the single buffer in an L/R format or an O/U format.
- a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for utilizing mosaic mode to create 3D video.
- the present invention may be realized in hardware, software, or a combination of hardware and software.
- the present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
- a typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- the present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
- Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
Description
- This application makes reference to, claims priority to, and claims the benefit of:
- U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009;
U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and
U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010. - This application also makes reference to:
- U.S. Provisional patent application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010;
U.S. Provisional patent application Ser. No. ______ (Attorney Docket No. 23437US02) filed on Dec. 8, 2010;
U.S. Provisional patent application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010; and
U.S. Provisional patent application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010. - Each of the above referenced applications is hereby incorporated herein by reference in its entirety.
- Certain embodiments of the invention relate to the processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for utilizing mosaic mode to create 3D video.
- The availability and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.
- Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
- A system and/or method is provided for utilizing a mosaic mode to create 3D video, as set forth more completely in the claims.
- Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
- FIG. 1 is a block diagram that illustrates a system-on-chip that is operable to utilize mosaic mode to create multiple windows in an output video picture, in accordance with an embodiment of the invention.
- FIG. 2 is a diagram that illustrates various packing schemes for 3D video data, in accordance with embodiments of the invention.
- FIG. 3 is a block diagram that illustrates a processing network that is operable to handle 3D video data, in accordance with an embodiment of the invention.
- FIG. 4 is a diagram that illustrates the flow of data in mosaic mode, in accordance with an embodiment of the invention.
- FIG. 5 is a diagram that illustrates an exemplary output video picture with multiple windows that is generated utilizing mosaic mode, in accordance with an embodiment of the invention.
- FIG. 6 is a diagram that illustrates the storage of decoded video data from multiple sources in corresponding buffers in mosaic mode, in accordance with an embodiment of the invention.
- FIG. 7 is a diagram that illustrates the generation of an output video picture having a 2D output format in mosaic mode, in accordance with an embodiment of the invention.
- FIGS. 8A-8C are diagrams that illustrate the generation of an output video picture having a 3D output format in mosaic mode, in accordance with embodiments of the invention.
- FIG. 9 is a flow chart that illustrates steps for generating output video pictures with multiple windows utilizing mosaic mode, in accordance with an embodiment of the invention.
- Certain embodiments of the invention may be found in a method and system for utilizing a mosaic mode to create 3D video. Various embodiments of the invention may relate to a processor comprising a video feeder, such as an MPEG video feeder, for example, wherein the processor may receive video data from multiple sources through the video feeder. The video data from one or more of those sources may comprise 3D video data. The video data from each source may be stored in a corresponding different area in memory during a capture time for a single picture. Each of the different areas in memory may correspond to a different window of multiple windows in an output video picture. The processor may store the video data from each source in memory in either a 2D format or a 3D format, based on a format of the output video picture. For example, when a 3D format is to be used, left-eye and right-eye information may be stored in different portions of memory. The video data may be read from the different areas in memory to a single buffer during a feed time for a single picture before being utilized to generate the output video picture.
- By utilizing a mosaic mode to process video data as described herein, a user may be provided with multiple windows in an output video picture to concurrently display video from different sources, including 3D video. The multi-windowed output video picture may have a 2D output format or a 3D output format based on, for example, the characteristics of the device in which the output video picture is to be displayed, reproduced, and/or stored. That is, while the different sources may provide 2D video data and/or 3D video data, the video data in each of the windows in the output video picture may be in the same format.
- FIG. 1 is a block diagram that illustrates a system-on-chip (SoC) that is operable to utilize mosaic mode to create multiple windows in an output video picture, in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown an SoC 100, a host processor module 120, and a memory module 130. The SoC 100 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive and/or process one or more signals that comprise video content, including 3D video content. Examples of signals comprising video content that may be received and processed by the SoC 100 include, but need not be limited to, composite, blanking, and sync (CVBS) signals, separate video (S-video) signals, high-definition multimedia interface (HDMI) signals, component signals, personal computer (PC) signals, source input format (SIF) signals, and red, green, blue (RGB) signals. Such signals may be received by the SoC 100 from one or more video sources communicatively coupled to the SoC 100. The SoC 100 may also be operable to receive and/or process graphics content from one or more sources of such content. - The SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the
SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), plasma, light emitting diode (LED), Organic LED (OLED), or other flatscreen display technology. The characteristics of the output signals, such as pixel rate, resolution, and/or whether the output format is a 2D output format or a 3D output format, for example, may be based on the type of output device to which those signals are to be provided. Moreover, the output signals may comprise one or more output video pictures, each of which may comprise multiple windows to concurrently display video from different sources. - The
host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The host processor module 120 may be operable to control and/or select a mode of operation for the SoC 100. For example, the host processor module 120 may enable a mosaic mode for the SoC 100. - The
memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with the processing of video data during mosaic mode. Moreover, the memory module 130 may store graphics data that may be retrieved by the SoC 100 for mixing with video data. For example, the graphics data may comprise 2D graphics data and/or 3D graphics data for mixing with video data in the SoC 100. - The
SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content and/or graphics content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100. For example, the SoC 100 may communicate one or more signals that comprise a sequence of output video pictures comprising multiple windows to concurrently display video from different sources. The format of the multi-windowed output video pictures may be based on, for example, the characteristics of the output devices. - The
video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data and/or graphics data. The video processor module 104 may be operable to support multiple formats for video data and/or graphics data, including multiple input formats and/or multiple output formats. The video processor module 104 may be operable to perform various types of operations on 2D video data and/or 3D video data. For example, when video data from several sources is received by the video processor module 104, and the video data from any one of those sources may comprise 2D video data or 3D video data, the video processor module 104 may generate output video comprising a sequence of output video pictures having multiple windows, wherein each of the windows in an output video picture corresponds to a particular source of video data. In this regard, the output video pictures that are generated by the video processor module 104 may be in a 2D output format or in a 3D output format in accordance with the device in which the output video pictures are to be displayed, reproduced, and/or stored. That is, even when a portion of the sources provide video data comprising 2D video data and another portion of the sources provide video data comprising 3D video data, the output video pictures generated by the video processor module 104 may be generated in either a 2D output format or a 3D output format. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data. - The
core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content and/or graphics content. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during the processing of video data and/or graphics data by the video processor module 104. The core processor module 106 may be operable to control and/or enable mosaic mode in the SoC 100. Such control and/or enabling may be performed in coordination with the host processor module 120, for example. - In operation, mosaic mode may be enabled in the
SoC 100 by the host processor module 120 and/or the core processor module 106, for example. In this mode, the SoC 100 may receive video data from more than one source through the interface module 102. The video data from any one of the sources may comprise 2D video data or 3D video data. The video processor module 104 may decode the video data and may store the decoded video data from each source in a different buffer in memory. In this regard, the video processor module 104 may comprise a video decoder (not shown), such as an MPEG decoder, for example. The memory may be dynamic random access memory (DRAM), which may be part of the memory module 130, for example, and/or part of the SoC 100, such as embedded memory in the core processor module 106, for example. - A video feeder (not shown) within the
video processor module 104, such as an MPEG video feeder, for example, may be utilized to obtain the video data from the buffers and feed the video data for capture into memory. The video data from each of the buffers may be fed and captured separately into a corresponding area in memory. Moreover, the capture of the video data from the various buffers may occur during a single picture capture time, that is, may occur during the time it takes to capture a single picture into memory by the SoC 100.
- During capture, the video data may be stored in memory in a 2D format or in a 3D format, based on a format of the output video picture. When the video data is stored in memory in a 3D format, left-eye and right-eye information may be stored in different portions of memory. The video data may be read from memory to a single buffer during a video feed process for a single picture, that is, during the time it takes to feed a single picture from memory by the
SoC 100. Once the video data has been placed in the single buffer, it may be subsequently processed to generate the output video picture with multiple windows for communication to an output device through theinterface module 102. -
- FIG. 2 is a diagram that illustrates various packing schemes for 3D video data, in accordance with embodiments of the invention. Referring to FIG. 2, there is shown a first packing scheme or first format 200 for 3D video data. Also shown is a second packing scheme or second format 210 for 3D video data. Each of the first format 200 and the second format 210 illustrates the arrangement of the left-eye content (L) and the right-eye content (R) in a 3D picture. The left-eye content or information may also be referred to as a left 3D picture and the right-eye content or information may also be referred to as a right 3D picture. In this regard, a 3D picture may correspond to a 3D frame or a 3D field in a video sequence, whichever is appropriate. The L and R portions in the first format 200 are arranged in a side-by-side arrangement, which is typically referred to as a left-and-right (L/R) format. The L and R portions in the second format 210 are arranged in a top-and-bottom arrangement, which is typically referred to as an over-and-under (O/U) format. Another arrangement, one not shown in FIG. 2, may be one in which the L portion is in a first 3D picture and the R portion is in a second 3D picture. Such an arrangement may be referred to as a sequential format because the 3D pictures are processed and/or handled sequentially.
- Both the first format 200 and the second format 210 may be utilized as native formats by the SoC 100 to process 3D video data. The SoC 100 may also be operable to utilize the sequential format as a native format, which may be typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210. The SoC 100 may also support converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200. Such conversion may be associated with various operations performed by the SoC 100, including but not limited to operations associated with mosaic mode. The SoC 100 may support additional native formats other than the first format 200, the second format 210, and the sequential format, for example.
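The conversion between the first format 200 (L/R) and the second format 210 (O/U) amounts to re-tiling the L and R halves of a picture. A pure-Python sketch, with pictures modeled as lists of rows (illustrative only, not the SoC's implementation):

```python
# Illustrative conversion between the L/R and O/U packings of FIG. 2.

def lr_to_ou(picture):
    """Split each row in half and stack the two halves vertically."""
    w = len(picture[0]) // 2
    left = [row[:w] for row in picture]
    right = [row[w:] for row in picture]
    return left + right

def ou_to_lr(picture):
    """Split the rows in half vertically and join the halves side by side."""
    h = len(picture) // 2
    return [picture[r] + picture[h + r] for r in range(h)]

lr = [["L", "L", "R", "R"], ["L", "L", "R", "R"]]
ou = lr_to_ou(lr)
```

Applying one conversion after the other returns the original picture, which is why the two packings can serve interchangeably as native formats.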
- FIG. 3 is a block diagram that illustrates a processing network that is operable to handle 3D video data, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown a processing network 300 that may be part of the video processor module 104 in the SoC 100, for example. The processing network 300 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to route and process video data. In this regard, the processing network 300 may comprise multiple devices, components, modules, blocks, circuits, or the like, that may be selectively interconnected to enable the routing and processing of video data in accordance with various modes of operation, including mosaic mode. The various devices, components, modules, blocks, circuits, or the like in the processing network 300 may be dynamically configured and/or dynamically interconnected during the operation of the SoC 100 through one or more signals generated by the core processor module 106 and/or by the host processor module 120. In this regard, the configuration and/or the selective interconnection of various portions of the processing network 300 may be performed on a picture-by-picture basis when such an approach is appropriate to handle varying characteristics of the video data. - In the embodiment of the invention described in
FIG. 3, the processing network 300 may comprise an MPEG feeder (MFD) module 302, multiple video feeder (VFD) modules 304, an HDMI module 306, crossbar modules 310a and 310b, multiple scaler (SCL) modules 308, a motion-adaptive deinterlacer (MAD) module 312, a digital noise reduction (DNR) module 314, multiple capture (CAP) modules 320, compositor (CMP) modules 322a and 322b, and an MPEG decoder module 324. The references to a memory (not shown) in FIG. 3 may be associated with a DRAM utilized by the processing network 300 to handle storage of video data and/or graphics data during various operations. Such DRAM may be part of the memory module 130 described above with respect to FIG. 1. In some instances, the DRAM may be part of memory embedded in the SoC 100. The references to a video encoder (not shown) in FIG. 3 may be associated with hardware and/or software in the SoC 100 that may be utilized after the processing network 300 to further process video data for communication to an output device, such as a display device, for example. - Each of the
crossbar modules 310a and 310b may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to selectively route video data from its input ports to its output ports. In this regard, the crossbar modules 310a and 310b may support connections 316 between one or more output ports of the crossbar module 310a and corresponding input ports of the crossbar module 310b. Moreover, the crossbar modules 310a and 310b may support feedback connections 318 between one or more output ports of the crossbar module 310b and corresponding input ports of the crossbar module 310a. The configuration of the crossbar modules 310a and/or 310b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed. For example, one or more processing paths may be configured in accordance with a mode of operation, such as mosaic mode. - The
MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the MFD module 302 may have been stored in memory after being generated by the MPEG decoder module 324. During mosaic mode, the MFD module 302 may be utilized to feed video data from multiple sources to one of the CAP modules 320, for example. - The
MPEG decoder module 324 may be operable to decode video data received through one or more bit streams. For example, the MPEG decoder module 324 may receive up to N bit streams, BS1, . . . , BSN, corresponding to N different sources of video data. Each bit stream may comprise 2D video data or 3D video data, depending on the source. When the MPEG decoder module 324 decodes a single bit stream, the decoded video data may be stored in a single buffer or area in memory. When the MPEG decoder module 324 decodes more than one bit stream, the decoded video data from each bit stream may be stored in a separate buffer or area in memory. In this regard, the MPEG decoder module 324 may be utilized during mosaic mode to decode video data from several sources and to store the decoded video data in separate buffers. - The
MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats supported by the processing network 300. For example, the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, or a sequential format. The decoded video data may be stored in memory in accordance with the format in which it is provided by the MPEG decoder module 324. - Since mosaic mode is utilized to generate an output video picture with multiple windows, the video data displayed in any one window is only a small portion of the video data needed to display the entire output video picture. Therefore, each buffer utilized in mosaic mode may be smaller than the single buffer utilized to store decoded video data from a single source during a different mode of operation. - Each
VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310a at another data rate. - Each
SCL module 308 may be operable to scale video data received from the crossbar module 310a and provide the scaled video data to the crossbar module 310b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308. - Each
CAP module 320 may be operable to capture video data from the crossbar module 310b and store the captured video data in memory. One of the CAP modules 320 may be utilized during mosaic mode to capture video data stored in one or more buffers and fed to the CAP module 320 by the MFD module 302. In this regard, the video data in one of the buffers is fed and captured separately from the video data in another buffer. That is, instead of a single capture being turned on for all buffers, a separate capture is turned on for each buffer, with all the captures occurring during a single picture capture time. The video data stored in each buffer may be captured to different areas in memory that correspond to different windows in the output video picture. - Each of the
CMP modules 322a and 322b may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to composite video data received from the crossbar module 310b. Moreover, each of the CMP modules 322a and 322b may be operable to mix the video data received from the crossbar module 310b with graphics data. For example, the CMP module 322a may be provided with a graphics feed, Gfxa, for mixing with video data received from the crossbar module 310b. Similarly, the CMP module 322b may be provided with a graphics feed, Gfxb, for mixing with video data received from the crossbar module 310b. -
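As an illustrative sketch of the capture scheme described above for the CAP modules 320, in which a separate capture is turned on per buffer, all within a single picture capture time, and each buffer lands in the memory area of its window, the following model uses hypothetical names and Python dictionaries in place of hardware buffers and memory areas:

```python
# Sketch only: buffers maps source -> decoded pixels; window_map maps
# source -> the window whose memory area should receive that source.

def capture_mosaic(buffers, window_map):
    memory = {}
    for source, pixels in buffers.items():   # one capture per buffer...
        memory[window_map[source]] = pixels  # ...into that window's area
    return memory  # all captures fall within one picture capture time

mem = capture_mosaic({"S1": "2D1", "S2": "L2"},
                     {"S1": "top-left", "S2": "top-right"})
assert mem == {"top-left": "2D1", "top-right": "L2"}
```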
FIG. 4 is a diagram that illustrates the flow of data in mosaic mode, in accordance with an embodiment of the invention. Referring to FIG. 4, there is shown a processing path 400 that may be configured by interconnecting several modules in the processing network 300 described above with respect to FIG. 3. The processing path 400 may be utilized to process 2D video data and/or 3D video data to generate output video pictures with multiple windows to concurrently display video data from different sources. - The
processing path 400 illustrates a flow of data within the processing network 300 when operating in mosaic mode. For example, bit streams BS1, . . . , BSN are provided to the MPEG decoder module 324. Each bit stream may correspond to a different source of video data and to a different window in the output video picture. The video data in the bit streams may be decoded by the MPEG decoder module 324 and stored in separate buffers in a memory 410. The decoded video data associated with each bit stream may be in one of the multiple formats supported by the processing network 300. The memory 410 may correspond to the memory described above with respect to FIG. 3. The MFD module 302 may feed the video data stored in the buffers to the CAP module 320 for capture in the memory 410. The feed and capture process may comprise feeding and capturing the video data in each of the buffers separately. The feed and capture of the video data in all of the buffers may occur during a single picture capture time by the CAP module 320. The feed and capture of the video data may occur in a different portion of the memory 410 than the portion utilized for buffering the output from the MPEG decoder module 324. In another embodiment of the invention, different memories may be utilized for video data buffering and for video data capture. - The video data may be captured in such a manner that the captured video data from each source is stored within the
memory 410 in an area of memory that corresponds to a window in the output video picture. When the output video picture is to have a 2D output format, the captured video data may be stored in the memory 410 in a 2D format. When the output video picture is to have a 3D output format, the captured video data may be stored in the memory 410 in a 3D format. In this regard, the left-eye information and the right-eye information in the video data may be stored in different portions of the memory 410, for example. - The
VFD module 304 may read and feed the captured video data to a single buffer. In this regard, the operation of the VFD module 304 during mosaic mode may be substantially the same as in other modes of operation. When the captured video data is stored in a 2D format in the memory 410, the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 2D output format. When the captured video data is stored in a 3D format in the memory 410, the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 3D output format. The 3D output format may be a 3D L/R output format or a 3D O/U output format. The video data in the single buffer may be communicated to a compositor module, such as the CMP module 322a, for example, which in turn provides an input to a video encoder to generate the output video picture in the appropriate output format. - The
processing path 400 is provided by way of illustration and not of limitation. Other data flow paths may be implemented during mosaic mode that may comprise more or fewer of the various devices, components, modules, blocks, circuits, or the like, of the processing network 300 to enable the generation of output video pictures comprising multiple windows for concurrently displaying video data from different sources. -
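The overall flow of the processing path 400 (decode into per-source buffers, feed and capture into per-window memory areas, then feed the whole canvas to a single buffer) can be reduced to a schematic sketch. Every name and data value below is a hypothetical stand-in for the hardware modules described above, not an implementation of them:

```python
# Sketch only: bitstreams maps source -> already-decoded payload (the
# decode step is elided); layout maps source -> its window's name.

def mosaic_pipeline(bitstreams, layout, output_format):
    # Per-source buffering (one buffer per bit stream).
    buffers = {src: payload for src, payload in bitstreams.items()}
    # Separate capture of each buffer into the memory area of its window.
    memory = {layout[src]: data for src, data in buffers.items()}
    # Feed the entire canvas to one buffer to form a single output picture.
    single_buffer = [memory[w] for w in sorted(memory)]
    return {"format": output_format, "picture": single_buffer}

out = mosaic_pipeline({"S1": "2D1", "S2": "L2"},
                      {"S1": "window-1", "S2": "window-2"}, "2D")
assert out == {"format": "2D", "picture": ["2D1", "L2"]}
```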
FIG. 5 is a diagram that illustrates an exemplary output video picture with multiple windows that is generated utilizing mosaic mode, in accordance with an embodiment of the invention. Referring to FIG. 5, there is shown an output video picture 500 with multiple windows, each of which is associated with a different source of video data. The output video picture 500 may be representative of the type of output video picture that may be generated by the SoC 100 when operating in mosaic mode. The output video picture 500 may comprise 12 windows associated with 12 different sources of video data. That is, the video data that is to be displayed in each of the windows in the output video picture 500 is generated by the SoC 100 from video data received from a different source. For example, the video data for “Window 1” is received by the SoC 100 from a source S1. Similarly, the video data for “Window 2,” . . . , “Window 12,” is received by the SoC 100 from sources S2, . . . , S12, respectively. - The
output video picture 500 is provided by way of illustration and not of limitation. For example, the SoC 100 may generate output video pictures that may comprise more or fewer windows than those shown in the output video picture 500. The size, layout, and/or arrangement of the video windows need not follow the size, layout, and/or arrangement of the output video picture 500. For example, the windows in an output video picture may be of different sizes. In another example, the windows in an output video picture need not be in a grid pattern. Moreover, the SoC 100 may be operable to dynamically change the characteristics and/or the video data associated with any one window in the output video picture 500. -
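Although the windows need not form a uniform grid, a simple rows-by-columns layout like that of FIG. 5 can be computed as sketched below. The 1920x1080 picture size and the 4-column by 3-row arrangement are assumptions made purely for illustration and are not taken from this description:

```python
# Sketch only: compute the pixel rectangle of each window in a uniform grid.

def grid_windows(width, height, rows, cols):
    """Return {window index: (x, y, w, h)} for a rows x cols mosaic."""
    w, h = width // cols, height // rows
    windows = {}
    for r in range(rows):
        for c in range(cols):
            windows[r * cols + c + 1] = (c * w, r * h, w, h)
    return windows

# A 12-window mosaic, assumed here to be 4 columns x 3 rows at 1920x1080.
layout = grid_windows(1920, 1080, rows=3, cols=4)
assert len(layout) == 12
assert layout[1] == (0, 0, 480, 360)  # "Window 1" at the top-left
```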
FIG. 6 is a diagram that illustrates the storage of decoded video data from multiple sources in corresponding buffers in mosaic mode, in accordance with an embodiment of the invention. Referring to FIG. 6, there is shown the MPEG decoder module 324 being utilized during mosaic mode. In this exemplary embodiment, the MPEG decoder module 324 may receive bit streams from sources 1, . . . , N. The MPEG decoder module 324 may decode the video data in the N bit streams and may respectively generate decoded video data 1, . . . , decoded video data N. The decoded video data 1 may be stored in a buffer 610 in the memory 410, for example. The buffer 610 may correspond to an area of the memory 410 in which to store the decoded video data from source 1. Similarly, each of the decoded video data 2, . . . , decoded video data N may be stored in corresponding buffers 610 in the memory 410. The characteristics of each of the buffers 610 may vary based on the number, size, layout, and/or configuration of the windows in the output video picture that is to be generated by the SoC 100 while operating in mosaic mode. - As noted above, the
MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats. For example, the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, or a sequential format. The decoded video data may be stored in the buffers 610 in the memory 410 in accordance with the format in which it is provided by the MPEG decoder module 324. -
FIG. 7 is a diagram that illustrates the generation of an output video picture having a 2D output format in mosaic mode, in accordance with an embodiment of the invention. Referring to FIG. 7, there are shown four sources of video data 710, which are arranged in the figure in the manner in which the video data from each of the sources is to be displayed in an output video picture 730. For example, the video data associated with a first source, S1, which comprises 2D video data, is to be displayed at the top-left window in the output video picture 730. The video data associated with a second source, S2, which comprises 3D video data, is to be displayed at the top-right window in the output video picture 730. The video data associated with a third source, S3, which also comprises 3D video data, is to be displayed at the bottom-left window in the output video picture 730. Moreover, the video data associated with a fourth source, S4, which also comprises 2D video data, is to be displayed at the bottom-right window in the output video picture 730. - The video data from the sources S1, S2, S3, and S4 is provided to the
MPEG decoder module 324 through bit streams BS1, BS2, BS3, and BS4, respectively. The MPEG decoder module 324 may decode the video data and provide the decoded video data to the buffers 610 in the memory 410 for storage. As shown in FIG. 7, a first of the buffers 610, which is labeled B1, may store 2D video data, 2D1, from the first source, S1. A second of the buffers 610, which is labeled B2, may store 3D video data from the second source, S2. The 3D video data from the second source, S2, may comprise L2 video data and R2 video data, which correspond to the left-eye information and the right-eye information, respectively. A third of the buffers 610, which is labeled B3, may store 3D video data from the third source, S3. The 3D video data from the third source, S3, may comprise L3 video data and R3 video data, which correspond to the left-eye information and the right-eye information, respectively. Moreover, a fourth of the buffers 610, which is labeled B4, may store 2D video data, 2D4, from the fourth source, S4. - As described above with respect to
FIG. 4, the feed and capture of the video data in the buffers 610 may be performed by, for example, the MFD module 302 and one of the CAP modules 320. In this example, the capture may be turned on four times, one time for each of the buffers 610. The capture of the video data in all four buffers 610 to the memory 410 may be performed in a single picture capture time. - The windowing, that is, the arrangement of the video data in the
memory 410 to construct or lay out the windows in the output video picture 730, may be carried out by capturing the video data from a particular source in an area of the memory 410 that corresponds to the window in the output video picture 730 in which the video data from that particular source is to be displayed. For example, FIG. 7 shows a canvas 720 that corresponds to the area in the memory 410 in which the contents from the buffers 610 are to be stored. The canvas 720 has a 2D format and comprises a top-left memory area, a top-right memory area, a bottom-left memory area, and a bottom-right memory area, each of which corresponds to one of the windows in the output video picture 730. - The video data from each of the
buffers 610 may be stored in a corresponding memory area in the canvas 720. For example, the 2D video data, 2D1, associated with the first source, S1, may be stored in the top-left memory area of the canvas 720. The L2 video data and the R2 video data associated with the second source, S2, are not in 2D format. In such instances, the L2 video data may be used and stored in the top-right memory area of the canvas 720 while the R2 video data may be discarded. Similarly, the L3 video data and the R3 video data associated with the third source, S3, are not in 2D format. As before, the L3 video data may be used and stored in the bottom-left memory area of the canvas 720 while the R3 video data may be discarded. Moreover, the 2D video data, 2D4, associated with the fourth source, S4, may be stored in the bottom-right memory area of the canvas 720. - Once the video data has been captured to the
memory 410 as described above, the VFD module 304 may be utilized to read and feed the entire contents of the canvas 720 to a single buffer for further processing and to subsequently generate the output video picture 730. In this regard, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 730 in mosaic mode. -
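The 2D-canvas composition described above, in which 2D sources are copied directly while 3D sources contribute only their left-eye data and their right-eye data is discarded, can be sketched as follows, with the canvas modeled as a dictionary of window areas. All names and values are hypothetical placeholders:

```python
# Sketch only: a 3D source is modeled as a (left, right) tuple, a 2D source
# as a bare value; layout maps each source to its window area on the canvas.

def build_2d_canvas(sources, layout):
    canvas = {}
    for name, data in sources.items():
        if isinstance(data, tuple):      # 3D source stored as (left, right)
            left, _right = data          # right-eye information is discarded
            canvas[layout[name]] = left
        else:                            # 2D source is used directly
            canvas[layout[name]] = data
    return canvas

sources = {"S1": "2D1", "S2": ("L2", "R2"), "S3": ("L3", "R3"), "S4": "2D4"}
layout = {"S1": "top-left", "S2": "top-right",
          "S3": "bottom-left", "S4": "bottom-right"}
canvas = build_2d_canvas(sources, layout)
assert canvas["top-right"] == "L2" and canvas["bottom-right"] == "2D4"
```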
FIGS. 8A-8C are diagrams that illustrate the generation of an output video picture having a 3D output format in mosaic mode, in accordance with embodiments of the invention. Referring to FIG. 8A, there are shown the sources 710, the MPEG decoder module 324, and the buffers 610 as described above with respect to FIG. 7. Also shown are a left picture canvas 810 and a right picture canvas 820 in the memory 410. The left picture canvas 810 may correspond to an area in the memory 410 in which the left-eye information from the buffers 610 may be stored. The right picture canvas 820 may correspond to an area in the memory 410 in which the right-eye information from the buffers 610 may be stored. Both the left picture canvas 810 and the right picture canvas 820 comprise a top-left memory area, a top-right memory area, a bottom-left memory area, and a bottom-right memory area, each of which corresponds to one of the windows in an output video picture 830. The output video picture 830 may have a 3D L/R output format. - The video data from the
buffers 610 may be stored in the left picture canvas 810 and the right picture canvas 820 as appropriate. For example, the 2D video data, 2D1, associated with the first source, S1, which is not in a 3D format, may be stored in the top-left memory area of both the left picture canvas 810 and the right picture canvas 820. The L2 video data and the R2 video data associated with the second source, S2, may be stored in the top-right memory area of the left picture canvas 810 and in the top-right memory area of the right picture canvas 820, respectively. Similarly, the L3 video data and the R3 video data associated with the third source, S3, may be stored in the bottom-left memory area of the left picture canvas 810 and in the bottom-left memory area of the right picture canvas 820, respectively. Moreover, the 2D video data, 2D4, associated with the fourth source, S4, which is not in a 3D format, may be stored in the bottom-right memory area of both the left picture canvas 810 and the right picture canvas 820. - Once the video data has been captured to the
memory 410 as described above, the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and of the right picture canvas 820 to a single buffer for further processing and to subsequently generate the output video picture 830. The output video picture 830 comprises a left picture 832L and a right picture 832R in a side-by-side configuration, each of which comprises four windows corresponding to the four sources of the video data. In this regard, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 830 in mosaic mode. - In one embodiment of the invention, the feeding and capturing of video data into the
left picture canvas 810 may occur before the feeding and capturing of video data into the right picture canvas 820. In another embodiment of the invention, the video data in the buffers 610 may be fed and captured sequentially to the memory 410. - For example, the video data from the first of the
buffers 610, B1, may be fed and captured to the memory 410 first, the video data from the second of the buffers 610, B2, may be fed and captured to the memory 410 second, the video data from the third of the buffers 610, B3, may be fed and captured to the memory 410 next, and the video data from the fourth of the buffers 610, B4, may be fed and captured to the memory 410 last. In this regard, the left-eye information stored in one of the buffers 610 may be fed and captured to the memory 410 before the right-eye information stored in that same buffer is fed and captured to the memory 410. - Referring to
FIG. 8B, there are shown the sources 710, the MPEG decoder module 324, the buffers 610, the left picture canvas 810, and the right picture canvas 820 as described above with respect to FIG. 8A. In FIG. 8B, once the video data has been captured to the memory 410, the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and the right picture canvas 820 to a single buffer for further processing and to subsequently generate an output video picture 840. The output video picture 840 may have a 3D O/U output format. In this regard, the output video picture 840 may comprise a left picture 842L and a right picture 842R in a top-and-bottom configuration, each of which comprises four windows corresponding to the four sources of the video data. Moreover, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 840 in mosaic mode. - Referring to
FIG. 8C, there are shown the sources 710, the MPEG decoder module 324, the buffers 610, the left picture canvas 810, and the right picture canvas 820 as described above with respect to FIG. 8A. In FIG. 8C, once the video data has been captured to the memory 410, the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and the right picture canvas 820 to a single buffer for further processing and to subsequently generate a first output video picture 850 and a second output video picture 855. The first output video picture 850 and the second output video picture 855 may correspond to a 3D sequential output format. In this regard, the first output video picture 850 may correspond to a left picture in the 3D sequential output format and may be generated based on the contents of the left picture canvas 810. The second output video picture 855 may correspond to a right picture in the 3D sequential output format and may be generated based on the contents of the right picture canvas 820. In some embodiments of the invention, the second output video picture 855 may be generated before the first output video picture 850. Moreover, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the first output video picture 850 and the second output video picture 855 in mosaic mode. - The various embodiments of the invention described above with respect to
FIGS. 7, 8A, 8B, and 8C are provided by way of illustration and not of limitation. One or more embodiments of the invention may be implemented in which the number of sources and/or the characteristics of the windows in the output video picture differ from those described above. The various embodiments of the invention described herein may also be utilized when the video data from all of the sources is in a 2D format or when the video data from all of the sources is in a 3D format, for example. -
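The three 3D output arrangements of FIGS. 8A-8C, side-by-side (L/R), top-and-bottom (O/U), and sequential, amount to three ways of combining a composed left picture and a composed right picture. This schematic sketch models pictures as lists of pixel rows; the names and data are hypothetical:

```python
# Sketch only: combine a left picture and a right picture three ways.

def side_by_side(left, right):   # 3D L/R output, as in FIG. 8A
    return [l + r for l, r in zip(left, right)]

def over_under(left, right):     # 3D O/U output, as in FIG. 8B
    return left + right

def sequential(left, right):     # 3D sequential output, as in FIG. 8C
    return [left, right]         # two pictures, left then right

L = [["l00", "l01"], ["l10", "l11"]]
R = [["r00", "r01"], ["r10", "r11"]]
assert side_by_side(L, R)[0] == ["l00", "l01", "r00", "r01"]
assert over_under(L, R) == L + R
assert sequential(L, R) == [L, R]
```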
FIG. 9 is a flow chart that illustrates steps for generating output video pictures with multiple windows utilizing mosaic mode, in accordance with an embodiment of the invention. Referring to FIG. 9, there is shown a flow chart 900 in which, at step 910, the MPEG decoder module 324 may receive video data from multiple sources. At step 920, the video data, after being decoded by the MPEG decoder module 324, may be stored in separate buffers based on the source of the video data. - At
step 930, the video data stored in the various buffers may be fed and captured utilizing the MFD module 302 and one of the CAP modules 320. Multiple captures may be turned on based on the number of buffers from which video data is being captured. Moreover, the capture of the video data in all of the buffers may occur during a single picture capture time. The video data may be captured to memory according to the output format of the output video picture. For example, when the output video picture is to be in a 2D output format, the video data associated with a source, whether it is 2D video data or 3D video data, may be captured into memory utilizing a 2D canvas such as the canvas 720. When the output video picture is to be in a 3D output format, the video data associated with a source, whether it is 2D video data or 3D video data, may be captured into memory utilizing a 3D canvas such as the left picture canvas 810 and the right picture canvas 820. When video data is captured into memory in a 3D format, the left-eye information and the right-eye information may be stored in different portions of the memory. Moreover, the windowing associated with the output video picture may be achieved by capturing the video data into portions of memory that correspond to a particular window in the output video picture. - At
step 940, the VFD module 304 may be utilized to read and feed the captured video data stored in memory to a single buffer for further processing. The video feed may be performed in a single picture feed time. At step 950, the output video picture may be generated in the appropriate output format from the video data in the single buffer. - Various embodiments of the invention relate to a processor, such as the
SoC 100 described above with respect to FIG. 1, for example, which comprises a video feeder that is operable to receive video data from a plurality of sources. The video feeder may be an MPEG video feeder such as the MFD module 302 described above with respect to FIG. 3, for example. The video data from one or more of the plurality of sources may comprise 3D video data. The SoC 100 may be operable to store the received video data from each of the plurality of sources into a corresponding area in a memory, wherein the storing of the received video data occurs during a capture time for a single picture. The SoC 100 may be operable to store the received video data from each source of the plurality of sources at different instants during the capture time for the single picture. The memory may be, for example, the memory module 130 described above with respect to FIG. 1 or the memory 410 described above with respect to FIG. 4. Moreover, the area in the memory in which the received video data from a source of the plurality of sources is stored may correspond to a window of a plurality of windows in an output video picture, such as the output video pictures described above, for example. - The
SoC 100 may be operable to store the received video data from each source of the plurality of sources in a 2D format or in a 3D format. The SoC 100 may be operable to store left-eye information associated with a current source of the plurality of sources before storage of right-eye information associated with the current source. The SoC 100 may subsequently store left-eye information associated with a next source of the plurality of sources before storage of right-eye information associated with the next source. Moreover, the SoC 100 may be operable to store left-eye information associated with two or more sources of the plurality of sources before storage of right-eye information associated with the two or more sources. - The
SoC 100 may be operable to read the stored video data from the memory to a single buffer during a feed time for a single picture. The single buffer may be comprised within a device, component, module, block, circuit, or the like, that follows or is subsequent in operation to one of the VFD modules 304, such as the CMP modules 322a and 322b described above with respect to FIG. 3, for example. - In another embodiment of the invention, a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for utilizing mosaic mode to create 3D video.
- Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
- While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/962,995 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
US12/963,035 US20110134218A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for utilizing mosaic mode to create 3d video |
US14/819,728 US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US26772909P | 2009-12-08 | 2009-12-08 | |
US29685110P | 2010-01-20 | 2010-01-20 | |
US33045610P | 2010-05-03 | 2010-05-03 | |
US12/963,035 US20110134218A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for utilizing mosaic mode to create 3d video |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110134218A1 true US20110134218A1 (en) | 2011-06-09 |
Family
ID=44081627
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/963,014 Abandoned US20110134217A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for scaling 3d video |
US12/962,995 Active 2033-01-01 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
US12/963,212 Abandoned US20110134211A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for handling multiple 3-d video formats |
US12/963,320 Expired - Fee Related US8947503B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for processing 3-D video |
US12/963,035 Abandoned US20110134218A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for utilizing mosaic mode to create 3d video |
US14/819,728 Active US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/963,014 Abandoned US20110134217A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for scaling 3d video |
US12/962,995 Active 2033-01-01 US9137513B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for mixing video and graphics |
US12/963,212 Abandoned US20110134211A1 (en) | 2009-12-08 | 2010-12-08 | Method and system for handling multiple 3-d video formats |
US12/963,320 Expired - Fee Related US8947503B2 (en) | 2009-12-08 | 2010-12-08 | Method and system for processing 3-D video |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/819,728 Active US9307223B2 (en) | 2009-12-08 | 2015-08-06 | Method and system for mixing video and graphics |
Country Status (4)
Country | Link |
---|---|
US (6) | US20110134217A1 (en) |
EP (1) | EP2462748A4 (en) |
CN (1) | CN102474632A (en) |
WO (1) | WO2011072016A1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008106185A (en) * | 2006-10-27 | 2008-05-08 | Shin Etsu Chem Co Ltd | Method for adhering thermally conductive silicone composition, primer for adhesion of thermally conductive silicone composition and method for production of adhesion composite of thermally conductive silicone composition |
US8565516B2 (en) * | 2010-02-05 | 2013-10-22 | Sony Corporation | Image processing apparatus, image processing method, and program |
US9414042B2 (en) * | 2010-05-05 | 2016-08-09 | Google Technology Holdings LLC | Program guide graphics and video in window for 3DTV |
US8768044B2 (en) | 2010-09-14 | 2014-07-01 | Texas Instruments Incorporated | Automatic convergence of stereoscopic images based on disparity maps |
US9485494B1 (en) * | 2011-04-10 | 2016-11-01 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US9407902B1 (en) | 2011-04-10 | 2016-08-02 | Nextvr Inc. | 3D video encoding and decoding methods and apparatus |
US20120281064A1 (en) * | 2011-05-03 | 2012-11-08 | Citynet LLC | Universal 3D Enabler and Recorder |
US20130044192A1 (en) * | 2011-08-17 | 2013-02-21 | Google Inc. | Converting 3d video into 2d video based on identification of format type of 3d video and providing either 2d or 3d video based on identification of display device type |
US20130147912A1 (en) * | 2011-12-09 | 2013-06-13 | General Instrument Corporation | Three dimensional video and graphics processing |
WO2015192557A1 (en) * | 2014-06-19 | 2015-12-23 | 杭州立体世界科技有限公司 | Control circuit for high-definition naked-eye portable stereo video player and stereo video conversion method |
CN108419068A (en) * | 2018-05-25 | 2018-08-17 | 张家港康得新光电材料有限公司 | A kind of 3D rendering treating method and apparatus |
CN111263231B (en) * | 2018-11-30 | 2022-07-15 | 西安诺瓦星云科技股份有限公司 | Window setting method, device, system and computer readable medium |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6191772B1 (en) * | 1992-11-02 | 2001-02-20 | Cagent Technologies, Inc. | Resolution enhancement for video display using multi-line interpolation |
US20030103136A1 (en) * | 2001-12-05 | 2003-06-05 | Koninklijke Philips Electronics N.V. | Method and system for 2D/3D illusion generation |
US20030223731A1 (en) * | 2000-03-17 | 2003-12-04 | Carlsgaard Eric Stephen | Method and apparatus for simultaneous recording and displaying two different video programs |
US20040066846A1 (en) * | 2002-10-07 | 2004-04-08 | Kugjin Yun | Data processing system for stereoscopic 3-dimensional video based on MPEG-4 and method thereof |
US20040075664A1 (en) * | 2002-10-22 | 2004-04-22 | Patrick Law | Hardware assisted format change mechanism in a display controller |
US20040201544A1 (en) * | 2003-04-08 | 2004-10-14 | Microsoft Corp | Display source divider |
US20040218269A1 (en) * | 2002-01-14 | 2004-11-04 | Divelbiss Adam W. | General purpose stereoscopic 3D format conversion system and method |
US20040239757A1 (en) * | 2003-05-29 | 2004-12-02 | Alden Ray M. | Time sequenced user space segmentation for multiple program and 3D display |
US20050162566A1 (en) * | 2004-01-02 | 2005-07-28 | Trumpion Microelectronic Inc. | Video system with de-motion-blur processing |
US20060139448A1 (en) * | 2004-12-29 | 2006-06-29 | Samsung Electronics Co., Ltd. | 3D displays with flexible switching capability of 2D/3D viewing modes |
US20070008315A1 (en) * | 2005-07-05 | 2007-01-11 | Myoung-Seop Song | Stereoscopic image display device |
US20070195408A1 (en) * | 2001-01-12 | 2007-08-23 | Divelbiss Adam W | Method and apparatus for stereoscopic display using column interleaved data with digital light processing |
US20080001970A1 (en) * | 2006-06-29 | 2008-01-03 | Jason Herrick | Method and system for mosaic mode display of video |
US20090315979A1 (en) * | 2008-06-24 | 2009-12-24 | Samsung Electronics Co., Ltd. | Method and apparatus for processing 3d video image |
US20100177174A1 (en) * | 2006-04-03 | 2010-07-15 | Sony Computer Entertainment Inc. | 3d shutter glasses with mode switching based on orientation to display device |
US20100225645A1 (en) * | 2008-10-10 | 2010-09-09 | Lg Electronics Inc. | Receiving system and method of processing data |
US7804995B2 (en) * | 2002-07-02 | 2010-09-28 | Reald Inc. | Stereoscopic format converter |
US20110063410A1 (en) * | 2009-09-11 | 2011-03-17 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US20110126160A1 (en) * | 2009-11-23 | 2011-05-26 | Samsung Electronics Co., Ltd. | Method of providing 3d image and 3d display apparatus using the same |
US20120140032A1 (en) * | 2010-11-23 | 2012-06-07 | Circa3D, Llc | Formatting 3d content for low frame-rate displays |
US20130181980A1 (en) * | 2012-01-12 | 2013-07-18 | Kabushiki Kaisha Toshiba | Information processing apparatus and display control method |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010072186A (en) * | 1998-08-03 | 2001-07-31 | 벤자민 에프 커틀러 | Circuit and method for generating filler pixels from the original pixels in a video stream |
US6927783B1 (en) * | 1998-11-09 | 2005-08-09 | Broadcom Corporation | Graphics display system with anti-aliased text and graphics feature |
US6704042B2 (en) * | 1998-12-10 | 2004-03-09 | Canon Kabushiki Kaisha | Video processing apparatus, control method therefor, and storage medium |
US7236207B2 (en) * | 2002-01-22 | 2007-06-26 | Broadcom Corporation | System and method of transmission and reception of progressive content with isolated fields for conversion to interlaced display |
CA2380105A1 (en) * | 2002-04-09 | 2003-10-09 | Nicholas Routhier | Process and system for encoding and playback of stereoscopic video sequences |
US7113221B2 (en) * | 2002-11-06 | 2006-09-26 | Broadcom Corporation | Method and system for converting interlaced formatted video to progressive scan video |
US7154555B2 (en) * | 2003-01-10 | 2006-12-26 | Realnetworks, Inc. | Automatic deinterlacing and inverse telecine |
JP4251907B2 (en) * | 2003-04-17 | 2009-04-08 | シャープ株式会社 | Image data creation device |
US7236525B2 (en) * | 2003-05-22 | 2007-06-26 | Lsi Corporation | Reconfigurable computing based multi-standard video codec |
US6957400B2 (en) * | 2003-05-30 | 2005-10-18 | Cadence Design Systems, Inc. | Method and apparatus for quantifying tradeoffs for multiple competing goals in circuit design |
US20070216808A1 (en) * | 2003-06-30 | 2007-09-20 | Macinnis Alexander G | System, method, and apparatus for scaling pictures |
US7420618B2 (en) * | 2003-12-23 | 2008-09-02 | Genesis Microchip Inc. | Single chip multi-function display controller and method of use thereof |
WO2005083637A1 (en) * | 2004-02-27 | 2005-09-09 | Td Vision Corporation, S.A. De C.V. | Method and system for digital decoding 3d stereoscopic video images |
EP1617370B1 (en) * | 2004-07-15 | 2013-01-23 | Samsung Electronics Co., Ltd. | Image format transformation |
KR100716982B1 (en) * | 2004-07-15 | 2007-05-10 | 삼성전자주식회사 | Multi-dimensional video format transforming apparatus and method |
CN1756317A (en) * | 2004-10-01 | 2006-04-05 | 三星电子株式会社 | The equipment of transforming multidimensional video format and method |
KR100898287B1 (en) * | 2005-07-05 | 2009-05-18 | 삼성모바일디스플레이주식회사 | Stereoscopic image display device |
JP2007080357A (en) * | 2005-09-13 | 2007-03-29 | Toshiba Corp | Information storage medium, information reproducing method, information reproducing apparatus |
US7711200B2 (en) * | 2005-09-29 | 2010-05-04 | Apple Inc. | Video acquisition with integrated GPU processing |
JP2007115293A (en) * | 2005-10-17 | 2007-05-10 | Toshiba Corp | Information storage medium, program, information reproducing method, information reproducing apparatus, data transfer method, and data processing method |
US20070140187A1 (en) * | 2005-12-15 | 2007-06-21 | Rokusek Daniel S | System and method for handling simultaneous interaction of multiple wireless devices in a vehicle |
JP4929819B2 (en) * | 2006-04-27 | 2012-05-09 | 富士通株式会社 | Video signal conversion apparatus and method |
US8330801B2 (en) * | 2006-12-22 | 2012-12-11 | Qualcomm Incorporated | Complexity-adaptive 2D-to-3D video sequence conversion |
US8594180B2 (en) * | 2007-02-21 | 2013-11-26 | Qualcomm Incorporated | 3D video encoding |
US20080285652A1 (en) * | 2007-05-14 | 2008-11-20 | Horizon Semiconductors Ltd. | Apparatus and methods for optimization of image and motion picture memory access |
US8479253B2 (en) * | 2007-12-17 | 2013-07-02 | Ati Technologies Ulc | Method, apparatus and machine-readable medium for video processing capability communication between a video source device and a video sink device |
JP2010140235A (en) * | 2008-12-11 | 2010-06-24 | Sony Corp | Image processing apparatus, image processing method, and program |
US20110293240A1 (en) * | 2009-01-20 | 2011-12-01 | Koninklijke Philips Electronics N.V. | Method and system for transmitting over a video interface and for compositing 3d video and 3d overlays |
US20100254453A1 (en) * | 2009-04-02 | 2010-10-07 | Qualcomm Incorporated | Inverse telecine techniques |
JP4748251B2 (en) * | 2009-05-12 | 2011-08-17 | パナソニック株式会社 | Video conversion method and video conversion apparatus |
EP2439934A4 (en) * | 2009-06-05 | 2014-07-02 | Lg Electronics Inc | Image display device and an operating method therefor |
US8373802B1 (en) * | 2009-09-01 | 2013-02-12 | Disney Enterprises, Inc. | Art-directable retargeting for streaming video |
CN102474632A (en) * | 2009-12-08 | 2012-05-23 | 美国博通公司 | Method and system for handling multiple 3-d video formats |
US8964013B2 (en) * | 2009-12-31 | 2015-02-24 | Broadcom Corporation | Display with elastic light manipulator |
KR20110096494A (en) * | 2010-02-22 | 2011-08-30 | 엘지전자 주식회사 | Electronic device and method for displaying stereo-view or multiview sequence image |
KR101699738B1 (en) * | 2010-04-30 | 2017-02-13 | 엘지전자 주식회사 | Operating Method for Image Display Device and Shutter Glass for the Image Display Device |
US9414042B2 (en) * | 2010-05-05 | 2016-08-09 | Google Technology Holdings LLC | Program guide graphics and video in window for 3DTV |
KR20120126458A (en) * | 2011-05-11 | 2012-11-21 | 엘지전자 주식회사 | Method for processing broadcasting signal and display device thereof |
US20130044192A1 (en) * | 2011-08-17 | 2013-02-21 | Google Inc. | Converting 3d video into 2d video based on identification of format type of 3d video and providing either 2d or 3d video based on identification of display device type |
2010
- 2010-12-08 CN CN2010800296617A patent/CN102474632A/en active Pending
- 2010-12-08 US US12/963,014 patent/US20110134217A1/en not_active Abandoned
- 2010-12-08 WO PCT/US2010/059469 patent/WO2011072016A1/en active Application Filing
- 2010-12-08 US US12/962,995 patent/US9137513B2/en active Active
- 2010-12-08 EP EP10836612.1A patent/EP2462748A4/en not_active Withdrawn
- 2010-12-08 US US12/963,212 patent/US20110134211A1/en not_active Abandoned
- 2010-12-08 US US12/963,320 patent/US8947503B2/en not_active Expired - Fee Related
- 2010-12-08 US US12/963,035 patent/US20110134218A1/en not_active Abandoned
2015
- 2015-08-06 US US14/819,728 patent/US9307223B2/en active Active
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110134216A1 (en) * | 2009-12-08 | 2011-06-09 | Darren Neuman | Method and system for mixing video and graphics |
US9137513B2 (en) * | 2009-12-08 | 2015-09-15 | Broadcom Corporation | Method and system for mixing video and graphics |
US9307223B2 (en) * | 2009-12-08 | 2016-04-05 | Broadcom Corporation | Method and system for mixing video and graphics |
US9069374B2 (en) * | 2012-01-04 | 2015-06-30 | International Business Machines Corporation | Web video occlusion: a method for rendering the videos watched over multiple windows |
US10108309B2 (en) | 2012-01-04 | 2018-10-23 | International Business Machines Corporation | Web video occlusion: a method for rendering the videos watched over multiple windows |
US10613702B2 (en) | 2012-01-04 | 2020-04-07 | International Business Machines Corporation | Rendering video over multiple windows |
US20160182834A1 (en) * | 2014-12-19 | 2016-06-23 | Texas Instruments Incorporated | Generation of a video mosaic display |
US9716913B2 (en) * | 2014-12-19 | 2017-07-25 | Texas Instruments Incorporated | Generation of a video mosaic display |
Also Published As
Publication number | Publication date |
---|---|
US20150341613A1 (en) | 2015-11-26 |
US9307223B2 (en) | 2016-04-05 |
US8947503B2 (en) | 2015-02-03 |
US20110134216A1 (en) | 2011-06-09 |
US9137513B2 (en) | 2015-09-15 |
EP2462748A1 (en) | 2012-06-13 |
EP2462748A4 (en) | 2013-11-13 |
US20110134212A1 (en) | 2011-06-09 |
CN102474632A (en) | 2012-05-23 |
US20110134217A1 (en) | 2011-06-09 |
WO2011072016A1 (en) | 2011-06-16 |
US20110134211A1 (en) | 2011-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110134218A1 (en) | Method and system for utilizing mosaic mode to create 3d video | |
US8537201B2 (en) | Combining video data streams of differing dimensionality for concurrent display | |
US8675138B2 (en) | Method and apparatus for fast source switching and/or automatic source switching | |
US8264610B2 (en) | Shared memory multi video channel display apparatus and methods | |
EP2274739B1 (en) | Video multiviewer system with serial digital interface and related methods | |
KR101796603B1 (en) | Mechanism for memory reduction in picture-in-picture video generation | |
US8295364B2 (en) | System and method of video data encoding with minimum baseband data transmission | |
US20110050850A1 (en) | Video combining device, video display apparatus, and video combining method | |
KR102567633B1 (en) | Adaptive high dynamic range tone mapping using overlay instructions | |
CA2661760C (en) | Video multiviewer system with switcher and distributed scaling and related methods | |
WO2007124004A2 (en) | Shared memory multi video channel display apparatus and methods | |
CA2661768C (en) | Video multiviewer system with distributed scaling and related methods | |
US8184137B2 (en) | System and method for ordering of scaling and capturing in a video system | |
US20090135916A1 (en) | Image processing apparatus and method | |
US20060233518A1 (en) | Method of scaling subpicture data and related apparatus | |
US7705916B2 (en) | Method and apparatus for video decoding and de-interlacing | |
US8670070B2 (en) | Method and system for achieving better picture quality in various zoom modes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NEUMAN, DARREN;HERRICK, JASON;ZHAO, QINGHUA;AND OTHERS;SIGNING DATES FROM 20101115 TO 20101201;REEL/FRAME:025754/0403 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120 |
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |