US20140253570A1 - Network Display Support in an Integrated Circuit - Google Patents
- Publication number
- US20140253570A1 (application US 13/788,209)
- Authority
- US
- United States
- Prior art keywords
- display
- frames
- display pipe
- frame
- recited
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/003—Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
- G09G5/006—Details of the interface to the display terminal
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0442—Handling or displaying different aspect ratios, or changing the aspect ratio
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0492—Change of orientation of the displayed image, e.g. upside-down, mirrored
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/12—Frame memory handling
- G09G2360/125—Frame memory handling using unified memory architecture [UMA]
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/10—Use of a protocol of communication by packets in interfaces along the display data pipeline
Definitions
- This invention is related to the field of digital systems and, more particularly, to connecting the systems to network displays.
- Digital systems of various types often include, or are connected to, a display for the user to interact with the device.
- the display can be incorporated into the device. Examples of incorporated displays include the touchscreen on various smart phones, tablet computers, or other personal digital assistants and laptops with the screen in the lid.
- the display can also be connected to the device via a cable. Examples of the connected display include various desktop computers and workstations having a separate display that resides on the desk in front of the user. Some desktops also have an incorporated display (e.g. various iMac® computers from Apple Inc.).
- the display provides a visual interface that the user can view to interact with the system and applications executing on the system. In some cases (e.g. touchscreens), the display also provides a user interface to input to the system. Other user input devices (e.g. keyboards, mice or other pointing devices, etc.) can also be used.
- the digital system includes hardware to interface directly to the display, driving the control signals to control the display of each pixel (e.g. red, green, and blue control signals) in real time as the pixels are displayed on the screen.
- the hardware generates the timing for the display as well, such as the vertical and horizontal blanking. Interfaces such as video graphics adapter (VGA), high definition media interface (HDMI), etc. can be used to connect to these displays.
- the connection between the digital system and the display is a network such as Ethernet, WiFi networks, etc.
- the digital system provides a frame of pixels to be displayed as the data payload in one or more packets transmitted over the network, and the network display receives the packets and controls its own internal timing to display the received frames. Accordingly, the network display is no longer truly a real time device.
- latency between the system and the network display is still an important factor, since the user is viewing the display and may be interacting with the system as well.
- the network display interface includes the network protocol stack and the operating system, between the application that generates the frames and the network display.
- the operating system and the network protocol stack are not typically real time, and so the delays can be unpredictable.
- the network display is used to display the same frames as the local display (incorporated or directly connected) in “mirror mode” (e.g. when making a presentation). Again, the latency to provide the frames to the network display affects the user's perception of whether or not the system is working properly.
- a system includes hardware optimized for communication to a network display.
- the hardware may include a display pipe unit that is configured to composite one or more static images and one or more frames from video sequences to form frames for display by a network display.
- the display pipe unit may include a writeback unit configured to write the composite frames back to memory, from which the frames can be optionally encoded using video encoder hardware and packetized for transmission over a network to a network display.
- the display pipe unit may be configured to issue interrupts to the video encoder during generation of a frame, to overlap encoding and frame generation.
- the system may reduce the latency for communicating frames to the network display.
- the system may also include a second display pipe unit that controls the internal/local display.
- the second display pipe unit may generate the frames for display on the local display, and the frames may be the same as the network display frames (except for differences in the displays themselves, e.g. color depth and resolution) in a mirror mode of operation.
- the frames generated by the first display pipe unit may be generated more quickly than the corresponding frames of the second display pipe unit, because the frames are not tied to the pixel clock that the local display uses. In this fashion, the delays in transmitting the packets to the network display may be at least partially offset by the more rapid frame generation, allowing a more true mirror mode functionality to occur.
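The latency offset described above can be made concrete with a back-of-the-envelope timing sketch. A minimal illustration follows, assuming hypothetical but representative numbers (a 1080p display with standard blanking and an assumed 4 GB/s of memory write bandwidth); none of these figures come from the patent itself.

```python
# Hypothetical timing sketch: a local display paced by its pixel clock takes a
# full refresh period to scan out one frame, while a writeback pipe is limited
# only by memory bandwidth. All numbers below are illustrative assumptions.

def scanout_time_s(h_total: int, v_total: int, pixel_clock_hz: float) -> float:
    """Time to scan one frame out to a real-time (pixel-clock-paced) display."""
    return (h_total * v_total) / pixel_clock_hz

def writeback_time_s(width: int, height: int, bytes_per_pixel: int,
                     mem_bandwidth_bytes_per_s: float) -> float:
    """Time to write one composited frame to memory at a given bandwidth."""
    return (width * height * bytes_per_pixel) / mem_bandwidth_bytes_per_s

# 1080p with typical blanking (2200 x 1125 total) at a 148.5 MHz pixel clock:
scan = scanout_time_s(2200, 1125, 148.5e6)   # ~16.7 ms, one 60 Hz period
# The same 1920x1080 frame (4 bytes/pixel) written back at an assumed 4 GB/s:
wb = writeback_time_s(1920, 1080, 4, 4e9)    # ~2.1 ms

# Time recovered per frame that can offset network transmission latency:
headroom_ms = (scan - wb) * 1e3
```

Under these assumptions the writeback pipe finishes a frame roughly 14 ms before the pixel-clock-paced display does, which is the headroom available to absorb encoding and network delays in mirror mode.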
- FIG. 1 is a block diagram of one embodiment of a system including components of an integrated circuit (IC) forming a system on a chip (SoC).
- FIG. 2 is a flowchart illustrating operation of one embodiment of the components and software executed on the system to display on a network display.
- FIG. 3 is a flowchart illustrating operation of one embodiment of the components and software executed on the system to display on a network display in mirror mode with an internal display.
- FIG. 4 is a block diagram of one embodiment of a computer accessible storage medium.
- FIG. 5 is a block diagram of one embodiment of the IC shown in FIG. 1 in a system.
- FIG. 6 is a block diagram of one embodiment of the system coupled to a network display over a wired network.
- FIG. 7 is a block diagram of one embodiment of the system coupled to a network display over a wireless network
- Various units, circuits, or other components may be described as “configured to” perform a task or tasks.
- “configured to” is a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation.
- the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on.
- the circuitry that forms the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation.
- the memory can include volatile memory such as static or dynamic random access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc.
- In FIG. 1, a block diagram of one embodiment of a system 5 is shown.
- one or more of the components of the system 5 may be integrated onto a single semiconductor substrate as an integrated circuit “chip” often referred to as a system on a chip (SOC).
- the components may be implemented on two or more discrete chips.
- the components of the system 5 that are incorporated into the SOC include a central processing unit (CPU) complex 14 , display pipe units 16 and 18 , a memory controller 22 , an image signal processor (ISP) 24 , a communication fabric or interconnect 27 , a graphics processing unit (GPU) 34 , a memory scaler/rotator (MSR) 28 , a video encoder (VE) 30 , and a network interface 32 .
- the components 14 , 16 , 18 , 22 , 24 , 28 , 30 , 32 , and 34 may all be coupled to the communication fabric 27 .
- the memory controller 22 may be coupled to a memory 12 during use.
- the ISP 24 may be coupled to one or more image sensors 26 (such as a camera) during use and the display pipe unit 16 may be coupled to a local display 20 during use.
- the display pipe unit 16 may be configured to read one or more video sources 50 A- 50 B stored in the memory 12 , composite frames from the video sources, and display the resulting frames on the internal display 20 . Accordingly, the frames displayed on the internal display 20 may not be directly retained in the system 5 as a result of the operation of the display pipe 16 .
- the display pipe 18 may be configured to read one or more video sources 50 A- 50 B, composite the frames to generate output frames, and may write the output frames to the memory system (e.g. the memory 12 , illustrated in FIG. 1 as the DP 2 result 52 ). Accordingly, output frames may be available for further processing in the system 5 (e.g. for transmission to a network display).
- the output frames written to the memory system may be transmitted to a network display over a network via the circuitry 32 .
- the network may include a wireless fidelity (WiFi) network, a cellular data network, a universal serial bus (USB) network, a wired network such as Ethernet, asynchronous transfer mode (ATM), digital subscriber line (DSL), modem over plain old telephone service (POTS), synchronous optical network (SONET), etc.
- the packetization may be performed using a standard protocol stack such as transmission control protocol/Internet protocol (TCP/IP), for example.
- the display pipes 16 and 18 may both process the same video sources 50 A- 50 B in parallel. In other modes, the display pipes 16 and 18 may read different video sources (and only one display pipe 16 or 18 may be active, in some modes).
- a local display such as internal display 20 may be a display that is directly connected to the system 5 and is directly controlled by the system 5 .
- the system 5 may provide various control signals to the display, including timing signals such as one or more clocks and/or the vertical blanking interval and horizontal blanking interval controls.
- the clocks may include the pixel clock indicating that a pixel is being transmitted.
- the data signals may include color signals such as red, green, and blue, for example.
- the system may control the display in real-time, providing the data indicating the pixels to be displayed as the display is displaying the image indicated by the frame.
- the interface to the internal display may be, for example, VGA, HDMI, digital video interface (DVI), a liquid crystal display (LCD) interface, a plasma interface, a cathode ray tube (CRT) interface, any proprietary display interface, etc.
- An internal display may be a display that is integrated into the housing of the system 5 .
- the internal display may include a touchscreen display for a personal digital assistant, smart phone, tablet computer, or other mobile communication device.
- the touchscreen display may form a substantial portion or even all of one of the faces of such mobile communication devices.
- the internal display may also be integrated into the lid of the device such as in a laptop or net top computer, or into the housing of a desktop computer.
- the display pipe 16 may include circuitry to generate the local display controls.
- the display pipes 16 and 18 may be described as having a front end (compositing hardware to produce output frames) and a back end.
- the back end of the display pipe 16 may generate the control interface to the internal display 20 .
- the back end of the display pipe 18 may include circuitry to write the output frames back to the memory system 12 for further processing, packetization for the network display, etc.
- a network may generally refer to any mechanism for general communication between devices according to a defined communication interface and protocol.
- the network may define packets which may be used to communicate among the devices.
- the packet may include, for example, a header that identifies the source and/or destination of the packet on the network (e.g. a source address and/or destination address on the network) and various other information about the packet, as well as a payload or data field containing the data.
- the payload may be a portion or all of the frame to be displayed, for example, when the packets are between the system 5 and the network display.
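The header/payload split described above can be sketched in a few lines. The field layout below (32-bit source and destination addresses plus a 16-bit sequence number) is a hypothetical illustration, not a real display protocol; only the header-then-payload structure comes from the text.

```python
import struct

# Minimal illustrative packet: an assumed fixed header carrying a source
# address, a destination address, and a sequence number, followed by a
# payload holding a portion of a frame. Field widths are hypothetical.
HEADER = struct.Struct(">IIH")  # src addr (u32), dst addr (u32), seq (u16)

def make_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    """Prepend the header to a frame-data payload."""
    return HEADER.pack(src, dst, seq) + payload

def parse_packet(pkt: bytes):
    """Split a received packet back into header fields and payload."""
    src, dst, seq = HEADER.unpack_from(pkt)
    return src, dst, seq, pkt[HEADER.size:]
```

A receiver such as the network display would parse each header, use the sequence number to reassemble frame portions in order, and hand the payloads to its own display timing logic.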
- the network may be a standard network such as WiFi, Ethernet, and others as set forth above.
- the WiFi standards may include, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11 versions a, b, g, n, and any other versions.
- the cellular data network may include, e.g., 3G, 4G, long term evolution (LTE), etc.
- the network protocol stack may follow the Open Systems Interconnection (OSI) model of layers, some of which may be implemented in software executed by the processors in the CPU complex 14 .
- the display pipe 18 is shown in greater detail in FIG. 1 to include a user interface pipe 36 , a video pipe 38 , a blend unit 40 , a color space converter 42 , a chroma downsample unit 44 , a bypass path 46 , and a writeback unit 48 .
- the user interface pipe 36 , the video pipe 38 and the blend unit 40 may form the front end of the display pipe 18 .
- the color space converter 42 , the chroma downsample unit 44 , and the bypass path 46 may be viewed as part of the front end as well.
- the back end may be the writeback unit 48 .
- the writeback unit 48 may be configured to generate one or more write operations on interconnect fabric 27 to write frames generated by the display pipe 18 to the memory system.
- the writeback unit 48 may be programmable with a base address of the DP 2 result area 52 , for example, and may write frame data beginning at the base address as the data is provided from the front end.
- the writeback unit 48 may include buffering, if desired, to store a portion or all of the frame to avoid stalling the front end if the write operations are delayed, in some embodiments.
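The base-address programming described above implies simple row-major address arithmetic as each line of the frame is written. The sketch below makes that arithmetic explicit; the function names and the assumption of a packed row-major layout are illustrative, not taken from the patent.

```python
# Sketch of how a writeback unit might turn a programmed base address and a
# line stride into the memory address of each pixel row it writes out.
# Row-major, tightly packed layout is an assumption for illustration.

def row_address(base: int, stride_bytes: int, row: int) -> int:
    """Address of the first byte of `row` in a frame written at `base`."""
    return base + row * stride_bytes

def frame_write_addresses(base: int, width: int, height: int,
                          bytes_per_pixel: int) -> list:
    """Starting address of every row write for one frame."""
    stride = width * bytes_per_pixel
    return [row_address(base, stride, r) for r in range(height)]
```

For example, a 4-pixel-wide frame at 4 bytes per pixel written to base 0x1000 places its second row at 0x1010.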
- the display pipe 18 may include line buffers configured to store the output composited frame data for reading by the video encoder 30 . That is, the video encoder 30 may read data from the display pipe 18 rather than the memory controller 22 in such embodiments. The composited frame data may still be written to the DP 2 result 52 in the memory as well (e.g. for use as a reference frame in the encoding process).
- the user interface pipe 36 may include hardware to process a static frame for display. Any set of processing may be performed.
- the user interface pipe 36 may be configured to scale the static frame. Other processing may also be supported (e.g. color space conversion, rotation, etc.) in various embodiments.
- the user interface pipe 36 may be so named because the static images may, in some cases, be overlays displayed on a video sequence. The overlays may provide a visual interface to a user (e.g. play, reverse, fast forward, and pause buttons, a timeline illustrating the progress of the video sequence, etc.). More generally, the user interface pipe 36 may be any circuitry to process static frames. While one user interface pipe 36 is shown in FIG. 1 , there may be more than one user interface pipe to concurrently process multiple static frames for display.
- the user interface pipe 36 may further be configured to generate read operations to read the static frame (e.g. video source 50 B in FIG. 1 ).
- the video pipe 38 may be configured to generate read operations to read a video sequence source (e.g. video source 50 A in FIG. 1 ).
- a video sequence may be data describing a series of frames to be displayed at a given display rate (also referred to as a refresh rate).
- the video pipe 38 may be configured to process each frame for display.
- the video pipe 38 may support dither, scaling, and/or color space conversion.
- the blend unit 40 may be configured to blend in the red, green, blue (RGB) color space, and video sequences may often be rendered in the luma-chroma (YCrCb, or YUV) color space. Accordingly, the video pipe 38 may support YCrCb to RGB color space conversion in such an embodiment. While one video pipe 38 is illustrated in FIG. 1 , other embodiments may include more than one video pipe.
- the blend unit 40 may be configured to blend the frames produced by the user interface pipe 36 and the video pipe 38 .
- the display pipe 16 may be configured to blend the static frames and the video sequence frames to produce output frames for display.
- the blend unit 40 may support alpha blending, where each pixel of each input frame has an alpha value describing the transparency/opaqueness of the pixel.
- the blend unit may multiply the pixel by the alpha value and add the results together to produce the output pixel.
- Other styles of blending may be supported in other embodiments.
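The multiply-and-sum blend described above can be sketched directly. Floating-point alphas in 0.0-1.0 are used for clarity; real hardware would likely use fixed-point arithmetic, and the clamping choice here is an assumption.

```python
# Minimal per-pixel alpha blend as described above: each input pixel is
# multiplied by its alpha value and the results are summed to produce the
# output pixel. Channels are 0-255; alphas are 0.0-1.0 floats here.

def alpha_blend(pixels_and_alphas):
    """Blend one output pixel from (pixel, alpha) pairs, one per input frame.

    Each pixel is an (r, g, b) tuple.
    """
    out = [0.0, 0.0, 0.0]
    for (r, g, b), a in pixels_and_alphas:
        out[0] += r * a
        out[1] += g * a
        out[2] += b * a
    # Clamp and round back to 8-bit channels.
    return tuple(min(255, int(round(c))) for c in out)

# A 75%-opaque white UI overlay composited with a 25%-weighted blue video pixel:
blended = alpha_blend([((255, 255, 255), 0.75), ((0, 0, 255), 0.25)])
```

This corresponds to the case where a user interface frame (the white overlay) is blended over a video sequence frame by the blend unit.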
- the display pipe 18 may support a color space conversion on the blended output using the color space conversion unit 42 .
- the color space conversion unit 42 may convert from RGB to YCrCb.
- Other embodiments may perform the opposite conversion or other conversions, or may not include the color space conversion unit 42 .
- the color space conversion may be supported for other downstream processing (e.g. for the video encoder 30 , in this embodiment) rather than for the network display itself.
- Some video encoders operate on downsampled chroma color components. That is, the number of samples used to describe chroma components may be less than the number of samples used to describe the luma component. For example, a 4:2:2 scheme uses one sample of luma for every pixel, but one sample of Cb and Cr for every two pixels on each line. A 4:2:0 scheme uses one sample of luma for every pixel, but one sample of Cb and Cr for every two pixels on every alternate line with no samples of Cb and Cr in between. To produce pixels useable by such a video encoder, the chroma downsample unit 44 may be provided to downsample the chroma components.
- Downsampling may generally refer to reducing the number of samples used to express a color component while retaining as much of the color component as possible.
- the bypass path 46 may be used to bypass the chroma downsample unit 44 .
- Other embodiments may not include a chroma downsample unit, as desired.
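The 4:2:2 scheme described above (one chroma sample per two pixels on a line) can be illustrated with a one-line horizontal downsampler. Averaging adjacent pairs is one common filter choice; the patent does not specify the filter, so this is an assumption.

```python
# Illustrative 4:4:4 -> 4:2:2 chroma downsampling: keep one Cb (or Cr) sample
# for every two pixels on a line by averaging horizontal pairs. Averaging is
# an assumed filter; hardware may use other kernels.

def downsample_422(chroma_line):
    """Reduce a line of chroma samples 2:1 horizontally."""
    assert len(chroma_line) % 2 == 0, "line length must be even"
    return [(chroma_line[i] + chroma_line[i + 1]) // 2
            for i in range(0, len(chroma_line), 2)]
```

A 4:2:0 downsampler would additionally average vertically across alternate line pairs, producing one chroma sample per 2x2 pixel block.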
- Compositing may refer to the processing by which image data from various images (e.g. frames from each video source) are combined to produce an output image. Compositing may include blending, scaling, rotating, color space conversion, etc.
- a frame may be a data structure storing data describing an image to be displayed.
- the data may describe each pixel to be displayed, in terms of color in a color space.
- Any color space may be used.
- a color space may be a set of color components that describe the color of the pixel.
- the RGB color space may describe the pixels in terms of an intensity (or brightness) of red, green, and blue that form the color.
- the color components are red, green, and blue.
- Another color space is the luma-chroma color space which describes the pixels in terms of luminance and chrominance values.
- the luminance (or luma) component may represent the brightness of a pixel (e.g. the “black and whiteness” or achromatic part of the image/pixel).
- the chrominance (or chroma) components may represent the color information.
- the luma component is often denoted Y and the chrominance components as Cr and Cb (or U and V), so the luma-chroma color space is often referred to as YCrCb (or YUV).
- the luma component may be the weighted sum of the gamma-compressed RGB components, and the Cr and Cb components may be the red component (Cr) or the blue component (Cb) minus the luma component.
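The weighted-sum and color-difference relationships above can be sketched numerically. The patent does not fix the coefficients, so the BT.601 full-range weights below are an illustrative assumption; the 0.564/0.713 factors and the 128 offset scale the raw Cb/Cr differences into an 8-bit range, as is standard practice.

```python
# Sketch of the RGB -> YCrCb conversion described above, using assumed BT.601
# full-range coefficients. Y is a weighted sum of R, G, B; Cb and Cr are the
# scaled blue/red differences from luma, offset to stay within 0-255.

def rgb_to_ycbcr(r: int, g: int, b: int):
    """Convert one 8-bit RGB pixel to (Y, Cb, Cr)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of R, G, B
    cb = 128 + 0.564 * (b - y)              # chroma blue: scaled (B - Y)
    cr = 128 + 0.713 * (r - y)              # chroma red: scaled (R - Y)
    return tuple(int(round(v)) for v in (y, cb, cr))
```

Note that pure white and pure black both map to neutral chroma (Cb = Cr = 128), matching the description of luma as the achromatic part of the pixel.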
- the dashed arrows in FIG. 1 may illustrate the movement of data for processing video sources and providing frames to a network display.
- the display pipe 18 may be configured to read the video sources 50 A- 50 B (and more particularly the user interface pipe 36 may be configured to read the source 50 B and the video pipe 38 may be configured to read the source 50 A—arrows 58 A and 58 B, respectively).
- the resulting output frames may be written to the DP 2 result area 52 in the memory 12 by the display pipe 18 (and more particularly the writeback unit 48 may be configured to perform the writes—arrow 58 C).
- the video encoder 30 may be configured to read the DP 2 result area 52 and encode the frame, providing an encoded result 54 . Encoding the frame may include compressing the frame, for example, using any desired video compression algorithm.
- Any encoding scheme or schemes may be used in various embodiments, e.g. a Moving Picture Experts Group (MPEG) encoding scheme.
- the video encoder may write the encoded result to the memory 12 (encoded result 54 , arrow 58 E).
- the encoded result 54 may be processed by the network protocol stack to generate packets for transmission on the network to the network display.
- the network protocol stack is implemented in software executed by the processors in the CPU complex 14 . Accordingly, the CPU complex 14 may read the encoded result 54 (arrow 58 F), packetize the result, and write the packets to another memory area 56 (arrow 58 G).
- the packetized result 58 may be read by the network interface hardware 32 for transmission on the network (arrow 58 H).
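The software packetization step described above amounts to splitting the encoded result into payload-sized chunks before the network interface transmits them. The sketch below shows that chunking; the 1400-byte payload size is an assumption roughly matching a typical Ethernet MTU after headers, not a value from the patent.

```python
# Sketch of the software packetization step: split an encoded frame into
# payload-sized chunks, as a protocol stack would before handing them to the
# network interface hardware. The 1400-byte default is an assumption.

def packetize(encoded_frame: bytes, payload_size: int = 1400):
    """Split an encoded frame into a list of payload chunks."""
    return [encoded_frame[i:i + payload_size]
            for i in range(0, len(encoded_frame), payload_size)]
```

Each chunk would then receive its protocol headers (TCP/IP, for example) before being written to the packetized-result area for the network interface to read.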
- the network interface hardware 32 may be specialized network hardware (e.g. a media access control (MAC) unit and/or data link layer hardware).
- the network interface hardware 32 may be a peripheral interface unit configured to communicate on a peripheral interface to which the network interface controller (NIC) may be coupled.
- peripheral interfaces may include, e.g., USB, peripheral component interconnect (PCI), PCI express (PCIe), etc.
- FIG. 1 illustrates various intermediate results in generating the packets for the network display
- some embodiments may store further intermediate results in the memory 12 as well.
- processing through the various layers of the network protocol stack may include storing the packets in various intermediate forms in the memory 12 .
- there may be multiple copies of the various results 52 , 54 , and 56 to allow for overlapped processing (e.g. the results 52 , 54 , or 56 may be ping-pong buffers of two or more frames of data).
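The ping-pong buffering mentioned above can be sketched as simple index arithmetic: the producer fills one buffer while the consumer reads the other, and the roles swap each frame. The class below is an illustrative minimum; synchronization between producer and consumer is deliberately omitted.

```python
# Minimal ping-pong buffer sketch: the producer (e.g. the display pipe) fills
# one buffer while the consumer (e.g. the video encoder) reads the other.
# Index arithmetic only; real hardware/software would add synchronization.

class PingPong:
    def __init__(self, depth: int = 2):
        self.buffers = [None] * depth
        self.write_idx = 0

    def produce(self, frame):
        """Write a completed frame into the current buffer, then advance."""
        self.buffers[self.write_idx] = frame
        self.write_idx = (self.write_idx + 1) % len(self.buffers)

    def consume(self):
        """Read the most recently completed buffer (one behind the writer)."""
        read_idx = (self.write_idx - 1) % len(self.buffers)
        return self.buffers[read_idx]
```

With a depth of two, the display pipe can write frame N+1 while the encoder is still reading frame N, which is the overlap the multiple result copies enable.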
- the video encoder 30 may include various video encoder acceleration hardware, and may also include a local processor 60 which may execute software to control the overall encoding process.
- the display pipe 18 may be configured to generate an interrupt directly to the video encoder 30 (and more particularly to the processor 60 ) to indicate the availability of frame data in the DP 2 result 52 for encoding. That is, the interrupt may not be passed through interrupt controller hardware which may process and prioritize various interrupts in the system 5 , such as interrupts to be presented to the processors in the CPU complex 14 .
- the interrupt is illustrated as dotted line 62 .
- the interrupt may be transmitted via a dedicated wire from the display pipe 18 to the video encoder 30 , or may be an interrupt message transmitted over the interconnect fabric 27 addressed to the video encoder 30 .
- the display pipe 18 may be configured to interrupt the video encoder 30 /processor 60 multiple times during generation and writing back of a frame to the DP 2 result 52 , to overlap encoding and generation of the frame.
- Other embodiments may use a single interrupt at the end of the frame generation.
- the memory controller 22 may generally include the circuitry for receiving memory requests from the other components of the system 5 and for accessing the memory 12 to complete the memory requests.
- the memory controller 22 may include a memory cache 64 to store recently accessed memory data.
- the memory cache 64 may reduce power consumption in the SOC by avoiding reaccess of data from the memory 12 if it is expected to be read again soon.
- the fetches by the display pipe 18 may be placed in the memory cache 64 (or portions of the fetches may be placed in the memory cache 64 ) so that the subsequent reads by the display pipe 16 may detect hits in the memory cache 64 .
- the interconnect fabric 27 may support the transmission of cache hints with the memory requests to identify candidates for storing in the memory cache 64 .
- the memory controller 22 may be configured to access any type of memory 12 .
- the memory 12 may be static random access memory (SRAM), dynamic RAM (DRAM) such as synchronous DRAM (SDRAM) including double data rate (DDR, DDR2, DDR3, etc.) DRAM.
- Low power/mobile versions of the DDR DRAM may be supported (e.g. LPDDR, mDDR, etc.).
- the memory cache 64 may also be used to store composited frame data generated by the display pipe 18 . Since the composited frame data may be read by the video encoder 30 within a relatively short period of time after generation, the video encoder reads are likely to hit in the memory cache 64 . Thus, the storing of the composited data in the memory cache 64 may reduce power consumption for these reads and may reduce latency as well.
- the ISP 24 may be configured to receive image sensor data from the image sensors 26 (e.g. one or more cameras) and may be configured to process the data to produce image frames that may be suitable, e.g., for display on the local display 20 and/or a network display.
- Cameras may include, e.g., charge coupled devices (CCDs), complementary metal-oxide-semiconductor (CMOS) sensors, etc.
- the CPU complex 14 may include one or more CPU processors that serve as the CPU of the SOC/system 5 .
- the CPU of the system includes the processor(s) that execute the main control software of the system, such as an operating system. Generally, software executed by the CPU during use may control the other components of the system 5 to realize the desired functionality of the system 5 .
- the CPU processors may also execute other software, such as application programs.
- the application programs may provide user functionality, and may rely on the operating system for lower level device control. Accordingly, the CPU processors may also be referred to as application processors.
- the CPU complex 14 may further include other hardware such as an L2 cache and/or an interface to the other components of the system 5 (e.g. an interface to the communication fabric 27 ).
- the GPU 34 may include one or more GPU processors, and may further include local caches for the GPUs and/or an interface circuit for interfacing to the other components of the system 5 (e.g. an interface to the communication fabric 27 ).
- GPU processors may be processors that are optimized for performing operations in a graphics pipeline to render objects into a frame. For example, the operations may include transformation and lighting, triangle assembly, rasterization, shading, texturizing, etc.
- the MSR 28 may be configured to perform scaling and/or rotation on a frame stored in memory, and to write the resulting frame back to memory.
- the MSR 28 may be used to offload operations that might otherwise be performed in the GPU 34 , and may be more power-efficient than the GPU 34 for such operations.
- any of the MSR 28 , the GPU 34 , the ISP 24 , and/or software executing in the CPU cluster may be sources for the video source data 50 A- 50 B.
- video source data 50 A- 50 B may be downloaded to the memory 12 from the network to which the circuitry 32 is coupled, or from other peripherals in the system 5 (not shown in FIG. 1 ).
- the system 5 may include other peripherals.
- the peripherals may be any set of additional hardware functionality included in the system 5 (and optionally incorporated in the SOC).
- the peripherals may include other video peripherals such as video decoders, etc.
- the peripherals may include audio peripherals such as microphones, speakers, interfaces to microphones and speakers, audio processors, digital signal processors, mixers, etc.
- the peripherals may include interface controllers for various interfaces external to the SOC including interfaces such as Universal Serial Bus (USB), peripheral component interconnect (PCI) including PCI Express (PCIe), serial and parallel ports, etc.
- the peripherals may include networking peripherals such as media access controllers (MACs). Any set of hardware may be included.
- the communication fabric 27 may be any communication interconnect and protocol for communicating among the components of the SOC and/or system 5 .
- the communication fabric 27 may be bus-based, including shared bus configurations, cross bar configurations, and hierarchical buses with bridges.
- the communication fabric 27 may also be packet-based, and may be hierarchical with bridges, cross bar, point-to-point, or other interconnects.
- the number of components of the SOC and/or system 5 may vary from embodiment to embodiment. There may be more or fewer of each component than the number shown in FIG. 1 .
- In FIG. 2, a flowchart is shown illustrating operation of one embodiment of the system 5 to operate the network display. While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be performed in parallel by combinatorial logic in the system and/or, for software portions, by execution on multiple processors. Blocks, combinations of blocks, and/or the flowchart as a whole may be pipelined over multiple clock cycles and/or multiple instructions for execution. Blocks that are implemented in software may represent instructions which, when executed on a processor in the system such as the processors in the CPU complex 14 , may implement the operation described for the block. Blocks that are implemented in hardware may represent hardware that is configured to perform the operation.
- the system 5 may assemble the source content (e.g. video sources 50 A- 50 B) in the memory 12 (block 70 ). Assembly of the source content may be at least partially implemented in software, in some embodiments. More particularly, the source content may be generated by software executing on the GPU 34 , rendering image data. The source content may be generated by the MSR 28 and/or the ISP 24 , either of which may be programmed by software executing on the CPU complex 14 . The source content may also be downloaded from the network via the circuitry 32 .
- Software executing on the CPU complex 14 may program the display pipe 18 to process a frame of source content from the video sources 50A-50B (block 72).
- the programming may be accomplished directly, or through a direct memory access (DMA) of data for the various control registers in the display pipe 18 (not shown in FIG. 1 ).
- the programming may point the display pipe 18 to the sources 50A-50B in the memory 12, describe the size and pixel format, etc.
- Software executing on the CPU complex 14 may program the video encoder 30 to process the DP 2 result 52 (block 74). Again, the programming may be accomplished directly or through a DMA in various embodiments. The programming may point the video encoder 30 to the DP 2 result 52 in memory, describe the size and pixel format, etc.
- the display pipe 18 and the video encoder 30 may perform their operations (block 76 ) to generate the DP 2 result 52 and the encoded result 54 , respectively.
- the display pipe 18 may be configured to interrupt the video encoder 30 in response to completing the DP 2 result 52, in an embodiment.
- the display pipe 18 may be configured to interrupt the video encoder 30 multiple times during generation of the DP 2 result 52 to overlap generation of the DP 2 result 52 and the encoded result 54 .
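- The overlap described above can be sketched in software as a producer/consumer hand-off. This is a simplified model only; the slice granularity, the queue, and the function names are illustrative assumptions, not the actual hardware interface:

```python
import queue
import threading

def overlapped_encode(frame_slices):
    """Model of the display pipe interrupting the encoder per slice:
    each completed slice is handed to the encoder thread immediately,
    so encoding overlaps generation of the rest of the frame."""
    ready = queue.Queue()
    encoded = []

    def encoder():
        while True:
            part = ready.get()           # wait for the next "interrupt"
            if part is None:             # end-of-frame sentinel
                break
            encoded.append(part[::-1])   # stand-in for real encoding

    worker = threading.Thread(target=encoder)
    worker.start()
    for part in frame_slices:            # display pipe generates slices
        ready.put(part)                  # "interrupt": slice is ready
    ready.put(None)
    worker.join()
    return encoded
```

Because the queue is first-in first-out and there is a single consumer, the encoded slices come out in frame order even though encoding runs concurrently with generation.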
- the system 5 may packetize the encoded result 54 to generate the packetized result 56 and may transmit the packet(s) to the network display (block 80 ).
- packetizing the result may include processing the result through the standard network protocol stack.
- the network protocol stack may be at least partially implemented in software executed in the CPU complex 14 , in an embodiment, although the link layer and optionally the media access control (MAC) layer may be hardware in the circuitry 32 or a network adapter to which the circuitry 32 is coupled.
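- As a rough illustration of the software side of packetization, a frame's worth of encoded data can be split into payloads with a small header per packet. The header fields and payload size here are made-up assumptions for the sketch, not the actual protocol used by any particular network display:

```python
import struct

def packetize(encoded: bytes, frame_id: int, payload_size: int = 1400) -> list:
    """Split encoded frame data into packets; each packet carries a
    6-byte header: frame id, packet index, and total packet count."""
    chunks = [encoded[i:i + payload_size]
              for i in range(0, len(encoded), payload_size)]
    return [struct.pack(">HHH", frame_id, i, len(chunks)) + chunk
            for i, chunk in enumerate(chunks)]
```

The receiver can use the index/count fields to reassemble the frame regardless of packet arrival order.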
- packetization may be overlapped with video encoding.
- the video encoder 30 may be programmed to interrupt the CPU complex 14 or to write a memory location that is monitored by software executing on the CPU complex 14 each time a packet's worth of data is generated by the video encoder 30.
- the system 5 may return to block 72 to process the next frame.
- the respective blocks 72 and 74 may be skipped.
- the programming performed by the blocks 72 and/or 74 may differ on the initial frame of a video sequence and subsequent frames (e.g. there may be less programming/reprogramming needed after the initial frame).
- the packetization of the encoded result 54 and the generation of the next frame by the display pipe 18 may be overlapped in some embodiments, and the operation of the video encoder 30 may also be overlapped with the packetization in some embodiments.
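- The per-frame flow of FIG. 2 (blocks 72-80) can be summarized in a software model. The composite, encode, and packetize functions below are trivial stand-ins for the display pipe 18, the video encoder 30, and the protocol stack, chosen only to make the data flow concrete:

```python
def composite(sources):
    """Stand-in for the display pipe front end: merge sources per pixel."""
    return [sum(px) for px in zip(*sources)]

def encode(frame):
    """Stand-in for the video encoder: trivial run-length compression."""
    runs = []
    for px in frame:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

def packetize_runs(runs, per_packet=4):
    """Stand-in for the protocol stack: fixed-size payloads."""
    return [runs[i:i + per_packet] for i in range(0, len(runs), per_packet)]

def process_frame(sources):
    """Blocks 72-80 for one frame: composite, encode, packetize."""
    frame = composite(sources)          # display pipe 18 -> DP 2 result
    encoded = encode(frame)             # video encoder 30 -> encoded result
    return packetize_runs(encoded)      # CPU complex 14 -> packetized result
```

Repeating process_frame per frame corresponds to the loop back to block 72, with the overlap optimizations noted above layered on top.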
- Turning now to FIG. 3, a flowchart is shown illustrating operation of one embodiment of the system 5 to operate the network display and the internal display 20 in mirror mode. While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be performed in parallel by combinatorial logic in the system and/or, for software portions, by execution on multiple processors. Blocks, combinations of blocks, and/or the flowchart as a whole may be pipelined over multiple clock cycles and/or multiple instructions for execution. Blocks that are implemented in software may represent instructions which, when executed on a processor in the system such as the processors in the CPU complex 14, may implement the operation described for the block. Blocks that are implemented in hardware may represent hardware that is configured to perform the operation.
- the source content may be assembled in the memory 12 (block 70 ).
- both the display pipes 16 and 18 may be programmed to process a frame of the source content (block 84 ).
- the video encoder may be programmed to process the DP 2 result 52 (block 74 ), and the display pipe 18 and video encoder 30 may operate to produce the DP 2 result 52 and the encoded result 54 (block 76 ).
- the display pipe 16 may process the frame and display the frame on the internal display (block 86 ).
- the system 5 may packetize the encoded result 54 to produce the packetized result 56 and may transmit the packet to the network display (block 80 ).
- the system 5 may determine if there are more frames to be generated and if so (decision block 82 , “yes” leg), may return to block 84 (and/or may skip blocks 84 and/or 74 , or perform fewer programming operations, as discussed above with regard to FIG. 2 ) to process the next frame.
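- The mirror-mode parallelism of FIG. 3 can be modeled as two consumers of the same sources: one path for the internal display and one for the network display. The threads and the composite stand-in below are illustrative assumptions, not the hardware interface:

```python
import threading

def composite(sources):
    """Stand-in for a display pipe front end: blend sources per pixel."""
    return [sum(px) for px in zip(*sources)]

def mirror_frame(sources):
    """Both display pipes consume the same sources in parallel; one
    result goes to the internal display, the other to the network path."""
    shown, sent = [], []
    t_int = threading.Thread(target=lambda: shown.extend(composite(sources)))
    t_net = threading.Thread(target=lambda: sent.extend(composite(sources)))
    t_int.start(); t_net.start()
    t_int.join(); t_net.join()
    return shown, sent
```

In mirror mode the two results are the same frame; the network-path copy is then encoded and packetized rather than driven out on a pixel interface.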
- a computer accessible storage medium may include any storage media accessible by a computer during use to provide instructions and/or data to the computer.
- a computer accessible storage medium may include storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray.
- Storage media may further include volatile or non-volatile memory media such as RAM (e.g. synchronous dynamic RAM (SDRAM), Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, or Flash memory.
- the computer accessible storage medium 200 in FIG. 4 may store code 202 .
- the code 202 may include the code described above with regard to FIG. 2 and/or the code described with regard to FIG. 3 .
- the code 202 may further include any other code, as desired.
- the code 202 may include instructions which, when executed in the system 5 , implement the operation described for various code above, particularly with regard to FIGS. 2 and 3 .
- a carrier medium may include computer accessible storage media as well as transmission media such as wired or wireless transmission.
- the system 150 includes at least one instance of an integrated circuit 158 coupled to one or more peripherals 154 and an external memory 152 .
- a power supply 156 is provided which supplies the supply voltages to the integrated circuit 158 as well as one or more supply voltages to the memory 152 and/or the peripherals 154 .
- more than one instance of the integrated circuit 158 may be included (and more than one memory 152 may be included as well).
- the IC 158 may be the SOC described above with regard to FIG. 1 , and components not included in the SOC may be the external memory 152 and/or the peripherals 154 .
- the peripherals 154 may include any desired circuitry, depending on the type of system 150 .
- the system 150 may be a mobile device (e.g. personal digital assistant (PDA), smart phone, etc.) and the peripherals 154 may include devices for various types of wireless communication, such as WiFi, Bluetooth, cellular, global positioning system, etc.
- the peripherals 154 may also include additional storage, including RAM storage, solid state storage, or disk storage.
- the peripherals 154 may include user interface devices such as a display screen, including touch display screens or multitouch display screens, keyboard or other input devices, microphones, speakers, etc.
- the system 150 may be any type of computing system (e.g. desktop personal computer, laptop, workstation, net top etc.).
- the internal display 20 may be one of the peripherals 154 .
- the camera(s) 126 or other image sensors may be peripherals 154 .
- the external memory 152 may include any type of memory.
- the external memory 152 may be SRAM, dynamic RAM (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, RAMBUS DRAM, etc.
- the external memory 152 may include one or more memory modules to which the memory devices are mounted, such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc.
- the external memory 152 may include one or more memory devices that are mounted on the integrated circuit 158 in a chip-on-chip or package-on-package implementation.
- the external memory 152 may include the memory 12 , in an embodiment.
- Turning now to FIG. 6, a block diagram of one embodiment of the system 150 is shown (including the internal display peripheral 154, which may be the internal display 20 as discussed above).
- the system 150 is coupled to a network display 170 over a wired network 172 .
- the wired network 172 may be Ethernet, for example, or any other wired network including the various examples given above.
- the system 150 may include a connector 174 suitable to connect to the network cable, and the network display may similarly include such a connector 174 .
- FIG. 7 is a block diagram of an embodiment of the system 150 coupled to a wireless network 176 .
- the wireless network may be, e.g., WiFi and/or a cellular data network such as 3G, 4G, LTE, etc.
- each of the system 150 and the network display 170 may include an antenna 178 configured to broadcast/receive on the wireless network 176 .
Description
- 1. Field of the Invention
- This invention is related to the field of digital systems and, more particularly, to connecting the systems to network displays.
- 2. Description of the Related Art
- Digital systems of various types often include, or are connected to, a display for the user to interact with the device. The display can be incorporated into the device. Examples of incorporated displays include the touchscreen on various smart phones, tablet computers, or other personal digital assistants and laptops with the screen in the lid. The display can also be connected to the device via a cable. Examples of the connected display include various desktop computers and workstations having a separate display that resides on the desk in front of the user. Some desktops also have an incorporated display (e.g. various iMac® computers from Apple Inc.). The display provides a visual interface that the user can view to interact with the system and applications executing on the system. In some cases (e.g. touchscreens), the display also provides a user interface to input to the system. Other user input devices (e.g. keyboards, mice or other pointing devices, etc.) can also be used.
- In the above cases, the digital system includes hardware to interface directly to the display, driving the control signals to control the display of each pixel (e.g. red, green, and blue control signals) in real time as the pixels are displayed on the screen. The hardware generates the timing for the display as well, such as the vertical and horizontal blanking. Interfaces such as video graphics adapter (VGA), high definition media interface (HDMI), etc. can be used to connect to these displays.
- More recently, network displays are becoming popular. In a network display, the connection between the digital system and the display is a network such as Ethernet, WiFi networks, etc. The digital system provides a frame of pixels to be displayed as the data payload in one or more packets transmitted over the network, and the network display receives the packets and controls its own internal timing to display the received frames. Accordingly, the network display is no longer truly a real time device. However, latency between the system and the network display is still an important factor, since the user is viewing the display and may be interacting with the system as well. The network display interface includes the network protocol stack and the operating system, between the application that generates the frames and the network display. The operating system and the network protocol stack are not typically real time, and so the delays can be unpredictable. Additionally, in some cases, the network display is used to display the same frames as the local display (incorporated or directly connected) in “mirror mode” (e.g. when making a presentation). Again, the latency to provide the frames to the network display affects the user's perception of whether or not the system is working properly.
- In an embodiment, a system includes hardware optimized for communication to a network display. The hardware may include a display pipe unit that is configured to composite one or more static images and one or more frames from video sequences to form frames for display by a network display. The display pipe unit may include a writeback unit configured to write the composite frames back to memory, from which the frames can be optionally encoded using video encoder hardware and packetized for transmission over a network to a network display. In an embodiment, the display pipe unit may be configured to issue interrupts to the video encoder during generation of a frame, to overlap encoding and frame generation.
- In some embodiments, the system may reduce the latency for communicating frames to the network display. The system may also include a second display pipe unit that controls the internal/local display. The second display pipe unit may generate the frames for display on the local display, and the frames may be the same as the network display frames (except for differences in the displays themselves, e.g. color depth and resolution) in a mirror mode of operation. The frames generated by the first display pipe unit may be generated more quickly than the corresponding frames of the second display pipe unit, because the frames are not tied to the pixel clock that the local display uses. In this fashion, the delays in transmitting the packets to the network display may be at least partially offset by the more rapid frame generation, allowing a more true mirror mode functionality to occur.
- The following detailed description makes reference to the accompanying drawings, which are now briefly described.
- FIG. 1 is a block diagram of one embodiment of a system including components of an integrated circuit (IC) forming a system on a chip (SoC).
- FIG. 2 is a flowchart illustrating operation of one embodiment of the components and software executed on the system to display on a network display.
- FIG. 3 is a flowchart illustrating operation of one embodiment of the components and software executed on the system to display on a network display in mirror mode with an internal display.
- FIG. 4 is a block diagram of one embodiment of a computer accessible storage medium.
- FIG. 5 is a block diagram of one embodiment of the IC shown in FIG. 1 in a system.
- FIG. 6 is a block diagram of one embodiment of the system coupled to a network display over a wired network.
- FIG. 7 is a block diagram of one embodiment of the system coupled to a network display over a wireless network.
- While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to.
- Various units, circuits, or other components may be described as “configured to” perform a task or tasks. In such contexts, “configured to” is a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation. As such, the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on. In general, the circuitry that forms the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation. The memory can include volatile memory such as static or dynamic random access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc. Similarly, various units/circuits/components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a unit/circuit/component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. §112, paragraph six interpretation for that unit/circuit/component.
- This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment, although embodiments that include any combination of the features are generally contemplated, unless expressly disclaimed herein. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
- Turning now to
FIG. 1, a block diagram of one embodiment of a system 5 is shown. In one embodiment, one or more of the components of the system 5 may be integrated onto a single semiconductor substrate as an integrated circuit “chip” often referred to as a system on a chip (SOC). In other embodiments, the components may be implemented on two or more discrete chips. In the illustrated embodiment, the components of the system 5 that are incorporated into the SOC include a central processing unit (CPU) complex 14, display pipe units 16 and 18, a memory controller 22, an image signal processor (ISP) 24, a communication fabric or interconnect 27, a graphics processing unit (GPU) 34, a memory scaler/rotator (MSR) 28, a video encoder (VE) 30, and a network interface 32. These components may be coupled to one another via the communication fabric 27. The memory controller 22 may be coupled to a memory 12 during use. Similarly, the ISP 24 may be coupled to one or more image sensors 26 (such as a camera) during use and the display pipe unit 16 may be coupled to a local display 20 during use. - The display pipe unit 16 (or more briefly “display pipe”) may be configured to read one or
more video sources 50A-50B stored in the memory 12, composite frames from the video sources, and display the resulting frames on the internal display 20. Accordingly, the frames displayed on the internal display 20 may not be directly retained in the system 5 as a result of the operation of the display pipe 16. The display pipe 18, on the other hand, may be configured to read one or more video sources 50A-50B, composite the frames to generate output frames, and may write the output frames to the memory system (e.g. the memory 12, illustrated in FIG. 1 as the DP2 result 52). Accordingly, output frames may be available for further processing in the system 5 (e.g. encoding by the video encoder 30 to produce the encoded result 54, packetization for network transmission stored as the packetized result 56, etc.). In one embodiment, the output frames written to the memory system may be transmitted to a network display over a network via the circuitry 32. For example, the network may include a wireless fidelity (WiFi) network, a cellular data network, a universal serial bus (USB) network, a wired network such as Ethernet, asynchronous transfer mode (ATM), digital subscriber line (DSL), modem over plain old telephone service (POTS), synchronous optical network (SONET), etc. The packetization may be performed using a standard protocol stack such as transport control protocol/Internet protocol (TCP/IP), for example. In mirror mode, the display pipes 16 and 18 may process the same video sources 50A-50B in parallel. In other modes, the display pipes 16 and 18 may process different video sources. - A local display such as
internal display 20 may be a display that is directly connected to the system 5 and is directly controlled by the system 5. The system 5 may provide various control signals to the display, including timing signals such as one or more clocks and/or the vertical blanking interval and horizontal blanking interval controls. The clocks may include the pixel clock indicating that a pixel is being transmitted. The data signals may include color signals such as red, green, and blue, for example. The system may control the display in real-time, providing the data indicating the pixels to be displayed as the display is displaying the image indicated by the frame. The interface to the internal display may be, for example, VGA, HDMI, digital video interface (DVI), a liquid crystal display (LCD) interface, a plasma interface, a cathode ray tube (CRT) interface, any proprietary display interface, etc. An internal display may be a display that is integrated into the housing of the system 5. For example, the internal display may include a touchscreen display for a personal digital assistant, smart phone, tablet computer, or other mobile communication device. The touchscreen display may form a substantial portion or even all of one of the faces of such mobile communication devices. The internal display may also be integrated into the lid of the device such as in a laptop or net top computer, or into the housing of a desktop computer. Accordingly, in addition to the hardware circuitry to composite the various video sources 50A-50B, the display pipe 16 may include circuitry to generate the local display controls. The back end of the display pipe 16 may generate the control interface to the internal display 20. The back end of the display pipe 18 may include circuitry to write the output frames back to the memory system 12 for further processing, packetization for the network display, etc.
- A network may generally refer to any mechanism for general communication between devices according to a defined communication interface and protocol. The network may define packets which may be used to communicate among the devices. The packet may include, for example, a header that identifies the source and/or destination of the packet on the network (e.g. a source address and/or destination address on the network) and various other information about the packet, as well as a payload or data field containing the data. The payload may be a portion or all of the frame to be displayed, for example, when the packets are between the
system 5 and the network display. - As mentioned previously, the network may be a standard network such as WiFi, Ethernet, and others as set forth above. The WiFi standards may include, for example, Institute of Electrical and Electronic Engineers (IEEE) 802.11 versions a, b, g, n, and any other versions. The cellular data network may include, e.g., 3G, 4G, long term evolution (LTE), etc. The network protocol stack may follow the Open Systems Interconnection (OSI) model of layers, some of which may be implemented in software executed by the processors in the
CPU complex 14. - The
display pipe 18 is shown in greater detail in FIG. 1 to include a user interface pipe 36, a video pipe 38, a blend unit 40, a color space converter 42, a chroma downsample unit 44, a bypass path 46, and a writeback unit 48. The user interface pipe 36, the video pipe 38, and the blend unit 40 may form the front end of the display pipe 18. The color space converter 42, the chroma downsample unit 44, and the bypass path 46 may be viewed as part of the front end as well. The back end may be the writeback unit 48. - The
writeback unit 48 may be configured to generate one or more write operations on the interconnect fabric 27 to write frames generated by the display pipe 18 to the memory system. The writeback unit 48 may be programmable with a base address of the DP2 result area 52, for example, and may write frame data beginning at the base address as the data is provided from the front end. The writeback unit 48 may include buffering, if desired, to store a portion or all of the frame to avoid stalling the front end if the write operations are delayed, in some embodiments. - In an embodiment, the
display pipe 18 may include line buffers configured to store the output composited frame data for reading by the video encoder 30. That is, the video encoder 30 may read data from the display pipe 18 rather than the memory controller 22 in such embodiments. The composited frame data may still be written to the DP2 result 52 in the memory as well (e.g. for use as a reference frame in the encoding process). - The
user interface pipe 36 may include hardware to process a static frame for display. Any set of processing may be performed. For example, the user interface pipe 36 may be configured to scale the static frame. Other processing may also be supported (e.g. color space conversion, rotation, etc.) in various embodiments. The user interface pipe 36 may be so named because the static images may, in some cases, be overlays displayed on a video sequence. The overlays may provide a visual interface to a user (e.g. play, reverse, fast forward, and pause buttons, a timeline illustrating the progress of the video sequence, etc.). More generally, the user interface pipe 36 may be any circuitry to process static frames. While one user interface pipe 36 is shown in FIG. 1, there may be more than one user interface pipe to concurrently process multiple static frames for display. The user interface pipe 36 may further be configured to generate read operations to read the static frame (e.g. video source 50B in FIG. 1). - The
video pipe 38 may be configured to generate read operations to read a video sequence source (e.g. video source 50A in FIG. 1). A video sequence may be data describing a series of frames to be displayed at a given display rate (also referred to as a refresh rate). The video pipe 38 may be configured to process each frame for display. For example, in an embodiment, the video pipe 38 may support dither, scaling, and/or color space conversion. In an embodiment, the blend unit 40 may be configured to blend in the red, green, blue (RGB) color space, and video sequences may often be rendered in the luma-chroma (YCrCb, or YUV) color space. Accordingly, the video pipe 38 may support YCrCb to RGB color space conversion in such an embodiment. While one video pipe 38 is illustrated in FIG. 1, other embodiments may include more than one video pipe. - The
blend unit 40 may be configured to blend the frames produced by the user interface pipe 36 and the video pipe 38. The display pipe 16 may be configured to blend the static frames and the video sequence frames to produce output frames for display. In one embodiment, the blend unit 40 may support alpha blending, where each pixel of each input frame has an alpha value describing the transparency/opaqueness of the pixel. The blend unit may multiply the pixel by the alpha value and add the results together to produce the output pixel. Other styles of blending may be supported in other embodiments. - In the illustrated embodiment, the
display pipe 18 may support a color space conversion on the blended output using the color space conversion unit 42. For example, if the network display is configured to display frames represented in the YCrCb space and the blend unit 40 produces frames represented in the RGB space, the color space conversion unit 42 may convert from RGB to YCrCb. Other embodiments may perform the opposite conversion or other conversions, or may not include the color space conversion unit 42. Additionally, the color space conversion may be supported for other downstream processing (e.g. for the video encoder 30, in this embodiment) rather than for the network display itself. - Some video encoders operate on downsampled chroma color components. That is, the number of samples used to describe chroma components may be less than the number of samples used to describe the luma component. For example, a 4:2:2 scheme uses one sample of luma for every pixel, but one sample of Cb and Cr for every two pixels on each line. A 4:2:0 scheme uses one sample of luma for every pixel, but one sample of Cb and Cr for every two pixels on every alternate line with no samples of Cb and Cr in between. To produce pixels useable by such a video encoder, the
chroma downsample unit 44 may be provided to downsample the chroma components. Downsampling may generally refer to reducing the number of samples used to express a color component while retaining as much of the color component as possible. For cases in which the video encoder supports full chroma components, the bypass path 46 may be used to bypass the chroma downsample unit 44. Other embodiments may not include a chroma downsample unit, as desired. - The various processing performed by the
display pipes - Generally, a frame may be a data structure storing data describing an image to be displayed. The data may describe each pixel to be displayed, in terms of color in a color space. Any color space may be used. A color space may be a set of color components that describe the color of the pixel. For example, the RGB color space may describe the pixels in terms of an intensity (or brightness) of red, green, and blue that form the color. Thus, the color components are red, green, and blue. Another color space is the luma-chroma color space which describes the pixels in terms of luminance and chrominance values. The luminance (or luma) component may represent the brightness of a pixel (e.g. the “black and whiteness” or achromatic part of the image/pixel). The chrominance (or chroma) components may represent the color information. The luma component is often denoted Y and the chrominance components as Cr and Cb (or U and V), so the luma-chroma color space is often referred to as YCrCb (or YUV). When converting from RGB, the luma component may be the weighted sum of the gamma-compressed RGB components, and the Cr and Cb components may be the red component (Cr) or the blue component (Cb) minus the luma component.
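- The RGB to luma-chroma relationship can be illustrated with the familiar BT.601-style coefficients. The exact coefficients below are an illustrative assumption; the coefficients actually used by a given display pipe or encoder may differ:

```python
def rgb_to_ycbcr(r, g, b):
    """Luma Y as a weighted sum of R, G, B; Cb and Cr as scaled
    blue/red differences from luma, offset to center at 128."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y) + 128.0
    cr = 0.713 * (r - y) + 128.0
    return y, cb, cr
```

For a neutral gray (equal R, G, B), both chroma components land at the 128 midpoint, reflecting that the chroma channels carry only the color difference information.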
- The dashed arrows in
FIG. 1 may illustrate the movement of data for processing video sources and providing frames to a network display. The display pipe 18 may be configured to read the video sources 50A-50B (and more particularly the user interface pipe 36 may be configured to read the source 50B and the video pipe 38 may be configured to read the source 50A). The output frames may be written to the DP2 result area 52 in the memory 12 by the display pipe 18 (and more particularly the writeback unit 48 may be configured to perform the writes—arrow 58C). The video encoder 30 may be configured to read the DP2 result area 52 and encode the frame, providing an encoded result 54. Encoding the frame may include compressing the frame, for example, using any desired video compression algorithm. For example, Moving Picture Experts Group (MPEG) encoding may be used, whereby data for a frame can be generated by reference to other frames. Any encoding scheme or schemes may be used in various embodiments. The video encoder may write the encoded result to the memory 12 (encoded result 54, arrow 58E). - The encoded
result 54 may be processed by the network protocol stack to generate packets for transmission on the network to the network display. In one embodiment, the network protocol stack is implemented in software executed by the processors in the CPU complex 14. Accordingly, the CPU complex 14 may read the encoded result 54 (arrow 58F), packetize the result, and write the packets to another memory area 56 (arrow 58G). The packetized result 56 may be read by the network interface hardware 32 for transmission on the network (arrow 58H). - In an embodiment, the network interface hardware 32 may be specialized network hardware (e.g. a media access control (MAC) unit and/or data link layer hardware). In another embodiment, the network interface hardware 32 may be a peripheral interface unit configured to communicate on a peripheral interface to which the network interface controller (NIC) may be coupled. Such peripheral interfaces may include, e.g., USB, peripheral component interconnect (PCI), PCI express (PCIe), etc.
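The packetization step above can be sketched as splitting the encoded result into payload-sized chunks with a sequence number. This is a hypothetical simplification; a real network protocol stack would wrap each chunk in transport, network, and link-layer headers, and the chunk size would be chosen from the link's MTU:

```python
def packetize(encoded: bytes, payload_size: int = 1400):
    """Split an encoded frame into sequence-numbered payloads.

    Illustrative only: real packetization adds protocol headers at
    each layer of the stack and handles retransmission/ordering.
    """
    return [
        (seq, encoded[off:off + payload_size])
        for seq, off in enumerate(range(0, len(encoded), payload_size))
    ]
```

A 3000-byte encoded result with a 1400-byte payload size would produce three packets, the last carrying the 200-byte remainder.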
- It is noted that, while
FIG. 1 illustrates various intermediate results in generating the packets for the network display, some embodiments may store further intermediate results in the memory 12 as well. For example, processing through the various layers of the network protocol stack may include storing the packets in various intermediate forms in the memory 12. Furthermore, there may be multiple copies of the various results in the memory 12 in some embodiments. - The
video encoder 30 may include various video encoder acceleration hardware, and may also include a local processor 60 which may execute software to control the overall encoding process. In one embodiment, the display pipe 18 may be configured to generate an interrupt directly to the video encoder 30 (and more particularly to the processor 60) to indicate the availability of frame data in the DP2 result 52 for encoding. That is, the interrupt may not be passed through interrupt controller hardware which may process and prioritize various interrupts in the system 5, such as interrupts to be presented to the processors in the CPU complex 14. The interrupt is illustrated as dotted line 62. The interrupt may be transmitted via a dedicated wire from the display pipe 18 to the video encoder 30, or may be an interrupt message transmitted over the interconnect fabric 27 addressed to the video encoder 30. In some embodiments, the display pipe 18 may be configured to interrupt the video encoder 30/processor 60 multiple times during generation and writing back of a frame to the DP2 result 52, to overlap encoding and generation of the frame. Other embodiments may use a single interrupt at the end of the frame generation. - The
memory controller 22 may generally include the circuitry for receiving memory requests from the other components of the system 5 and for accessing the memory 12 to complete the memory requests. In the illustrated embodiment, the memory controller 22 may include a memory cache 64 to store recently accessed memory data. In SOC implementations, for example, the memory cache 64 may reduce power consumption in the SOC by avoiding reaccess of data from the memory 12 if it is expected to be read again soon. In mirror mode, the fetches by the display pipe 18 may be placed in the memory cache 64 (or portions of the fetches may be placed in the memory cache 64) so that the subsequent reads by the display pipe 16 may detect hits in the memory cache 64. The interconnect fabric 27 may support the transmission of cache hints with the memory requests to identify candidates for storing in the memory cache 64. The memory controller 22 may be configured to access any type of memory 12. For example, the memory 12 may be static random access memory (SRAM) or dynamic RAM (DRAM) such as synchronous DRAM (SDRAM), including double data rate (DDR, DDR2, DDR3, etc.) DRAM. Low power/mobile versions of the DDR DRAM may be supported (e.g. LPDDR, mDDR, etc.). - The
memory cache 64 may also be used to store composited frame data generated by the display pipe 18. Since the composited frame data may be read by the video encoder 30 within a relatively short period of time after generation, the video encoder reads are likely to hit in the memory cache 64. Thus, storing the composited data in the memory cache 64 may reduce power consumption for these reads and may reduce latency as well. - The
ISP 24 may be configured to receive image sensor data from the image sensors 26 (e.g. one or more cameras) and may be configured to process the data to produce image frames that may be suitable, e.g., for display on the local display 20 and/or a network display. Cameras may include, e.g., charge coupled devices (CCDs), complementary metal-oxide-semiconductor (CMOS) sensors, etc. - The
CPU complex 14 may include one or more CPU processors that serve as the CPU of the SOC/system 5. The CPU of the system includes the processor(s) that execute the main control software of the system, such as an operating system. Generally, software executed by the CPU during use may control the other components of the system 5 to realize the desired functionality of the system 5. The CPU processors may also execute other software, such as application programs. The application programs may provide user functionality, and may rely on the operating system for lower level device control. Accordingly, the CPU processors may also be referred to as application processors. The CPU complex 14 may further include other hardware such as an L2 cache and/or an interface to the other components of the system 5 (e.g. an interface to the communication fabric 27). - The
GPU 34 may include one or more GPU processors, and may further include local caches for the GPUs and/or an interface circuit for interfacing to the other components of the system 5 (e.g. an interface to the communication fabric 27). Generally, GPU processors may be processors that are optimized for performing operations in a graphics pipeline to render objects into a frame. For example, the operations may include transformation and lighting, triangle assembly, rasterization, shading, texturing, etc. - The
MSR 28 may be configured to perform scaling and/or rotation on a frame stored in memory, and to write the resulting frame back to memory. The MSR 28 may be used to offload operations that might otherwise be performed in the GPU 34, and may be more power-efficient than the GPU 34 for such operations. - In general, any of the
MSR 28, the GPU 34, the ISP 24, and/or software executing in the CPU complex 14 may be sources for the video source data 50A-50B. Additionally, video source data 50A-50B may be downloaded to the memory 12 from the network to which the circuitry 32 is coupled, or from other peripherals in the system 5 (not shown in FIG. 1 ). - Although not explicitly illustrated in
FIG. 1 , the system 5 may include other peripherals. The peripherals may be any set of additional hardware functionality included in the system 5 (and optionally incorporated in the SOC). For example, the peripherals 18A-18B may include other video peripherals such as video decoders, etc. The peripherals may include audio peripherals such as microphones, speakers, interfaces to microphones and speakers, audio processors, digital signal processors, mixers, etc. The peripherals may include interface controllers for various interfaces external to the SOC, including interfaces such as Universal Serial Bus (USB), peripheral component interconnect (PCI) including PCI Express (PCIe), serial and parallel ports, etc. The peripherals may include networking peripherals such as media access controllers (MACs). Any set of hardware may be included. - The
communication fabric 27 may be any communication interconnect and protocol for communicating among the components of the SOC and/or system 5. The communication fabric 27 may be bus-based, including shared bus configurations, cross bar configurations, and hierarchical buses with bridges. The communication fabric 27 may also be packet-based, and may be hierarchical with bridges, cross bars, point-to-point, or other interconnects. - It is noted that the number of components of the SOC and/or
system 5 may vary from embodiment to embodiment. There may be more or fewer of each component than the number shown in FIG. 1 . - Turning now to
FIG. 2 , a flowchart is shown illustrating operation of one embodiment of the system 5 to operate the network display. While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be performed in parallel by combinatorial logic in the system and/or, for software portions, by execution on multiple processors. Blocks, combinations of blocks, and/or the flowchart as a whole may be pipelined over multiple clock cycles and/or multiple instructions for execution. Blocks that are implemented in software may represent instructions which, when executed on a processor in the system such as the processors in the CPU complex 14, implement the operation described for the block. Blocks that are implemented in hardware may represent hardware that is configured to perform the operation. - The
system 5 may assemble the source content (e.g. video sources 50A-50B) in the memory 12 (block 70). Assembly of the source content may be at least partially implemented in software, in some embodiments. More particularly, the source content may be generated by software executing on the GPU 34, rendering image data. The source content may be generated by the MSR 28 and/or the ISP 24, either of which may be programmed by software executing on the CPU complex 14. The source content may also be downloaded from the network via the circuitry 32. - Software executing on the
CPU complex 14 may program the display pipe 18 to process a frame of source content from the video sources 50A-50B (block 72). The programming may be accomplished directly, or through a direct memory access (DMA) of data for the various control registers in the display pipe 18 (not shown in FIG. 1 ). For example, the programming may point the display pipe 18 to the sources 50A-50B in the memory 12, describe the size and pixel format, etc. - Software executing on the
CPU complex 14 may program the video encoder 30 to process the DP2 result 52 (block 74). Again, the programming may be accomplished directly or through a DMA in various embodiments. The programming may point the video encoder 30 to the DP2 result 52 in memory, describe the size and pixel format, etc. - The
display pipe 18 and the video encoder 30 may perform their operations (block 76) to generate the DP2 result 52 and the encoded result 54, respectively. As mentioned previously, the display pipe 18 may be configured to interrupt the video encoder 30 in response to completing the DP2 result 52, in an embodiment. In an embodiment, the display pipe 18 may be configured to interrupt the video encoder 30 multiple times during generation of the DP2 result 52 to overlap generation of the DP2 result 52 and the encoded result 54. - When the encoded
result 54 is completed by the video encoder 30 (decision block 78, “yes” leg), the system 5 may packetize the encoded result 54 to generate the packetized result 56 and may transmit the packet(s) to the network display (block 80). In an embodiment, packetizing the result may include processing the result in the standard network protocol stack. The network protocol stack may be at least partially implemented in software executed in the CPU complex 14, in an embodiment, although the link layer and optionally the media access control (MAC) layer may be hardware in the circuitry 32 or a network adapter to which the circuitry 32 is coupled. In an embodiment, packetization may be overlapped with video encoding. For example, the video encoder 30 may be programmed to interrupt the CPU complex 14, or to write a memory location that is monitored by software executing on the CPU complex 14, each time a packet-worth of data is generated by the video encoder 30. - If there are more frames to be processed (
decision block 82, “yes” leg), the system 5 may return to block 72 to process the next frame. Alternatively, if the display pipe 18 and/or the video encoder 30 do not need to be reprogrammed, the respective blocks may be skipped. That is, the programming performed in blocks 72 and/or 74 may differ on the initial frame of a video sequence and subsequent frames (e.g. there may be less programming/reprogramming needed after the initial frame). Still further, the packetization of the encoded result 54 and the generation of the next frame by the display pipe 18 may be overlapped in some embodiments, and the operation of the video encoder 30 may also be overlapped with the packetization in some embodiments. - Turning now to
FIG. 3 , a flowchart is shown illustrating operation of one embodiment of the system 5 to operate the network display and the internal display 20 in mirror mode. While the blocks are shown in a particular order for ease of understanding, other orders may be used. Blocks may be performed in parallel by combinatorial logic in the system and/or, for software portions, by execution on multiple processors. Blocks, combinations of blocks, and/or the flowchart as a whole may be pipelined over multiple clock cycles and/or multiple instructions for execution. Blocks that are implemented in software may represent instructions which, when executed on a processor in the system such as the processors in the CPU complex 14, implement the operation described for the block. Blocks that are implemented in hardware may represent hardware that is configured to perform the operation. - Similar to the flowchart of
FIG. 2 , the source content may be assembled in the memory 12 (block 70). In this case, both the display pipes 16 and 18 may be programmed to process the frame (block 84), and the display pipe 18 and video encoder 30 may operate to produce the DP2 result 52 and the encoded result 54 (block 76). In parallel with the display pipe 18 and video encoder 30, the display pipe 16 may process the frame and display the frame on the internal display (block 86). Once the encoded result 54 is ready for packetization (or at least one packet worth is ready—decision block 78, “yes” leg), the system 5 may packetize the encoded result 54 to produce the packetized result 56 and may transmit the packet to the network display (block 80). - The
system 5 may determine if there are more frames to be generated and, if so (decision block 82, “yes” leg), may return to block 84 (and/or may skip blocks 84 and/or 74, or perform fewer programming operations, as discussed above with regard to FIG. 2 ) to process the next frame. - Turning now to
FIG. 4 , a block diagram of one embodiment of a computer accessible storage medium 200 is shown. Generally speaking, a computer accessible storage medium may include any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media may further include volatile or non-volatile memory media such as RAM (e.g. synchronous dynamic RAM (SDRAM), Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, or Flash memory. The storage media may be physically included within the computer to which the storage media provides instructions/data. Alternatively, the storage media may be connected to the computer. For example, the storage media may be connected to the computer over a network or wireless link, such as network attached storage. The storage media may be connected through a peripheral interface such as the Universal Serial Bus (USB). Generally, the computer accessible storage medium 200 may store data in a non-transitory manner, where non-transitory in this context may refer to not transmitting the instructions/data on a signal. For example, non-transitory storage may be volatile (and may lose the stored instructions/data in response to a power down) or non-volatile. - The computer
accessible storage medium 200 in FIG. 4 may store code 202. The code 202 may include the code described above with regard to FIG. 2 and/or the code described with regard to FIG. 3 . The code 202 may further include any other code, as desired. The code 202 may include instructions which, when executed in the system 5, implement the operation described for various code above, particularly with regard to FIGS. 2 and 3 . A carrier medium may include computer accessible storage media as well as transmission media such as wired or wireless transmission. - In an embodiment, the computer
accessible storage medium 200 may include the memory 12 shown in FIG. 1 . - Turning next to
FIG. 5 , a block diagram of one embodiment of a system 150 is shown. In the illustrated embodiment, the system 150 includes at least one instance of an integrated circuit 158 coupled to one or more peripherals 154 and an external memory 152. A power supply 156 is provided which supplies the supply voltages to the integrated circuit 158 as well as one or more supply voltages to the memory 152 and/or the peripherals 154. In some embodiments, more than one instance of the integrated circuit 158 may be included (and more than one memory 152 may be included as well). The IC 158 may be the SOC described above with regard to FIG. 1 , and components not included in the SOC may be the external memory 152 and/or the peripherals 154. - The
peripherals 154 may include any desired circuitry, depending on the type of system 150. For example, in one embodiment, the system 150 may be a mobile device (e.g. personal digital assistant (PDA), smart phone, etc.) and the peripherals 154 may include devices for various types of wireless communication, such as WiFi, Bluetooth, cellular, global positioning system, etc. The peripherals 154 may also include additional storage, including RAM storage, solid state storage, or disk storage. The peripherals 154 may include user interface devices such as a display screen, including touch display screens or multitouch display screens, a keyboard or other input devices, microphones, speakers, etc. In other embodiments, the system 150 may be any type of computing system (e.g. desktop personal computer, laptop, workstation, nettop, etc.). In an embodiment, the internal display 20 may be one of the peripherals 154. In an embodiment, the camera(s) or other image sensors 26 may be peripherals 154. - The
external memory 152 may include any type of memory. For example, the external memory 152 may be SRAM, dynamic RAM (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, RAMBUS DRAM, etc. The external memory 152 may include one or more memory modules to which the memory devices are mounted, such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc. Alternatively, the external memory 152 may include one or more memory devices that are mounted on the integrated circuit 158 in a chip-on-chip or package-on-package implementation. The external memory 152 may include the memory 12, in an embodiment. - Turning now to
FIG. 6 , a block diagram of one embodiment of the system 150 is shown (including the internal display peripheral 154, which may be the internal display 20 as discussed above). The system 150 is coupled to a network display 170 over a wired network 172. The wired network 172 may be Ethernet, for example, or any other wired network, including the various examples given above. In such embodiments, the system 150 may include a connector 174 suitable to connect to the network cable, and the network display may similarly include such a connector 174. - Alternatively,
FIG. 7 is a block diagram of an embodiment of the system 150 coupled to a wireless network 176. The wireless network may be, e.g., WiFi and/or a cellular data network such as 3G, 4G, LTE, etc. In the embodiment of FIG. 7 , each of the system 150 and the network display 170 may include an antenna 178 configured to broadcast/receive on the wireless network 176. - Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
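The per-frame flow of FIG. 2 (composite, encode, packetize, transmit, repeat while frames remain) can be summarized in a short driver-loop sketch. All of the callables below are hypothetical placeholders for the hardware and software stages described in the specification, not an actual software interface:

```python
def stream_frames(frames, compose, encode, packetize, send):
    """Hypothetical sketch of the FIG. 2 per-frame loop.

    frames:    iterable of source frame descriptors (decision block 82)
    compose:   stands in for the display pipe producing the DP2 result
    encode:    stands in for the video encoder producing the encoded result
    packetize: stands in for the network protocol stack (block 80)
    send:      stands in for the network interface hardware
    """
    for frame in frames:                 # loop while more frames remain
        composited = compose(frame)      # display pipe -> composited frame
        encoded = encode(composited)     # video encoder -> encoded result
        for pkt in packetize(encoded):   # protocol stack -> packets
            send(pkt)                    # transmit to the network display
```

In an optimized implementation these stages would overlap (per-slice interrupts from the display pipe to the encoder, and per-packet notification from the encoder to the CPU complex), rather than run strictly in sequence as in this sketch.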
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/788,209 US9087393B2 (en) | 2013-03-07 | 2013-03-07 | Network display support in an integrated circuit |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140253570A1 true US20140253570A1 (en) | 2014-09-11 |
US9087393B2 US9087393B2 (en) | 2015-07-21 |
Family
ID=51487319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/788,209 Active 2033-07-26 US9087393B2 (en) | 2013-03-07 | 2013-03-07 | Network display support in an integrated circuit |
Country Status (1)
Country | Link |
---|---|
US (1) | US9087393B2 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040150751A1 (en) * | 2003-01-31 | 2004-08-05 | Qwest Communications International Inc. | Systems and methods for forming picture-in-picture signals |
US20090028047A1 (en) * | 2007-07-25 | 2009-01-29 | Schmidt Brian K | Data stream control for network devices |
US20110249074A1 (en) * | 2010-04-07 | 2011-10-13 | Cranfill Elizabeth C | In Conference Display Adjustments |
US20140075117A1 (en) * | 2012-09-11 | 2014-03-13 | Brijesh Tripathi | Display pipe alternate cache hint |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8259121B2 (en) | 2002-10-22 | 2012-09-04 | Broadcom Corporation | System and method for processing data using a network |
US7098868B2 (en) | 2003-04-08 | 2006-08-29 | Microsoft Corporation | Display source divider |
US8275031B2 (en) | 2005-12-15 | 2012-09-25 | Broadcom Corporation | System and method for analyzing multiple display data rates in a video system |
US9247261B2 (en) | 2011-03-04 | 2016-01-26 | Vixs Systems, Inc. | Video decoder with pipeline processing and methods for use therewith |
- 2013-03-07: US13/788,209 filed; patent US9087393B2 (en), status Active
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160210861A1 (en) * | 2015-01-16 | 2016-07-21 | Texas Instruments Incorporated | Integrated fault-tolerant augmented area viewing system |
US10395541B2 (en) * | 2015-01-16 | 2019-08-27 | Texas Instruments Incorporated | Integrated fault-tolerant augmented area viewing system |
US11563513B1 (en) * | 2019-09-10 | 2023-01-24 | Valve Corporation | Low-latency wireless display system |
Also Published As
Publication number | Publication date |
---|---|
US9087393B2 (en) | 2015-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6652937B2 (en) | Multiple display pipelines driving split displays | |
US11211036B2 (en) | Timestamp based display update mechanism | |
US9495926B2 (en) | Variable frame refresh rate | |
US9990690B2 (en) | Efficient display processing with pre-fetching | |
US9620081B2 (en) | Hardware auxiliary channel for synchronous backlight update | |
US10055809B2 (en) | Systems and methods for time shifting tasks | |
US8798386B2 (en) | Method and system for processing image data on a per tile basis in an image sensor pipeline | |
US8717391B2 (en) | User interface pipe scalers with active regions | |
US9058676B2 (en) | Mechanism to detect idle screen on | |
US9646563B2 (en) | Managing back pressure during compressed frame writeback for idle screens | |
US20160307540A1 (en) | Linear scaling in a display pipeline | |
US9652816B1 (en) | Reduced frame refresh rate | |
US20170018247A1 (en) | Idle frame compression without writeback | |
US9087393B2 (en) | Network display support in an integrated circuit | |
US9472168B2 (en) | Display pipe statistics calculation for video encoder | |
US10546558B2 (en) | Request aggregation with opportunism | |
US9691349B2 (en) | Source pixel component passthrough | |
US9558536B2 (en) | Blur downscale | |
US9135036B2 (en) | Method and system for reducing communication during video processing utilizing merge buffering | |
US9412147B2 (en) | Display pipe line buffer sharing | |
US9953591B1 (en) | Managing two dimensional structured noise when driving a display with multiple display pipes | |
US9472169B2 (en) | Coordinate based QoS escalation | |
US9747658B2 (en) | Arbitration method for multi-request display pipeline |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: APPLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TRIPATHI, BRIJESH;HOLLAND, PETER F.;MILLET, TIMOTHY J.;SIGNING DATES FROM 20130227 TO 20130306;REEL/FRAME:029940/0250 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |