US20150264298A1 - Video frame rate compensation through adjustment of vertical blanking - Google Patents

Video frame rate compensation through adjustment of vertical blanking

Info

Publication number
US20150264298A1
Authority
US
United States
Prior art keywords
frame
frames
fluctuations
buffer
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/280,502
Other versions
US9332216B2 (en)
Inventor
Roelof Roderick Colenbrander
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment LLC
Original Assignee
Sony Computer Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment America LLC
Priority to US14/280,502 (US9332216B2)
Assigned to SONY COMPUTER ENTERTAINMENT AMERICA LLC reassignment SONY COMPUTER ENTERTAINMENT AMERICA LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: COLENBRANDER, Roelof Roderick
Priority to CN201810201744.8A (CN108449566B)
Priority to CN201510094508.7A (CN104917990B)
Publication of US20150264298A1
Application granted
Priority to US15/145,718 (US9984647B2)
Publication of US9332216B2
Assigned to SONY INTERACTIVE ENTERTAINMENT AMERICA LLC reassignment SONY INTERACTIVE ENTERTAINMENT AMERICA LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SONY COMPUTER ENTERTAINMENT AMERICA LLC
Priority to US15/991,860 (US10339891B2)
Priority to US16/424,189 (US10916215B2)
Assigned to Sony Interactive Entertainment LLC reassignment Sony Interactive Entertainment LLC MERGER (SEE DOCUMENT FOR DETAILS). Assignors: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC
Priority to US17/161,338 (US11404022B2)
Priority to US17/850,942 (US11741916B2)
Legal status: Active
Expiration: Adjusted

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/001Arbitration of resources in a display system, e.g. control of access to frame buffer by video controller and/or main processor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • H04N7/013Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter the incoming video signal comprising different parts having originally different frame rate, e.g. video and graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2310/00Command of the display device
    • G09G2310/06Details of flat display driving waveforms
    • G09G2310/061Details of flat display driving waveforms for resetting or blanking
    • G09G2310/062Waveforms for resetting a plurality of scan lines at a time
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2310/00Command of the display device
    • G09G2310/08Details of timing specific for flat panels, other than clock recovery
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/02Handling of images in compressed format, e.g. JPEG, MPEG
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G2340/0435Change or adaptation of the frame rate of the video stream
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00Solving problems of bandwidth in display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/08Power processing, i.e. workload management for processors involved in display operations, such as CPUs or GPUs
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/12Frame memory handling
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/12Frame memory handling
    • G09G2360/127Updating a frame memory using a transfer of data from a source area to a destination area
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/02Networking aspects
    • G09G2370/022Centralised management of display operation, e.g. in a server instead of locally
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/12Use of DVI or HDMI protocol in interfaces along the display data pipeline
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/395Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen

Definitions

  • The present disclosure relates to graphics processing and video transfer. Certain aspects of the present disclosure relate to systems and methods for frame rate compensation when compressing and streaming rendered graphics over a network.
  • Rendering graphics for transfer to a display device in real-time is a complicated process that incorporates many well-developed techniques to ensure that newly generated frames are transferred from the source to the display with proper timing.
  • the process begins with a processing unit, commonly a graphics processing unit (GPU) having a highly parallel architecture tailored to the rendering task, rendering each new frame of source content to a portion of memory known as the frame buffer.
  • The newly generated frames of source content, referred to herein as “source frames,” are each temporarily stored in the frame buffer in sequence as images having an array of values that define the visual contents of each pixel in that particular frame. While this is occurring, these images are scanned out of the frame buffer in a process that drives the images sequentially to a display device.
  • the display device traditionally updates the image displayed on the screen periodically at a fixed frequency, known as the refresh rate, using the images that are scanned out from the frame buffer.
  • the images in the frame buffer are typically scanned out line by line and transferred serially (in sequence) over some video interface to the display device.
  • certain “invisible” signals are generated to govern the transfer process, so that what is actually transferred to the display device for each frame that is output from the frame buffer, referred to herein as an “output frame,” includes not only the visible pixel values of the frame's image, but other external signals which may be used by the display device to resolve how the received frame is displayed on the screen. This typically includes, among other things, a vertical synchronization signal that is pulsed between each scanned out frame image.
  • The period of time between each scanned out frame image, i.e., between the last line or pixel of one frame image and the first line or pixel of the subsequent frame's image, is known as the “vertical blanking interval.”
  • This vertical blanking interval is generated as part of the scanout process, and the vertical synchronization pulse within it is used for synchronization between the graphics source and the display.
  • the frequency at which the vertical synchronization pulse occurs during scanout is traditionally fixed in relation to the refresh rate of the display device, so that each image scanned out from the frame buffer coincides with each refresh cycle of the display. If the frame rate of the original graphics content, i.e., the rate at which new source frames are drawn to the frame buffer by the GPU, is perfectly in sync with the refresh rate of the display, each new source frame drawn to the frame buffer by the GPU would correspond 1:1 to each image presented on the display device.
  • each image updated on the screen of the display would perfectly correspond to the source frames generated by the GPU.
  • the frame rate of the source content is often variable over time and may fluctuate upward and downward, e.g., based on the complexity of the current scene or other factors associated with the generation of the frames. For example, if the current state of a video game causes too many virtual objects or too much detail within the current field of view, the frame rate may momentarily dip due to an increased computational load required to render the frame. As a result, the frame rate of the source content rendered to the frame buffer may go out of sync with the scanout of the frames from this buffer and the corresponding refresh cycles of the display device. In other words, each “source frame” that is drawn to the frame buffer may not exactly correspond to each “output frame” that is driven to the display device.
  • tearing occurs when a frame is scanned out of the frame buffer while that portion of memory is being updated with a new subsequent source frame, e.g., the GPU overwrites the image in the buffer with a subsequent source frame before it is finished being scanned out.
  • As a result, the output frame that is transferred to the display device actually contains the images from two or more consecutive source frames.
  • When the display device updates its screen contents during that refresh cycle, the screen simultaneously contains images from different consecutive frames of the source content.
  • To help avoid tearing, the frame buffer commonly includes multiple buffers, i.e., a front frame buffer from which the frame images are directly scanned out, and one or more back frame buffers into which the GPU may draw new frames while a prior frame is being scanned out of the front frame buffer.
  • When a new source frame is complete, a back frame buffer is swapped with the front frame buffer, e.g., by copying the contents to the front buffer or by changing a pointer value that specifies the memory address of the front buffer, so that the contents of the front buffer may be scanned out to the display device.
  • This is often combined with a restriction that prevents the GPU from swapping the buffers until just after a refresh cycle of the display device; a minimal sketch of this swap scheme appears below.
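  • The pointer-exchange swap and the vertical-blanking restriction described above can be sketched in a few lines of C. The fragment below is a minimal illustration only, not code from the patent; the names (frame_buffer, try_swap, in_vblank, vsync_locked) are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal double-buffered frame buffer: scanout reads from "front"
 * while the GPU renders into "back". A swap exchanges pointers rather
 * than copying pixel data.                                            */
typedef struct {
    uint32_t *front;    /* buffer currently being scanned out          */
    uint32_t *back;     /* buffer currently being rendered into        */
    bool back_ready;    /* set once a new source frame is complete     */
} frame_buffer;

/* Attempt a swap. When vsync_locked is true, the swap is honored only
 * during the vertical blanking interval, which prevents tearing but
 * can introduce stuttering when a frame misses the deadline.          */
static void try_swap(frame_buffer *fb, bool in_vblank, bool vsync_locked)
{
    if (!fb->back_ready)
        return;                        /* no completed source frame     */
    if (vsync_locked && !in_vblank)
        return;                        /* wait for the blanking interval */

    uint32_t *tmp = fb->front;         /* pointer exchange, no copy     */
    fb->front = fb->back;
    fb->back = tmp;
    fb->back_ready = false;
}

int main(void)
{
    static uint32_t a[1920 * 1080], b[1920 * 1080];
    frame_buffer fb = { .front = a, .back = b, .back_ready = true };

    try_swap(&fb, /*in_vblank=*/false, /*vsync_locked=*/true);
    printf("swapped outside vblank? %s\n", fb.front == b ? "yes" : "no");

    try_swap(&fb, /*in_vblank=*/true, /*vsync_locked=*/true);
    printf("swapped during vblank?  %s\n", fb.front == b ? "yes" : "no");
    return 0;
}
```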
  • Stuttering may result when the source frame rate drops and the scanout unit is forced to transfer an identical frame to the display. Stuttering may be especially pronounced when the GPU is restricted to swapping the buffers only between refresh cycles, since the effective frame rate is then limited to the display refresh rate divided by an integer (e.g., 60, 30, 20, or 15 frames per second for a 60 Hz display). Since the GPU must have a completed new source frame in order to perform the swap, if the GPU has not finished rendering the subsequent frame at the time of the synchronization pulse, it must wait another full cycle before it can swap the buffers, even if the new source frame is otherwise finished shortly thereafter. When stuttering occurs, the sudden drop in the perceived frame rate at the display can be distracting to the viewer.
  • cloud gaming and other cloud-based video streaming applications may require rendered frames to be compressed and sent over a network for display in real-time, rather than transferred from the frame buffer directly to a display device.
  • whole source frames are compressed by an encoder and sent to the remote device with minimized latency.
  • the encoder must operate on a restricted budget of resources to ensure the frames reach the remote device on time. If the source frame rate fluctuates and stuttering occurs, valuable compression resources would be wasted towards compressing an identical frame. This may result in poorer image quality in the encoded frames than might otherwise be achieved if the compression resources were more efficiently utilized.
  • limited network bandwidth is wasted on unnecessary frames.
  • FIG. 1 is a flow diagram of an example of processing graphics and scanning out the graphics to a display device.
  • FIG. 2 is a schematic diagram of an example output frame.
  • FIG. 3 is a flow diagram of an example of processing graphics and scanning out the graphics to an encoder for streaming the graphics in real-time.
  • FIG. 4 is a flow diagram of an example method of frame rate compensation according to aspects of the present disclosure.
  • FIG. 5 is a block diagram of an example system according to aspects of the present disclosure.
  • FIG. 6A is a schematic diagram of an example terminal system architecture functioning as a video source.
  • FIG. 6B is an example host system and capture card architecture which may capture and compress video frames from the video source.
  • FIG. 7 is a schematic diagram of an example video capture card design having a specialized processing unit.
  • certain aspects of the present disclosure relate to video transfer, including rendering and scanning out video frames for transfer over a video interface (sometimes referred to herein as a display interface), as well as video streaming to remote devices, including compression and transmission of video frames for cloud gaming implementations. Further illustrative details and examples of these aspects may be found in U.S. Non-Provisional patent application Ser. No. 14/135,374, to Roelof Roderick Colenbrander, entitled “VIDEO LATENCY REDUCTION”, (Attorney Docket No. SCEA13037US00), filed Dec. 19, 2013, the entire contents of which are herein incorporated by reference. It is noted that certain implementations of the present disclosure may be configured in accordance with various systems and methods described in that incorporation by reference document.
  • Various aspects of the present disclosure relate to systems and methods configured to adjust the timing of compression to better match the frame rate at which source content is rendered by a processing unit. In certain implementations, this may be accomplished by adjusting the timing of frame scanout in response to detected fluctuations in the source frame rate. For example, a vertical blanking interval generated during scanout of frames from a frame buffer may be adjusted in response to detected changes in the frame rate at which the source content is generated. In certain implementations, other techniques may be used to adjust or avoid compressing or streaming duplicate frames, and rendered graphics may be streamed over a network for display in real-time on a remote device.
  • Turning now to FIG. 1, an illustrative example of a technique for processing and transferring graphics to a display device in real-time is depicted.
  • the example depicted in FIG. 1 may have certain similarities to conventional techniques of transferring video frames to a local display device that utilizes a regular or fixed refresh rate.
  • graphics may be rendered, as indicated at 104 , by a processing unit in order to generate a plurality of source frames 102 in sequence.
  • the source frames 102 may be rendered based on the state of an application, such as a video game, that determines the content of the source frames 102 .
  • the source frame rate 106 which defines the rate at which new source frames 102 are rendered, may be variable and contain one or more fluctuations over time based on, e.g., the complexity of the graphics or amount of detail in the source frames being rendered at that particular moment of time.
  • the processing unit which renders the source frames may be a GPU that contains a specialized architecture tailored to the task of processing graphics and rendering new source frames 102 .
  • Rendering the source frames may include a number of different steps depending on the configuration of the rendering pipeline, which may culminate in rendering the finished source frames 102 into a frame buffer 108 , a portion of memory which temporarily stores each new source frame in sequence.
  • Each source frame 102 may be stored in the frame buffer 108 as an image defined by an array of pixel data values which define the visual values associated with that particular frame.
  • the frame buffer contents may also be scanned out, as indicated at 114 , as a sequence of output frames 118 and transferred to a display device 116 in sequence over a video interface connection, such as HDMI, DVI, VGA, or another suitable interface standard.
  • the scanout unit may generate a vertical blanking interval at the end of each output frame 118 , as well as various other external signals to govern the process of transferring the graphics frames from frame buffer 108 to display device 116 .
  • each output frame 118 may be understood to contain not only the visible pixel values of the source frames 102 , but also invisible external signals that are used to govern the timing and synchronize the transfer of the frames to display device 116 .
  • the display device 116 may periodically update the image that is presented on its screen at a fixed refresh rate 122 , utilizing the vertical blanking signal and/or various external signals associated with each output frame 118 to resolve the pixel data that is received from the frame buffer 108 and present only those pixel values associated with the image contents from the frame buffer.
  • the vertical blanking interval that is defined at the boundary between each output frame may be timed at a fixed frequency that coincides with the refresh rate 122 of the display.
  • the frame buffer 108 may include multiple buffers, including a front buffer 110 and at least one back buffer 112 .
  • the rendering 104 of the source frames into the frame buffer may be performed in such a manner that new source frames 102 are rendered into the back buffer 112 while the front buffer 110 contains a source frame 102 that has not yet been scanned out to the display device 116 .
  • the front buffer 110 and the back buffer 112 may be swapped only after a new source frame is finished being rendered into the back buffer 112 .
  • the timing of the swap of the buffers 110 , 112 may depend on the current configuration of the system.
  • the swap is timed to coincide with the timing of a pulse within the vertical blanking interval (VBI) 120 , thereby restricting a swap from occurring during the middle of the scanout of any particular image from the front buffer 110 .
  • the system may instead be configured to swap the front and back buffer as soon as the new source frames are ready, e.g., as soon as they are finished rendering into the back buffer 112 . In these instances, tearing artifacts may still be reduced, but may not be completely eliminated since it is still possible for the buffers to be swapped in the middle of scan out of a particular source image from the frame buffer 108 .
  • the vertical blanking interval 120 may be restricted to occur at regular, isochronous intervals to ensure proper transfer of the frames in sync with the fixed refresh rate 122 of the display hardware.
  • Turning now to FIG. 2, an illustrative example of an output frame 216 is depicted.
  • The output frame 216 illustrated in FIG. 2 is a visual depiction of the content of each output frame scanned out of a frame buffer, e.g., as shown in FIG. 1, for transfer to a display device or other destination.
  • the example output frame 216 may be one frame in a sequence of similarly formatted frames which collectively make up a video stream.
  • The video frame sequence may be generated by some video content source, such as a video game application, video file, live stream, and the like.
  • the output frame 216 may be made up of an array of pixels, which can be represented by a corresponding array of pixel data values.
  • the output frame 216 may also be transmitted with additional signals external to the pixel data values, as described below.
  • Each pixel data value in the array may include a plurality of color space components depending on the particular color model used.
  • For example, each pixel data value in the array may include two chroma (color) values and a luma (intensity) value for the corresponding pixel if a YCrCb (digital video) or YPbPr (analog video) color space is used.
  • Alternatively, an RGB color space or some other set of color space components may be used for the pixel data of each pixel.
  • the pixel data values for each color space component of each pixel may be digitally represented by a plurality of bits. For example, a 24-bit color depth may utilize 8-bits per color space component per pixel.
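  • As a small illustration of the 24-bit color depth just described (8 bits per color space component), the following C fragment, which is not from the patent, packs three 8-bit components into a single pixel data value:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack 8-bit R, G, B components (24-bit color depth) into one pixel
 * data value of the kind stored in a frame buffer.                   */
static uint32_t pack_rgb888(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void)
{
    uint32_t px = pack_rgb888(255, 128, 0);   /* an orange pixel       */
    printf("pixel value: 0x%06X (8 bits per component)\n", (unsigned)px);
    return 0;
}
```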
  • The term “pixel” is sometimes used herein as shorthand when referring to that portion of an output frame that corresponds to a single tick of a pixel clock.
  • an output frame may also include external signals in addition to the pixel data values.
  • the external signals may include a signal having information indicating whether the pixel is visible, e.g., a data enable signal indicating whether the pixel is meant to be displayed and therefore has a visible pixel contained in the pixel data values for that pixel.
  • the total number of pixels in the array includes both visible pixels 201 (illustrated in the figure as a grid), and invisible pixels 203 (illustrated in the figure as the blank and lined regions).
  • The visible pixels 201 make up the active image region of the output frame, which may be indicated by a high data enable value.
  • The invisible pixels 203 make up the blanking region of the output frame in this example (e.g., including both horizontal and vertical blanking regions), which may be indicated by a low data enable value.
  • the visible pixels 201 in the active image region may collectively make up the visible image of the frame that is meant to be displayed.
  • The active region having visible pixels 201 is made up of the source frame content and its corresponding pixel values, which are retrieved from the frame buffer during the scan out of the frame.
  • the visible pixels of the video frame image coincide with the active region of the frame's format.
  • The present disclosure is not limited to this situation, and certain implementations of the present disclosure may actually include invisible pixels that are not only in the blanking region, but are also within the active image region of the output frame that is retrieved from the frame buffer.
  • For example, as described in U.S. application Ser. No. 14/135,374, the active image region of an output frame may include invisible pixels if the GPU is configured to render source frames to the frame buffer in a larger format that contains more pixels than the actual source content, such as, e.g., in accordance with the techniques described with reference to FIGS. 7A-8 of that application.
  • Most devices, e.g., consoles, PCs, phones, and other video sources, render video frames to a frame buffer organized in RGB pixels, typically with at least 8 bits (1 byte) per color component.
  • The video signal generated by a video transmitter, which may be part of the GPU that renders the video frames or may be external to the GPU, may transport pixels in RGB, but it may also do so in other color space models, such as YCrCb (digital video) or YPbPr (analog video), in which case something (e.g., the transmitter) has to convert from RGB to the other format.
  • a GPU may scan the frame out, which is the process of sending the frame pixel by pixel over some serial connection (e.g., HDMI, DVI, etc.).
  • the scanout process may involve the generation of the external signals of each output frame 216 , and the scanout process may partly depend on the type of video connection, as well as whether the video transmitter is inside the GPU or outside it.
  • the GPU may generate a plurality of signals when scanning the frame out, including signals external to the pixel data values. Generally speaking, these signals may be understood to be separate signals that occur simultaneously with each other during the scanout and transfer of the frame.
  • The signals generated when scanning the frame out may include a pixel clock signal, a data enable signal, a horizontal synchronization (hsync) signal, a vertical synchronization (vsync) signal, and the pixel data bus signals. The scanout process may proceed as follows:
  • First, the GPU may retrieve the pixels from the portion of memory that holds the completed source frame (i.e., the frame buffer).
  • The GPU will place a new pixel data value on the data bus signals at each “tick” of the pixel clock signal. It will also output a high level on the data enable signal corresponding to that pixel.
  • At the end of each line of visible pixels there is a horizontal blanking period (of duration HTOTAL − HDISPLAY pixels, or pixel clock pulses).
  • During this period, a pulse may be generated in the hsync signal to signal a transition to the next line.
  • The data enable signal is made low, which means that any data currently on the data bus signals that ordinarily carry color space components should not be interpreted as pixels (these are the invisible pixels at the end of the line).
  • This process may continue line by line until the end of the frame image.
  • At the end of the frame, a pulse may be generated in the vsync signal, within the vertical blanking region of the output frame.
  • This interval of time during the scanout process, at the end of the output frame and after the full source frame image has been retrieved from the frame buffer, is known as the vertical blanking interval 209.
  • During this interval, the data enable line is also low.
  • the pixel is invisible, and the pixel data values of the data bus signal do not contain the desired color space values which correspond to the display region of the image. Since there is always an active pixel clock, invisible pixels are essentially generated on the data bus signal. It is noted that horizontal and vertical synchronization signals are separate from the pixel data of the data bus signal.
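  • The interplay of the pixel clock, data enable, hsync, and vsync signals described above can be pictured with a toy scanout loop. The following C program is purely illustrative and uses made-up miniature frame dimensions; it prints one character per pixel clock tick: '#' where data enable is high (visible pixels), 'H' during the horizontal synchronization pulse, 'V' during the vertical synchronization pulse, and '.' for the remaining invisible pixels of the blanking regions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy frame timing (made-up numbers): HDISPLAY x VDISPLAY visible
 * pixels inside an HTOTAL x VTOTAL output frame; the difference is
 * the blanking region.                                               */
enum { HDISPLAY = 8, HTOTAL = 10, VDISPLAY = 4, VTOTAL = 6 };

int main(void)
{
    /* One iteration per tick of the pixel clock, visible or not.      */
    for (int y = 0; y < VTOTAL; y++) {
        for (int x = 0; x < HTOTAL; x++) {
            bool data_enable = (x < HDISPLAY) && (y < VDISPLAY);
            bool hsync = (x == HTOTAL - 1);      /* pulse in the h-blank */
            bool vsync = (y == VTOTAL - 1);      /* pulse in the v-blank */

            /* Visible pixels come from the frame buffer; during
             * blanking the data bus still ticks but carries
             * "invisible" pixels.                                     */
            putchar(data_enable ? '#' : (vsync ? 'V' : (hsync ? 'H' : '.')));
        }
        putchar('\n');    /* line break for display purposes only      */
    }
    return 0;
}
```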
  • the process of transmitting video signals, e.g., made up of output frames 216 , over a serial interface may depend on the video technology.
  • In the case of VGA, for example, the described signals are actually directly consumed by the monitor, including the pixel data signal and the external signals associated with the output frame.
  • the external signals may include timing signals directly used for VGA.
  • the pixel data signals may be analog signals in which each color component has its own channel, e.g., a red signal channel, a green signal channel, and a blue signal channel.
  • A Digital to Analog Converter (DAC) may generate the analog pixel signals from the digital data bus signals (e.g., from the described 24-bit pixel data having 8 bits per channel).
  • For digital video technologies such as HDMI, a transmitter may accept the above described signals and convert them to a signal appropriate for that technology.
  • For example, an HDMI transmitter has 3 TMDS data channels (TX 0 to TX 2) and a TMDS clock; the HDMI transmitter at the video source embeds all of the signals (hsync signal, vsync signal, pixel data bus signal) in the TMDS data channels, and the TMDS clock carries the pixel clock signal in some form.
  • The HDMI receiver on the other end of the HDMI connection (e.g., an HDMI cable) inside the video sink takes these channels as inputs and recovers the hsync, vsync, pixel data, and other signals from them. The same is true for other video standards like DVI or DisplayPort.
  • the scanout logic may operate on the described signals, but the scanout logic may also directly output, e.g., HDMI, bypassing the intermediate step for these other signals.
  • the pixel data values and other signals associated with each pixel for the output frame are typically output line by line, with each line containing a plurality of pixels and each frame containing a plurality of lines.
  • these lines are horizontally oriented relative to the image that is displayed, and the pixels in a horizontal line may be transferred in sequence, e.g., serial transfer, from left to right in the line through a video communication interface from the video source to the display device or other video sink device.
  • the horizontal lines may be output in sequence from top to bottom until the end of the frame is reached. Accordingly, all the pixels in the output frame 216 , including both visible 201 and invisible pixels 203 , may have a defined sequence for transfer, and the lines during the vertical blanking interval 209 may be located at the end of this sequence for each output frame 216 .
  • each line of the frame is a horizontal line having a total number of pixels HTOTAL, which may define the total horizontal resolution of the output frame.
  • the example output frame 216 has a total number of lines VTOTAL, which may define the total vertical resolution of the frame.
  • the total horizontal and vertical resolution includes both visible and invisible pixels.
  • the active display region 201 of the frame may include a plurality of active lines VDISPLAY defining the vertical display resolution of the frame, and each active line may include a plurality of active pixels HDISPLAY, which defines the horizontal display resolution of the frame.
  • the active display region 201 may correspond to that source frame that is rendered by a GPU into the frame buffer as described above.
  • the total resolution (e.g., HTOTAL ⁇ VTOTAL) of the output frame 216 may be greater than the display resolution (e.g., HDISPLAY ⁇ VDISPLAY) of the output frame, due to the presence of the blanking region and invisible pixels 203 in the frame, which may be generated during the scan out of the source frame, e.g., as described above.
  • the active display region corresponds to those pixels retrieved from the frame buffer, while the blanking region refers to those pixels generated due to the addition of external signals and extra ticks of the pixel clock generated during scanout.
  • the blanking region may include a plurality of invisible pixels at the end of each line, corresponding to the horizontal blanking interval, and a plurality of invisible lines at the end of each frame, corresponding to the vertical blanking interval 209 .
  • the synchronization pulses in the blanking regions may be provided to synchronize the video stream transfer between the video source and a display, with the horizontal synchronization pulses 205 within the hsync signal generally indicating the transitions between each line in the frame, and the vertical synchronization pulses 207 generally indicating the transitions between each frame in the sequence of output frames that makes up the video stream.
  • Although the hsync and vsync signals are external signals that are not part of the pixel data, e.g., RGB values and the like, since the GPU always outputs the pixel data and synchronization signals on the pixel clock, there happen to be invisible pixels in the pixel data bus signal during the period when pulses on the hsync or vsync lines are active. Likewise, the hsync and vsync signals may be inactive during the period corresponding to the visible pixel values on the pixel data bus signal. In the case of HDMI, hsync and vsync are actually transported in the pixel data; after transport over an HDMI cable, the HDMI receiver then separates the signals again.
  • transferring the pixels of the frame 216 in sequence e.g., pixel by pixel, will result in the pixels corresponding to the horizontal blanking interval 211 being transferred at the end of each line, and the pixels corresponding to the vertical blanking interval 209 being transferred at the end of the frame 216 .
  • the horizontal synchronization signal may include horizontal synchronization pulse 205 during the horizontal blanking interval 211 , with a corresponding horizontal front porch and horizontal back porch (illustrated in the diagram as blank regions before and after the horizontal synchronization pulse 205 , respectively), and the vertical synchronization signal may include a vertical synchronization pulse 207 with a corresponding vertical front porch and vertical back porch (illustrated in the diagram as blank regions before and after the vertical synchronization pulse 207 , respectively), and these pixels may collectively make up the invisible pixels 203 in the example output frame 216 .
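  • To make the frame geometry concrete, the sketch below describes an output frame as an active region plus front porch, sync pulse, and back porch in each direction, using the standard CEA-861 1080p60 timing purely as example numbers; the patent itself does not prescribe any particular timing, and the struct and field names are made up for illustration.

```c
#include <stdio.h>

/* A video timing description in the style of a display modeline: the
 * active region plus front porch, sync pulse, and back porch in each
 * direction. The values used below are the standard CEA-861 1080p60
 * timing, given only to illustrate the frame layout of FIG. 2.        */
typedef struct {
    int h_active, h_front_porch, h_sync, h_back_porch;
    int v_active, v_front_porch, v_sync, v_back_porch;
    int refresh_hz;
} video_timing;

int main(void)
{
    video_timing m = { 1920, 88, 44, 148, 1080, 4, 5, 36, 60 };

    int htotal = m.h_active + m.h_front_porch + m.h_sync + m.h_back_porch;
    int vtotal = m.v_active + m.v_front_porch + m.v_sync + m.v_back_porch;

    printf("HTOTAL = %d pixels per line\n", htotal);           /* 2200 */
    printf("VTOTAL = %d lines per frame\n", vtotal);           /* 1125 */
    printf("invisible pixels per frame = %ld\n",
           (long)htotal * vtotal - (long)m.h_active * m.v_active);
    return 0;
}
```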
  • a video signal may be made up of a plurality of output frames similar to the example illustrated in FIG. 2 , and output frames may be transferred from a video source to a display device, video capture device, or other video sink device through a video interface, such as HDMI, DVI, VGA, and the like.
  • the timing of the transfer of pixels through an interface between a video source and a display device should be synchronized in order to ensure that the rate of transfer of the pixels in the video stream is synchronized with the display and keeps up with the display refresh rate.
  • A pixel clock, which may be an external signal generated by electronics or other components embodied in the video transfer hardware and which may be generated in association with the scan out of the frames as described above, governs the timing for the transfer of each pixel between video source and video sink.
  • the pixel clock will control the timing of the transfer of pixels so that the total number of pixels within each frame is transferred from the video source at a rate that is in sync with the refresh rate of the display device.
  • The pixel clock may be mathematically expressed as a product of the total number of pixels within each line, the total number of lines within each frame, and the vertical refresh rate, as follows: pixel clock = HTOTAL × VTOTAL × refresh rate.
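  • A worked example of this relationship, using standard CEA-861 timings only as an illustration (the HTOTAL and VTOTAL values below are not specified by the patent):

```c
#include <stdio.h>

/* Pixel clock = HTOTAL x VTOTAL x refresh rate. The totals below are
 * the standard CEA-861 720p60 and 1080p60 timings, used only to show
 * the formula in action.                                              */
static double pixel_clock_hz(int htotal, int vtotal, double refresh_hz)
{
    return (double)htotal * (double)vtotal * refresh_hz;
}

int main(void)
{
    printf("720p60 : %.2f MHz\n", pixel_clock_hz(1650,  750, 60.0) / 1e6); /*  74.25 */
    printf("1080p60: %.2f MHz\n", pixel_clock_hz(2200, 1125, 60.0) / 1e6); /* 148.50 */
    return 0;
}
```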
  • Standard video interfaces typically support different display resolutions (e.g., HDISPLAY ⁇ VDISPLAY), such as 720p (1280 ⁇ 720), 1080p (1920 ⁇ 1080), and the like, which each have a different total number of pixels for each frame.
  • a pixel clock generator which may be embodied in the video transfer hardware, may be configured to generate a pixel clock for a given video resolution and/or frame rate based on a formula similar to the mathematical expression shown above, e.g., based on the refresh rate and the resolution of each frame. It is noted that the upper bounds of the pixel clock may be limited due to practical considerations and technical requirements of the electronics and components involved, as well as a practical limit to the frequency at which the pixel clock may be accurately maintained.
  • Various implementations of the present disclosure may incorporate techniques for decreasing the time to transfer an output video frame by artificially increasing the total number of pixels, i.e., ticks of a pixel clock, in a frame beyond what is needed to encompass the visible pixel data and/or synchronization signals within each output frame.
  • a pixel clock rate may be increased to output the greater number of total pixels, causing the desired visible pixels embodying the visible video frame image within the active region of the output frame to be transferred in less time.
  • this may be accomplished by increasing the number of lines at the end of each frame's sequence or otherwise putting the output frames in some frame format that has a greater number of total pixels than the source frame image.
  • For example, if VTOTAL is twice the size of VDISPLAY, that is, if the total number of lines within a frame is double the number of visible/active lines within the frame, then at a fixed refresh rate the pixel clock is doubled and the visible lines of each frame are transferred in roughly half the time.
  • The ratio VDISPLAY/VTOTAL may be made smaller, for example, by adding lines to the frame in some fashion. Further examples of techniques for reducing video transfer latency and forming output frames having artificially increased numbers of pixels are described in U.S. application Ser. No. 14/135,374, entitled “VIDEO LATENCY REDUCTION” and fully incorporated by reference herein. It is noted that implementations of the present disclosure may utilize any of the techniques for forming output frames described in that document.
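  • The latency benefit can be seen with a short calculation. The sketch below, an illustration under assumed numbers rather than the patent's implementation, computes how long the visible lines of a frame take to transfer at a fixed refresh rate; doubling VTOTAL by appending invisible lines doubles the pixel clock and roughly halves the time spent transferring the active region.

```c
#include <stdio.h>

/* Time needed to transfer the visible (active) lines of one output
 * frame: VDISPLAY lines of HTOTAL pixels each, at the pixel clock
 * implied by HTOTAL x VTOTAL x refresh. HTOTAL cancels out:
 *   t = VDISPLAY * HTOTAL / (HTOTAL * VTOTAL * refresh)
 *     = VDISPLAY / (VTOTAL * refresh)                                 */
static double active_transfer_ms(int vdisplay, int vtotal, double refresh_hz)
{
    return 1000.0 * (double)vdisplay / ((double)vtotal * refresh_hz);
}

int main(void)
{
    /* A hypothetical 1080-line source image scanned out at 60 Hz.     */
    printf("VTOTAL = 1125: active region in %.2f ms\n",
           active_transfer_ms(1080, 1125, 60.0));   /* ~16 ms          */
    printf("VTOTAL = 2250: active region in %.2f ms\n",
           active_transfer_ms(1080, 2250, 60.0));   /* ~8 ms           */
    return 0;
}
```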
  • Turning now to FIG. 3, an illustrative example of a technique for processing and transferring graphics in real-time is depicted.
  • the example depicted in FIG. 3 involves scanning out the rendered frames to a video capture unit instead of directly to a display device that refreshes at fixed intervals.
  • captured frames may be further compressed using a video encoder so that, e.g., rendered graphics may be transmitted over network to a remote device for display in real-time.
  • graphics may be rendered, as indicated at 304 , by a processing unit in order to generate a plurality of source frames 302 in sequence.
  • the source frames 302 may be rendered based on the state of an application, such as a video game, that determines the content of the source frames 302 .
  • the frame rate 306 of the source content which defines the rate at which new source frames 302 are rendered, may be variable and contain one or more fluctuations over time based on a variety of factors, such as the complexity of the scene currently being rendered.
  • the processing unit which renders the source frames may be a GPU that contains a specialized architecture tailored to the task of processing graphics and rendering new source frames 302 .
  • Rendering the source frames may include a number of different steps depending on the configuration of the rendering pipeline, which may culminate in rendering the finished source frames 302 into a frame buffer 308 .
  • Each source frame 302 may be stored in the frame buffer 308 in sequence as an image defined by an array of pixel data values, e.g., a bitmap image (bmp), which define the visual values associated with that particular source frame.
  • the frame buffer contents may also be scanned out, as indicated at 314 , as a sequence of output frames 318 and transferred to a video capture unit 324 in sequence over a video interface connection, such as HDMI, DVI, VGA, or another suitable display interface standard.
  • A video capture card or other device may be used for the frame capture 324, and the video capture unit may be configured to capture only the set of visible pixels within each output frame that correspond to the source frames rendered by the processing unit.
  • the scanout unit may generate several external signals, including a vertical synchronization signal. Generating the vertical synchronization signal may also involve the generation of one or more active vsync pulses after each source frame 302 that is scanned out of the frame buffer 308 , and, as a result, the generation of a vertical blanking interval 320 between each scanned out frame. This may result in the generation of invisible lines at the end of each output frame 318 , e.g., as described above with reference to the illustrated example of FIG. 2 . At least a portion of the vertical blanking interval may correspond to these invisible lines between each scanned-out source frame 302 .
  • the frame capture unit may capture the source frames contained within each received output frame 318 in sequence, and may utilize the vertical blanking interval and/or various external signals associated with each output frame 318 to resolve the pixel data that is received from the frame buffer 308 and capture the rendered source frames corresponding to those visible pixels.
  • Each captured source frame 302 may then be compressed using a suitable video encoder, e.g., codec, as indicated at 326.
  • the compressed frames 330 may then be optionally sent over a network for display on a remotely located display device.
  • the frame buffer 308 may include multiple buffers, including a front buffer 310 and at least one back buffer 312 .
  • the rendering 304 of the source frames into the frame buffer may be performed in such a manner that new source frames 302 are rendered into the back buffer 312 while the front buffer 310 contains a source frame 302 that has not yet been scanned out.
  • the front buffer 310 and the back buffer 312 may be swapped only after a new source frame is finished being rendered into the back buffer 312 .
  • the timing of the swap of the buffers 310 , 312 may depend on the configuration of the system.
  • the swap is timed to coincide with the timing of the vertical blanking interval (VBI) 320 , thereby restricting a swap from occurring during the middle of the scanout of any particular image from the front buffer 310 .
  • the back frame buffer 312 may be configured to swap with the front frame buffer 310 only in response to a vertical synchronization pulse that is generated during the vertical blanking interval during scanout 314 of the frames, and the vsync pulse which indicates to the processing unit that the buffers may be swapped may occur at or near the beginning of the vertical blanking interval.
  • each output frame 318 may contain only whole source frames 302 .
  • If the system is instead configured to swap the front and back buffers as soon as the new source frames are ready, e.g., as soon as they are finished rendering into the back buffer 312, tearing may not be completely eliminated, since it is still possible for the buffers to be swapped in the middle of scan out of a particular source image from the frame buffer 308.
  • In that case, the output frame 318 that is scanned out may actually contain portions of consecutive rendered source frames.
  • frame rate of the source content 306 may be detected, and the vertical blanking interval 320 may be adjusted in response to fluctuations in the frame rate, thereby better matching compression rate and timing 328 to the source content. For example, if it is detected that the source frame rate 306 momentarily drops, e.g., due to the complexity of the scene, the vertical blanking interval 320 may be adjusted to delay scanout of one or more frames in response. This may be beneficial to avoid stuttering or other drawbacks associated with an instantaneous frame rate that is below normal, better matching the rate and timing of the compression and streaming to the source content.
  • the vertical blanking interval 320 may be adjusted to scan frames out of the frame buffer 308 more quickly. This may be beneficial, for example, when an encoder is operating on a fixed budget per frame. If the encoder receives the frames sooner, it may be able to compress the frames at a higher resolution, thereby improving image quality for the remote viewer.
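  • One way to picture this adjustment is as pacing the number of vertical blanking lines to the measured source frame period. The following C sketch is a simplified illustration under assumed numbers (a 1125-line, 60 Hz nominal timing and a 45-line minimum blanking interval); it is not the patent's implementation, and the function name vblank_lines is made up.

```c
#include <stdio.h>

/* Pace the vertical blanking interval to the measured source frame
 * period: the output frame stretches (more blanking lines) when the
 * source slows down and shrinks toward a minimum when it speeds up.   */
static int vblank_lines(double source_frame_ms, double line_time_ms,
                        int vdisplay, int min_vblank_lines)
{
    /* Lines that fit in one source frame period, minus visible lines. */
    int total_lines = (int)(source_frame_ms / line_time_ms);
    int blank = total_lines - vdisplay;
    return blank > min_vblank_lines ? blank : min_vblank_lines;
}

int main(void)
{
    const double line_ms = 1000.0 / (1125.0 * 60.0);  /* ~14.8 us/line */

    /* Source frame rate dips from ~60 fps to 50 fps and recovers.     */
    double frame_ms[] = { 16.7, 20.0, 16.7 };
    for (int i = 0; i < 3; i++)
        printf("source frame of %.1f ms -> %d vertical blanking lines\n",
               frame_ms[i], vblank_lines(frame_ms[i], line_ms, 1080, 45));
    return 0;
}
```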
  • Turning now to FIG. 4, an illustrative example of a method 400 of adjusting a vertical blanking interval in response to detected fluctuations in the frame rate of source content is depicted.
  • The illustrative method of FIG. 4 may involve a graphics processing and transfer technique that is similar to the example depicted in FIG. 3.
  • the method 400 may include rendering a plurality of source frames in sequence into a frame buffer with a processing unit, as indicated at 431 .
  • Each new source frame may be rendered in response to one or more corresponding draw calls received based on the output of an application 433.
  • the processing unit for rendering the source frames may be a GPU having a specialized role of processing graphics for the application, while the application itself may be executed by a CPU.
  • a graphics application programming interface (API) may coordinate draw calls from CPU to GPU in order to coordinate the processing tasks and initiate the generation of new source frames according to the state of the application.
  • the method may include scanning out a plurality of output frames in sequence from a frame buffer.
  • the frame buffer may include both a front buffer and one or more back buffers, and the frames may be scanned out directly from the front buffer while new source frames are being rendered into the back buffer.
  • the scanout of the frames may involve the generation of various external signals in addition to the pixel data that is scanned out of the frame buffer, including a vertical synchronization signal, as well as, e.g., a horizontal synchronization signal, pixel clock signal, and data enable signal as described above.
  • the frames may be scanned out with a scanout unit that may include various components configured to generate the signals described above associated with the transfer of frames.
  • Each output frame that is scanned out of the frame buffer may include an image corresponding to a source frame rendered by the processing unit.
  • the method may include capturing the source frames scanned out of the frame buffer.
  • the frames may be scanned out of the frame buffer and sent through a video interface connection to a video capture unit, which may capture the source content in step 434 .
  • the method 400 may also include compressing the captured frames using an encoder, e.g., a video codec.
  • the frames may be compressed in any of a variety of known video compression formats, such as h.264 or another suitable format for transmission over a network having a limited bandwidth.
  • the frames may be encoded using a low latency encoder for transfer to a remote device in real-time.
  • the compressed frames may then be transmitted to one or more remote devices over a network, such as the Internet, as indicated at 438 .
  • the illustrative method 400 depicted in FIG. 4 may also compensate for frame rate fluctuations in the rendering of source content in accordance to certain aspects of the present disclosure. This may include detecting one or more changes or fluctuations in the frame rate of the source content, as indicated at 440 . In response to one or more fluctuations in the frame rate of the source content, a vertical blanking interval that is generated as part of the scanout process 432 may be adjusted to compensate for the frame rate fluctuations, as indicated at 442 .
  • the timing and rate of compression 436 and streaming 438 may be better matched to the rate at which the source content is generated at 431 .
  • The method may involve delaying a scanout of one or more frames in response to one or more downward fluctuations in the frame rate, in which the speed at which the source content is generated momentarily decreases.
  • The method may likewise involve speeding up the rate at which frames are scanned out of the frame buffer in response to one or more upward fluctuations in the frame rate, in which the speed at which the source content is generated increases.
  • The method may involve both, depending on the nature of the fluctuations in the source frame rate at different periods of time during the rendering process.
  • the manner in which the frame rate is detected at 440 and the vertical blanking interval is adjusted at 442 may be performed in a variety of ways according to aspects of the present disclosure.
  • the frame rate may be detected, as indicated at 440 , by placing a marker into memory after each new source frame is finished rendering and tracking the timing between each marker.
  • In certain implementations, a scanout unit and a processing unit, e.g., a GPU, may be controlled by a graphics driver, sometimes known as a “display driver” or “GPU driver.”
  • An application, which may optionally be implemented by a separate processing unit, e.g., a CPU, may send drawing commands (e.g., draw calls) to the graphics driver, and the GPU may render the source frames to a frame buffer in response.
  • the application may place a marker in a memory buffer, e.g., the back frame buffer, which notifies the GPU that the frame is ready.
  • the graphics driver may track the time since the last “frame ready” marker, and the time between markers may be indicative of the frame rate of the source content. If the frame ready marker has not been received by some deadline, e.g., the time between consecutive markers exceeds some pre-defined time threshold between markers, then the graphics driver may delay scanout for the next frame. Accordingly, the graphics driver may be configured to track the time between each source frame that is finished rendering into the frame buffer, and whenever the time exceeds some pre-defined threshold, it may delay scanout of a subsequent frame for a finite period of time.
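  • The marker-based tracking described above might be sketched as follows. This is a hypothetical illustration only; the structure and function names (frame_rate_tracker, on_frame_ready, should_delay_scanout) are invented, and a real graphics driver would read its timestamps from the GPU or OS rather than take them as arguments.

```c
#include <stdbool.h>
#include <stdio.h>

/* The driver records a timestamp for every "frame ready" marker and
 * compares the elapsed time against a threshold (e.g., the nominal
 * frame time) to decide whether scanout of the next frame should be
 * delayed.                                                            */
typedef struct {
    double last_marker_ms;   /* time of the last frame-ready marker    */
    double threshold_ms;     /* deadline for the next marker           */
} frame_rate_tracker;

/* Called whenever a frame-ready marker is placed in memory.           */
static void on_frame_ready(frame_rate_tracker *t, double now_ms)
{
    double delta = now_ms - t->last_marker_ms;
    printf("frame ready after %.1f ms%s\n", delta,
           delta > t->threshold_ms ? " (downward fluctuation)" : "");
    t->last_marker_ms = now_ms;
}

/* Polled by the scanout side: delay the next frame if the marker is
 * late, i.e., the source frame rate has momentarily dropped.          */
static bool should_delay_scanout(const frame_rate_tracker *t, double now_ms)
{
    return (now_ms - t->last_marker_ms) > t->threshold_ms;
}

int main(void)
{
    frame_rate_tracker t = { .last_marker_ms = 0.0, .threshold_ms = 16.7 };

    on_frame_ready(&t, 16.5);                                   /* on time   */
    printf("delay scanout? %d\n", should_delay_scanout(&t, 34.0)); /* late -> 1 */
    on_frame_ready(&t, 36.0);                                   /* slow frame */
    return 0;
}
```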
  • the graphics driver may modify registers on the GPU (e.g. over the PCI-express bus) to adjust GPU state.
  • the graphics driver may also send commands to the GPU, which is often done by sending one or more commands to the GPU using a command buffer.
  • register access is synchronous (blocking), while sending commands through buffers is asynchronous. In the past everything was done using registers, which was typically slow, so presently registers are used mostly for configuration purposes.
  • the graphics driver may adjust a GPU register or send a command to the GPU through a command buffer that would delay the scanout.
  • the driver could send a command to put the scanout unit to sleep for a finite period of time, e.g., power down the scanout unit to delay scanout for a finite period of time.
  • the vertical blanking interval may be adjusted by maintaining an active vsync signal for a longer period of time between scanout in response to a slower frame rate, e.g., a detected downward fluctuation in the frame rate.
  • The graphics driver may generate dummy lines at the end of each frame, e.g., by maintaining an active vertical synchronization signal, until a frame ready marker is received. If the timing of the vertical blanking interval is synchronized to the timing of the frame ready markers, then the rate at which frames are scanned out to a video capture unit and/or encoder may be better synchronized to the frame rate of the source content. This may result in a dynamic vertical blanking interval, generated by the scanout unit, whose length varies with the frame rate. The net effect may be that the vertical blanking interval between scanned out frames becomes longer or shorter in response to changes in the frame rate of the source content.
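  • A minimal sketch of the dummy-line idea, assuming a 45-line nominal blanking interval and a stand-in frame_ready() check in place of the real frame ready marker; it is an illustration of the concept, not the patent's implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* After the visible lines of a frame have been scanned out, keep
 * emitting "dummy" blanking lines until the next source frame is
 * ready, instead of starting the next frame at a fixed time.          */
static bool frame_ready(int lines_emitted)
{
    /* Stand-in for checking the frame-ready marker: pretend the next
     * source frame only becomes available 60 lines into blanking.     */
    return lines_emitted >= 60;
}

int main(void)
{
    const int min_vblank_lines = 45;   /* nominal blanking length      */
    int lines = 0;

    /* Emit the mandatory blanking lines first...                      */
    while (lines < min_vblank_lines)
        lines++;

    /* ...then stretch the blanking interval, one dummy line at a time,
     * until the marker arrives.                                       */
    while (!frame_ready(lines))
        lines++;

    printf("vertical blanking lasted %d lines (%d dummy lines added)\n",
           lines, lines - min_vblank_lines);
    return 0;
}
```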
  • FIG. 4 is provided for purposes of illustration only, and implementations of the present disclosure include other techniques for compensating for frame rate beyond adjusting the vertical blanking interval as shown in FIG. 4 .
  • The rate of compression and streaming may also be adjusted after the scanout phase in response to fluctuations in the source frame rate. For example, frames may be scanned out of the frame buffer and transferred using a display interface, which may include a video transmitter and receiver. When the frame rate fluctuates downward, one or more components of the display interface may be momentarily disabled to prevent duplicate frames from being received by the encoder, thereby preserving compression resources. For example, the video transmitter may be momentarily disabled to prevent transfer of one or more frames to the encoder. It is noted that, in contrast to some of the implementations described above, this technique would only work to decrease the compression and/or streaming rate, not increase it.
  • a cloud gaming implementation may involve source graphics content which is generated by a gaming console or other video gaming system, and the graphics frames may be scanned out to a separate streaming server system that then may capture, compress, and stream the frames to a remote client device.
  • the encoder may have no way of adjusting to the frame rate of the source content, since it is on a separate system and receives frames captured after scanout.
  • adjusting the scanout and the vertical blanking interval generated during scanout in response to detected changes in the graphics source's frame rate may better match the timing of the frames received by the separate system.
  • certain implementations may involve an encoder and a streaming unit, such as streaming software, which operate on the same device as the graphics source.
  • the system may be configured to forgo the encoding of one or more frames received during scanout in response, e.g., to preserve compression resources.
  • the system may be configured to still encode the frame, but the streaming unit may forgo sending one or more duplicate frames in response, e.g., to preserve network bandwidth. If these units are all part of the same device as the graphics source, the graphics driver may be configured to notify the encoder and/or streaming software so that they can respond to fluctuations in the frame rate in this manner, which may not be possible when the encoding and streaming device is separate from the graphics source device.
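  • The following C sketch illustrates one possible way such a notification could be acted upon; the structure and function names are hypothetical, and whether the duplicate frame is dropped before encoding or only before streaming is a configuration choice, as described above:

        /* Hypothetical sketch: the driver flags a duplicate frame, and either the
         * encoder skips compressing it or the streaming unit skips sending it. */
        #include <stdbool.h>

        struct frame { const void *pixels; unsigned seq; };

        extern void encode_and_queue(const struct frame *f);  /* compress the captured frame */
        extern void stream_queued_frame(void);                /* send the most recent output */

        void handle_scanned_out_frame(const struct frame *f, unsigned *last_seq,
                                      bool skip_encode_on_duplicate)
        {
            bool duplicate = (f->seq == *last_seq);
            *last_seq = f->seq;

            if (duplicate && skip_encode_on_duplicate)
                return;                    /* forgo encoding: saves compression resources */

            encode_and_queue(f);

            if (!duplicate)
                stream_queued_frame();     /* forgo sending duplicates: saves bandwidth    */
        }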
  • FIG. 5 provides an overview of an example hardware/software architecture of a system for generating, capturing, compressing, and streaming video frames according to various implementations of the present disclosure.
  • the system 500 may be configured to compensate for frame rate fluctuations in rendered graphics content according to various aspects of the present disclosure.
  • the system 500 may be configured to perform a method having features in common with the method of FIG. 4 .
  • system 500 may include a first computing device 550 and a second computing device 552 that are connected by a display interface 554 (sometimes also referred to herein as a “video interface”).
  • the first computing device 550 may be a graphics source configured to generate and render graphics
  • the second computing device 552 may be a streaming device configured to compress the frames and send the frames over a network 556 to a remote client device 558 .
  • the graphics source 550 may be a terminal of the streaming server 552 that is configured to scanout rendered frames to the host system 552 through a display interface connection 554 , such as HDMI, VGA, DVI, and the like.
  • the graphics source device 550 may include one or more processing units 560 , 562 and one or more memory units 564 configured to implement various aspects of graphics processing, transfer, and frame rate compensation in accordance with the present disclosure.
  • the one or more processing units include at least two distinct processing units, a central processing unit (CPU) 560 and a graphics processing unit (GPU) 562 .
  • the CPU 560 may be configured to implement an application, e.g., a video game, the state of which may determine the content of graphics to be output.
  • the CPU 560 may be configured to implement one or more graphics drivers 566 to issue drawing commands to the GPU 562 , as well as control scanout of frames.
  • the GPU 562 may be configured to render new source frames into a frame buffer 568 , which may be a portion of the one or more memory units that temporarily holds each rendered source frame in sequence.
  • the frame buffer 568 may include multiple buffers, including a front buffer and one or more back buffers, and the GPU 562 may be configured to swap the buffers when it is finished rendering new source frames in the back buffer.
  • the graphics source may also include a scanout unit 570 , which may be configured to scan rendered frames out of the frame buffer 568 in accordance with various aspects described above.
  • the scanout unit may be configured to scan output frames line by line directly out of a front buffer of the frame buffer 568 , as well as generate a vertical synchronization signal and other external signals during the scanout process, e.g., as described above, and the vertical synchronization signal may be generated so as to generate a vertical blanking interval between each source frame that is retrieved from the frame buffer 568 .
  • the GPU 562 may be controlled by the CPU 560 via the one or more graphics drivers 566 , which may be implemented as one or more software programs that cooperate with an operating system of the graphics source 550 , and which may be embodied in a non-transitory computer readable medium for execution by the CPU or other processing unit.
  • the graphics source device 550 may be configured to detect one or more fluctuations in the frame rate of the source content rendered by the GPU 562 , and the device may be configured to adjust the scanout in response to the one or more fluctuations. This may be accomplished, for example, by any of the techniques described with reference to FIG. 4 .
  • the one or more processing units 560 , 562 may be configured to place a marker into the one or more memory units 564 when each new source frame is rendered.
  • the one or more memory units 564 may contain a frame buffer 568 into which the GPU 562 renders new frames, and the GPU 562 may be configured to place a marker into the frame buffer 568 when it is finished rendering each new frame.
  • the graphics driver 566 may track the timing of each new marker in the buffer 568 and may make adjustments in response to detected changes, e.g., as described above with reference to FIG. 4 .
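  • As a simple illustration of tracking marker timing, the sketch below derives a per-frame interval from marker timestamps and flags a downward fluctuation when the interval noticeably exceeds a nominal 60 FPS period; the threshold, constant, and function name are illustrative assumptions only:

        /* Hypothetical sketch: derive the interval between frame ready markers and
         * flag a downward fluctuation when a frame takes noticeably longer than a
         * nominal 60 FPS period.  The threshold is an arbitrary illustrative value. */
        #include <stdbool.h>
        #include <stdint.h>

        #define NOMINAL_FRAME_US 16667u   /* ~60 FPS nominal frame period */

        bool detect_downward_fluctuation(uint64_t marker_time_us, uint64_t *prev_marker_us)
        {
            uint64_t delta = marker_time_us - *prev_marker_us;
            *prev_marker_us = marker_time_us;

            /* A frame period more than ~10% over nominal counts as a downward fluctuation. */
            return delta > (NOMINAL_FRAME_US + NOMINAL_FRAME_US / 10u);
        }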
  • the driver may be configured to make adjustments in the scanout timing to compensate for detected fluctuations in the frame rate rendered by the GPU 562 .
  • it may be configured to adjust a vertical blanking interval in response to one or more fluctuations to increase or decrease an instantaneous rate of the scanout of frames. This may be accomplished, e.g., by the graphics driver 566 temporarily extending the portion of the vsync signal that is generated between frames or by putting the scanout unit 570 to sleep for a finite period of time. This may also be accomplished by momentarily disabling the display interface 554 to prevent the transfer of one or more frames.
  • the scanout of the frames by the scanout unit 570 may drive new frames to streaming device 552 over the display interface 554 , as shown in FIG. 5 .
  • the streaming server 552 may include a frame capture unit 576 , such as a video capture card, that is configured to capture the source frame images contained within each output frame transferred over the display interface 554 .
  • the frame capture unit 576 may be specially adapted to coordinate with the uniquely tailored frames that may be rendered by the GPU 562 and sent by the scanout unit 570 .
  • the frame capture unit may be configured to count the lines and/or pixels received in order to only capture those visible pixels which contain the desired source content.
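  • A minimal sketch of such pixel and line counting is shown below, assuming the received pixels are available as a flat array; real capture hardware would operate on a live pixel stream, but the counting logic is the same in spirit:

        /* Hypothetical sketch: count pixels per line and lines per frame, keeping
         * only the visible region of each received output frame. */
        #include <stdint.h>

        void capture_visible_region(const uint32_t *rx_pixels,  /* pixels as received   */
                                    uint32_t *dst,              /* captured frame image */
                                    unsigned h_total, unsigned v_total,
                                    unsigned h_visible, unsigned v_visible)
        {
            for (unsigned line = 0; line < v_total; ++line) {
                for (unsigned px = 0; px < h_total; ++px) {
                    uint32_t value = rx_pixels[line * h_total + px];
                    if (line < v_visible && px < h_visible)
                        dst[line * h_visible + px] = value;  /* visible pixel: keep it */
                    /* else: blanking or extra pixel, not part of the source image     */
                }
                if (line + 1 == v_visible) {
                    /* Visible image complete: capture could be signaled here (e.g., by
                     * an interrupt) so that compression may begin before the frame ends. */
                }
            }
        }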
  • the streaming computing device 552 may also include an encoder, e.g., a video codec, configured to compress the source frames captured by the frame capture unit 576 .
  • the streaming computing device 552 may also include a streaming unit 580 that is configured to send the compressed frames over the network 556 to one or more client devices 558 .
  • the client 558 may also be a computing device having at least one processor unit 586 coupled to at least one memory unit 588 , and the system 500 may be configured to implement video streaming in real-time, so that the client device 558 may decompress the received frames with a decoder 582 and display the frames with a display device 584 in real-time with minimized latency from when they are rendered by the GPU 562 of the graphics source 550 .
  • While various components of FIG. 5 are depicted separately for purposes of explanation, it is noted that many of the illustrated components may be physically implemented as common or integral units.
  • the scanout unit 570 may be physically implemented as part of the GPU 562 , or it may be a separate unit. Similarly, in certain implementations the scanout unit may be physically implemented as separate components or may be a physically integrated unit.
  • the scanout unit 570 may generate a plurality of signals, including a vertical synchronization signal, a horizontal synchronization signal, a pixel clock, and the like.
  • the scanout unit may be a single integral unit which contains components for generating all of these signals, or the scanout unit 570 may be made up of distinct signal generators for these components. For example, a pixel clock generator of the scanout unit and a vertical synchronization signal generator of the scanout unit do not need to be part of the same physical chip.
  • the one or more memory units 564 may include a plurality of distinct memory units for different purposes.
  • the memory unit 564 may optionally include a dedicated graphics memory unit that is separate from a main memory unit.
  • the graphics memory may be configured to hold the frame buffer, while the main memory may be configured to hold data and programs implemented by the CPU 560 .
  • the video encoder 578 and/or the streaming unit 580 may optionally be implemented as one or more software programs which are configured to be stored on one or more memory units 574 and executed by the one or more processor units 572 of the streaming computing device 552 .
  • the encoder 578 and the streaming unit 580 may be separate sets of code or may be part of the same program in accordance with implementations of the present disclosure.
  • FIG. 5 is a simplified schematic provided for purposes of explanation, but the system 500 may include many additional aspects to support graphics rendering, compression, streaming, and other features in support of cloud computing. Moreover, configuration of the illustrated example system 500 may be particularly beneficial in implementations involving cloud gaming for console platforms, and it is noted that the system 500 may be configured in accordance with systems described in U.S. application Ser. No. 14/135,374, entitled “VIDEO LATENCY REDUCTION” and fully incorporated by reference herein, to further support such applications.
  • the graphics source system 550 of the present application may have features in common with the terminal system depicted in FIG. 4A of that document, which corresponds to FIG. 6A herein.
  • the streaming server 552 may have features in common with the streaming server depicted in FIG. 4B of that document (corresponding to FIG. 6B herein), and the frame capture unit 576 may have features in common with the video capture card depicted in FIG. 5 of that document (corresponding to FIG. 7 herein).
  • FIGS. 6A and 6B provide an overview of an example hardware/software architecture for generating and capturing video frames according to various implementations of the present disclosure.
  • the example system of FIGS. 6A and 6B may be a system for streaming video games and other applications using a streaming server and a terminal system.
  • FIG. 6A illustrates an architecture for an example video source according to various aspects of the present disclosure
  • FIG. 6B illustrates an architecture for an example video capture system for capturing video from the video source according to various implementations of the present disclosure.
  • the video source 612 may be a terminal configured to run an application for cloud streaming, and may be an existing embedded system, video game console, or other computing device having a specialized architecture.
  • the video capture system 602 may be a streaming server configured to capture and stream the video output from the terminal system to a client device.
  • the illustrated architecture of FIGS. 6A and 6B is provided by way of example only, and various implementations of the present disclosure may involve reducing video transfer time using other architectures and in other contexts beyond cloud gaming and cloud computing applications.
  • the example video source may be a terminal system 612 that is configured to run an application 608 , which may involve a video output to be captured by the video capture system 602 .
  • the application may be a video game having rendered graphics as a video output, which may be transferred to the streaming server 602 for sending over the network.
  • the terminal system may include a graphics processing unit (GPU) 650 , which together with the graphics memory 649 may be configured to render the application output 608 as a sequence of images for video frames.
  • the images may be output as a sequence of video frames that have visible pixels which contain the pixel data for the image of each frame for display on a display device, and the video frame images may be sent to the video capture system 602 through a video interface, such as HDMI, as output frames having both visible and invisible pixels.
  • the video source may be configured to add extra pixels so that enlarged output frames are sent through the video interface. Further examples of how extra pixels may be added to the output frames are described below.
  • the video source 612 may include a graphics driver 652 configured to interface with the GPU 650 for rendering the application video signal as a sequence of video frame images.
  • the GPU 650 may generate video frame images for video signal output in accordance with the application 608 , and the graphics driver 652 may coordinate with the GPU 650 to render the video frame images into a source video frame format having a particular supported display image resolution, e.g., 720p.
  • the GPU 650 together with the graphics driver 652 may render video frame images in a format having a plurality of visible image lines, with each visible image line having a plurality of visible image pixels.
  • the graphics driver 652 may be configured to add extra pixels in addition to the frame image pixels rendered by the GPU, e.g., by rendering the frame in an enlarged frame having a greater resolution than the number of pixels in the video frame image. Further examples of enlarging a frame by rendering it in an enlarged frame format are described below.
  • the video source 612 may include a frame buffer 651 and a scan out unit 653 , which may be operatively coupled to the GPU 650 , and, in certain implementations, may be embodied in the GPU 650 .
  • the GPU 650 may be configured to render video images to the frame buffer 651 , e.g., based on the output of the application 608
  • the scan out unit 653 may be configured to retrieve the frame images from the frame buffer 651 and generate additional external signals for sending the image as an output frame over the interface, e.g., as described above.
  • the scan out unit 653 may include a pixel clock generator 641 for generating a pixel clock signal for the scan out of the frame and/or a sync signal generator 631 for generating the synchronization signals, e.g., hsync and vsync signals, with each output frame.
  • the sync signal generator 631 may add an hsync signal that has a horizontal blanking region at the end of each line of the frame, and corresponds to a plurality of invisible pixels at the end of each line of the frame.
  • the signal generator 631 may also add a vsync signal that has a vertical blanking region at the end of each frame and corresponds to a plurality of invisible lines at the end of the frame.
  • the pixel clock generator 641 may generate a clock signal having a pulse associated with each pixel in the output frame generated for transfer over the video interface, including the total number of active pixels retrieved from the frame buffer 651 and the total number of pixels corresponding to the synchronization regions inserted between the active pixels. It is noted that the pixel clock generator 641 and/or the sync signal generator 631 may be contained as part of the scan out unit 653 , and the scan out unit 653 may be contained as part of the GPU 650 . However, it is emphasized that this is just an illustrative example, and that one or more of the components may be implemented as separate components.
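  • As a worked example of the relationship between total pixels and the pixel clock, standard 720p60 timing has HTOTAL = 1650 and VTOTAL = 750 (both including blanking), giving a pixel clock of 1650 * 750 * 60 = 74.25 MHz; the short program below simply evaluates that arithmetic:

        /* Worked example: the pixel clock covers every pixel of the output frame,
         * visible and invisible alike.  Standard 720p60 timing (HTOTAL = 1650,
         * VTOTAL = 750, both including blanking) gives 1650 * 750 * 60 = 74.25 MHz. */
        #include <stdio.h>

        int main(void)
        {
            unsigned h_total = 1650, v_total = 750, refresh_hz = 60;
            double pixel_clock_hz = (double)h_total * v_total * refresh_hz;
            printf("pixel clock = %.2f MHz\n", pixel_clock_hz / 1e6);  /* prints 74.25 MHz */
            return 0;
        }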
  • the video source may include a video transmitter 656 coupled to a video communication interface, and the transmitter may transfer the video signal to the video capture system 602 through a serial communication interface, e.g., pixel by pixel in sequence, with the sync signals indicating transitions between lines and frames in the sequence accordingly.
  • the pixel clock generator 641 may generate a clock signal to synchronize the timing of each pixel, e.g., based on the total number of pixels and the frame rate of the video content, as discussed above.
  • the pixel clock generator 641 may generate a pixel clock with an increased transfer frequency for each pixel, based on extra pixels contained within the active display region of each image, extra pixels contained within the synchronization region, or both.
  • the video interface may also support audio transfer, such as with an HDMI interface, and an audio signal output from the application may also be submitted through the video interface. In alternative implementations, a separate audio interface may be used.
  • the video source may be configured to send the output video signal to a video capture device 620 coupled to a computing system 602 .
  • the capture device may receive the video pixel data contained in the transferred video signal so that it may be captured in digital form and compressed by the streaming server 602 .
  • the streaming server 602 may include a video capture process 634 and/or an encoder which may be configured to compress each video frame received from the video capture device.
  • a streaming server process 646 may be configured to transmit the compressed video stream to a remotely located device so that the compressed video stream may be decompressed and displayed on a remote display device.
  • the video capture device may contain video capture logic 628 which is specially configured to capture only the visible pixels of a video frame image contained within an enlarged frame in accordance with various aspects of the present disclosure.
  • the graphics rendering components of the video source may be configured to insert the visible image pixels of a video frame image in only a portion of the active display region of a particular format, and the video capture device 620 may be configured to count lines and/or pixels within each frame that is received in order to know when capture of the display image is complete. This may be based on a predetermined configuration of how frames are rendered by the video source.
  • the video capture device 620 may determine that capture is complete based on the presence of a synchronization signal, e.g., a VSYNC signal in implementations where frames are enlarged by adding synchronization lines.
  • the streaming server 602 or other computing device may be configured to begin compressing the video frames as soon as capture of the visible display image within each frame is complete.
  • the capture device may receive the video signal through a communication interface that is compatible with the video signal output from the video source 612 , and the video interface may be coupled to a video receiver 630 .
  • the video capture device may include one or more ports as part of an audio and/or video communication interface, e.g., HDMI ports or other ports as described below with reference to FIG. 7 .
  • the interface device 620 may include a specialized processing unit containing the logic 628 that is operatively coupled to the video signal interface, with the specialized processing unit having logic 628 that is dedicated to performing functions associated with A/V capture, and optionally other functions associated with cloud streaming, for signals received through a connector from the terminal system 612 .
  • the logic 628 may also support communication with the host system 602 through an additional communication interface, which may communicate with a peripheral bus of the host system 602 in order to interface with an A/V process embodied in the host system.
  • the interface device 620 may be an add-on card which communicates with the host system 602 memory/CPU through an expansion interface, such as peripheral component interconnect (PCI), PCI-eXtended (PCI-X), PCI-Express (PCIe), or another interface which facilitates communication with the host system 602 , e.g., via a peripheral bus.
  • the host system may include a capture device driver 626 to support the exchange of signals via the interface device 620 .
  • the specialized processing unit may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or another specialized processing unit having dedicated units of logic configured in accordance with principles described herein.
  • the logic units 628 of the specialized processing unit may also include dedicated logic to support various functions for cloud streaming in addition to audio/video capture of the output from an application 608 running on the terminal system 612 , such as storage virtualization in coordination with a storage process 632 .
  • an A/V capture unit embodied in the logic 628 of the specialized processing unit may communicate with the capture device driver 626 in the host system 602 , and an A/V process 632 embodied in the host system 602 , e.g., a software application running on a central processing unit 604 .
  • when the terminal system 612 sends video pixels to the video capture device, this video data may pass from the graphics driver 652 , through the video capture unit contained in the logic 628 and the capture device driver 626 , to the A/V process 632 embodied in the host system.
  • the A/V process 632 may then compress the captured video frames, and the compression may begin sooner in accordance with an increase in the pixel clock caused by extra pixels.
  • the video sink 602 may optionally be a streaming server adapted to transmit over a network a stream of video output from the application 608 running on the terminal system 612 .
  • the streaming server 602 may include an Ethernet adapter or other network adapter 636 , and a corresponding Ethernet driver or other network driver 638 for the operating system of the host 602 , with a compatible network library 639 providing protocol support for the network communication.
  • the host system may also include system memory 640 , controlled by a corresponding memory driver 642 (e.g., tmpfs) and supported by a file system library 644 .
  • a streaming server process 646 may be run on the host system 602 to perform functions associated with providing a real-time stream to a client device connected over a network (not pictured in FIGS. 6A-6B ).
  • the terminal system 612 may include various other components to support the application 608 , which may be, e.g., video game software designed for an existing embedded platform.
  • the terminal system 612 may include a file system layer 627 to access storage, as well as various components to support graphics storage access.
  • the terminal system and the capture device 620 may be configured to implement a storage virtualization technique. An example of such a technique is described in commonly-assigned, co-pending U.S. application Ser. No. 13/135,213, to Roelof Roderick Colenbrander, entitled “MASS STORAGE VIRTUALIZATION FOR CLOUD COMPUTING”, filed Dec. 19, 2013, the entire contents of which are herein incorporated by reference.
  • FIG. 7 depicts a schematic diagram of an example capture device 720 that may be implemented on the interface card 620 , some of its components, and the internals of an example specialized processing unit 760 , in accordance with various implementations of the present disclosure.
  • the capture device 720 may be configured as an add-on card having components attached to a printed circuit board (PCB), and the capture card 720 may interface with a peripheral bus of a host system through a host hardware interface 762 , such as a peripheral expansion port or other expansion communication interface which allows communication with the peripheral bus of a host system when connected.
  • the example capture device 720 of FIG. 7 includes various optional components that are not necessary for video capture, but which may provide additional functionality for cloud computing and other implementations.
  • the example specialized processing unit 760 may include various blocks of logic dedicated to specialized functionality in accordance with various aspects of the present disclosure.
  • the specialized processing unit may be implemented, e.g., as an FPGA, ASIC, or similar specialized processing unit.
  • the specialized processing unit 760 may include a host interface block 764 which implements part of a protocol stack for the communication interface between the interface card 720 and a peripheral bus of a host system (not pictured in FIG. 7 ) for the capture device 720 .
  • Communication busses like PCI-Express can be thought of as a protocol stack having several layers. Different communication protocols have different layers. Typically there is an ‘application layer’ at the top, then some transport related layers in the middle and some physical layer at the bottom.
  • the host interface block 764 need not implement all layers of such a protocol stack. Instead, the host interface block may take care of the physical layer, which is responsible for putting digital information on a communication link, e.g., through electrical or optical signals.
  • the host interface block may also be responsible for portions or possibly all of the ‘transport layers’ of the protocol stack, but need not be responsible for the application layer.
  • the host interface block 764 may be a hard PCIe block for communication through a PCI-Express connection, which embeds the protocol stack for a PCIe interface or other interface for accessing a local bus of the host system.
  • the host interface block 764 may be integrated into a memory access interface unit 766 which, together with other logic units of the specialized processing unit 760 , may directly access system memory of a host system through the host hardware interface 762 , e.g., using an interrupt or request to the host system.
  • the memory access interface 766 may include components that provide memory access and interrupt functionality.
  • the host interface block 764 may be configured to provide a connection between an on-chip-interconnect 772 and the host hardware interface 762 in a way that makes any on-chip device accessible from the host system using memory mapped Input/Output (I/O). This functionality would allow the host system to program any device connected to the on-chip-interconnect 772 , such as the mass storage controller 770 , memory controller 776 , or GPIO 782 .
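  • A minimal sketch of what such memory mapped I/O programming could look like from the host side is given below; the BAR offsets and the power-cycle example are invented for illustration and do not reflect the actual register map of any device described herein:

        /* Hypothetical sketch: host-side programming of on-chip blocks through
         * memory mapped I/O.  The BAR offsets are invented for illustration and do
         * not reflect the register map of any real device. */
        #include <stdint.h>

        #define MASS_STORAGE_CTRL_OFF 0x0000u
        #define MEM_CTRL_OFF          0x1000u
        #define GPIO_OFF              0x2000u

        static inline void mmio_write32(volatile uint8_t *bar, uint32_t off, uint32_t val)
        {
            *(volatile uint32_t *)(bar + off) = val;
        }

        /* Example use of the GPIO window, e.g., for power control of a terminal system. */
        void power_cycle_terminal(volatile uint8_t *bar)
        {
            mmio_write32(bar, GPIO_OFF, 0x0u);  /* assert power-control line  */
            mmio_write32(bar, GPIO_OFF, 0x1u);  /* release power-control line */
        }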
  • the memory access interface 766 may also include an interrupt connection 765 that allows any connected device, e.g., the A/V capture units 778 , to generate an interrupt upon an event (e.g., a captured video frame image is complete). It is desirable for the memory access interface to provide this functionality if there can be only one device interfacing with the host hardware interface 762 .
  • the memory access interface 766 may also (optionally) include a direct memory access (DMA) engine 767 .
  • the DMA engine 767 may implement data move operations between the host interface block 764 and the host hardware interface 762 .
  • the memory access interface unit 766 may implement portions of a protocol stack (e.g., PCI Express) not provided by the host interface block 764 , such as connecting the host interface block 764 to the on-chip-interconnect 772 .
  • the capture device 720 may include one or more video and optionally audio/video communication interfaces 780 , which may be implemented in the form of one or more HDMI ports 771 and/or connectors, or other video signal communication interfaces, and which may be attached to a circuit board of the capture device 720 .
  • the interface card 720 may contain two HDMI ports to facilitate connection to two distinct video sources/terminal systems, although it is noted that the capture device may alternatively contain a different number of video connectors so that a single capture device 720 may service a different number of video sources or terminal systems.
  • for each of the video signal connectors 780 there may be a corresponding video capture unit 778 embodied in the specialized processing unit 760 that is compatible with the particular video communication interface (e.g., HDMI, DVI, VGA, etc.).
  • the one or more video capture units 778 of the specialized processing unit may be connected to other logic units of the specialized processing unit 760 through the on-chip interconnect 772 , which may provide each of the video capture units 778 access to host system interface components (e.g., PCI-Express).
  • the on-chip interconnect may conform to a standard on-chip bus architecture designed to connect functional blocks on a specialized processing unit (e.g., an FPGA or ASIC).
  • the components of the specialized processing unit may be interconnected using a master-slave architecture, e.g., an Advanced Microcontroller Bus Architecture (AMBA), such as AXI4 or AXI4-Lite, or another suitable on-chip bus architecture.
  • AXI4 may be used for large data transport and AXI4-Lite may be used for low performance connections or for configuration purposes.
  • the on-chip interconnections of the specialized processing unit logic blocks may be configured according to a master-slave type configuration as shown in FIG. 7 .
  • “M” and the corresponding bold lines represent a master connection
  • “S” and the corresponding dotted lines represent a slave connection
  • “Ctrl” represents control.
  • the interface device 720 may include one or more memory units 774 which may be controlled by a memory controller 776 provided in the logic of the specialized processing unit 760 .
  • the memory unit may support data transport between a terminal system connected through the mass storage interface 768 and a host system connected through the host hardware interface 762 , in accordance with data requests issued by the terminal system, e.g., for mass storage virtualization.
  • the memory unit 774 may be a temporary RAM unit, such as DDR3 RAM, or another volatile memory unit configured to temporarily store data requested by read requests issued by the terminal system, in accordance with principles described herein.
  • the memory controller 776 may be connected to the on chip bus architecture 772 to perform memory read/write operations according to signals received from other logical units of the specialized processing unit 760 .
  • a graphics driver and/or scanout unit of a video source (not pictured in FIG. 7 ) connected through the video interface 780 may generate enlarged output video frames having extra pixels to be captured by the capture card 720 .
  • the video capture unit(s) 778 may be configured to determine when each frame's visible display image pixels have been captured and omit the extra pixels in each frame from capture, discarding these extra pixels because they contain unneeded data.
  • the captured video data for each frame may be transmitted to a video capture process in a host system using an interrupt through the host hardware interface 762 for further processing, compression, and/or transmission over a network. Compression may begin sooner for a given frame rate because a lower proportion of the pixels within each frame need to be transmitted in order to transfer all of the visible image pixels in the frame.
  • each of the one or more A/V capture logic units 778 may be operatively coupled to a corresponding A/V receiver 730 , each of which may in turn be connected to a suitable A/V hardware interface 780 , such as an HDMI port 771 or other A/V connection port as shown in FIG. 7 .
  • A/V output from the terminal system may be connected to the A/V receiver 730 through the A/V interface 780 using a compatible A/V connector.
  • the A/V capture unit 778 may communicate with the interface device driver and A/V process on the host system through the host hardware interface 762 , which may be connected to a host system bus (e.g., a peripheral bus), and the host system may then deliver the A/V stream to a client device over a network.
  • the interface device may optionally include various other components which provide additional functionality for streaming applications run on a terminal system, such as cloud gaming streaming.
  • the specialized processing unit 760 may also include one or more mass storage device controllers 770 for emulating a storage device for one or more terminal systems.
  • the interface device 720 may also include one or more general purpose input/output (GPIO) blocks 782 to support additional functionality.
  • each of the GPIO blocks may be connected to a corresponding one of the terminal systems to provide additional functionality, such as power control of the terminal systems and other functionality.
  • the specialized processing unit 760 may be implemented, e.g., as an FPGA, ASIC, or other integrated circuit having blocks dedicated to certain functionality, such as A/V capture, a mass storage device controller, memory controller, DMA engine, and the like, in accordance with various aspects of the present disclosure.
  • one or more of these units may be provided as reusable units of logic or other chip design commonly referred to in the art as IP blocks or IP cores.

Abstract

Aspects of the present disclosure relate to systems and methods configured to adjust the timing of frame scanout in response to fluctuations in the source frame rate of graphics content rendered by a processing unit. In certain implementations, a vertical blanking interval generated during scanout of frames from a frame buffer may be adjusted in response to detected changes in the frame rate at which the source content is generated. In certain implementations, rendered graphics may be streamed over a network for display in real-time on a remote device, and the scanout of frames may be adjusted in order to better match the rate of compression and streaming to a variable frame rate of the source content.

Description

    CLAIM OF PRIORITY
  • This application claims the benefit of commonly-assigned, co-pending U.S. Provisional Application No. 61/951,729, to Roelof Roderick Colenbrander, entitled “VIDEO FRAME RATE COMPENSATION THROUGH ADJUSTMENT OF VERTICAL BLANKING”, (Attorney Docket No. SCEA13043US00), filed Mar. 12, 2014, the entire contents of which are herein incorporated by reference.
  • FIELD
  • The present disclosure relates to graphics processing and video transfer. Certain aspects of the present disclosure relate to systems and methods for frame rate compensation when compressing and streaming rendered graphics over a network.
  • BACKGROUND
  • Rendering graphics for transfer to a display device in real-time is a complicated process that incorporates many well-developed techniques to ensure that newly generated frames are transferred from the source to the display with proper timing. Typically, the process begins with a processing unit, commonly a graphics processing unit (GPU) having a highly parallel architecture tailored to the rendering task, rendering each new frame of source content to a portion of memory known as the frame buffer. The newly generated frames of source content, referred to herein as “source frames,” are each temporarily stored in the frame buffer in sequence as images having an array of values that define the visual contents for each pixel in that particular frame. While this is occurring, these images are scanned out of the frame buffer in a process that drives the images sequentially to a display device. Meanwhile, the display device traditionally updates the image displayed on the screen periodically at a fixed frequency, known as the refresh rate, using the images that are scanned out from the frame buffer.
  • In order to send the rendered frames to the display, the images in the frame buffer are typically scanned out line by line and transferred serially (in sequence) over some video interface to the display device. During scanout, certain “invisible” signals are generated to govern the transfer process, so that what is actually transferred to the display device for each frame that is output from the frame buffer, referred to herein as an “output frame,” includes not only the visible pixel values of the frame's image, but other external signals which may be used by the display device to resolve how the received frame is displayed on the screen. This typically includes, among other things, a vertical synchronization signal that is pulsed between each scanned out frame image. The period of time between each scanned out frame image, i.e., between the last line or pixel of one frame image and the first line or pixel of the subsequent frame's image, is known as the “vertical blanking interval.” This vertical blanking interval is generated as part of the scanout process, and the vertical synchronization pulse is used for synchronization between the graphics source and the display.
  • The frequency at which the vertical synchronization pulse occurs during scanout, and, as a result, the frequency at which the vertical blanking interval occurs, is traditionally fixed in relation to the refresh rate of the display device, so that each image scanned out from the frame buffer coincides with each refresh cycle of the display. If the frame rate of the original graphics content, i.e., the rate at which new source frames are drawn to the frame buffer by the GPU, is perfectly in sync with the refresh rate of the display, each new source frame drawn to the frame buffer by the GPU would correspond 1:1 to each image presented on the display device. For example, if the display device has a refresh rate of 60 Hz and the GPU were rendering new images to the frame buffer at a frame rate of 60 FPS in phase with the refresh cycle of the display, each image updated on the screen of the display would perfectly correspond to the source frames generated by the GPU.
  • However, in practice the frame rate of the source content is often variable over time and may fluctuate upward and downward, e.g., based on the complexity of the current scene or other factors associated with the generation of the frames. For example, if the current state of a video game causes too many virtual objects or too much detail within the current field of view, the frame rate may momentarily dip due to an increased computational load required to render the frame. As a result, the frame rate of the source content rendered to the frame buffer may go out of sync with the scanout of the frames from this buffer and the corresponding refresh cycles of the display device. In other words, each “source frame” that is drawn to the frame buffer may not exactly correspond to each “output frame” that is driven to the display device.
  • An undesirable consequence which results from this desynchronization between source frame rate and display refresh is a visual artifact known as “tearing,” aptly named because it appears as if there is a horizontal tear in the displayed image for a particular frame. Essentially, tearing occurs when a frame is scanned out of the frame buffer while that portion of memory is being updated with a new subsequent source frame, e.g., the GPU overwrites the image in the buffer with a subsequent source frame before it is finished being scanned out. As a result, the output frame that is transferred to the display device actually contains the images from two or more consecutive source frames. Correspondingly, when the display device updates its screen contents during that refresh cycle, it simultaneously contains images from different consecutive frames of the source content.
  • To minimize or eliminate tearing, the frame buffer commonly includes multiple buffers, i.e., a front frame buffer from which the frame images are directly scanned out, and one or more back frame buffers into which the GPU may draw new frames while a prior frame is being scanned out of the front frame buffer. When a new frame is finished rendering, a back frame buffer is swapped with the front frame buffer, e.g., by copying the contents to the front buffer or by changing a pointer value which specifies the memory address for the front buffer, so that the contents of the front buffer may be scanned out to the display device. In order to completely eliminate tearing artifacts, this is often combined with a restriction that prevents the GPU from swapping the buffers until just after a refresh cycle of the display device. This is typically accomplished by forcing the GPU to wait for a vsync pulse occurring during the vertical blanking interval before it swaps the buffers. Since this vsync pulse and vertical blanking interval are traditionally generated at fixed intervals in relation to the refresh cycles of the display, this ensures that only whole source frames are scanned out of the frame buffer, preventing tearing artifacts from occurring.
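  • Expressed as pseudocode in C, the conventional vsync-synchronized double buffering described above might look as follows; the helper names are placeholders rather than any actual graphics API:

        /* Pseudocode sketch of vsync-synchronized double buffering: render into the
         * back buffer, wait for the vsync pulse, then swap, so that only whole
         * source frames are ever scanned out.  Helper names are placeholders. */
        #include <stdbool.h>

        extern void render_frame_into(void *back_buffer);  /* draw the next source frame   */
        extern void wait_for_vsync(void);                  /* blocks until the vsync pulse */
        extern void set_front_buffer(void *buffer);        /* point scanout at new front   */

        void render_loop(void *buffers[2])
        {
            int back = 1;                        /* buffers[0] starts as the front buffer  */
            while (true) {
                render_frame_into(buffers[back]);
                wait_for_vsync();                /* swap only during the blanking interval */
                set_front_buffer(buffers[back]);
                back ^= 1;                       /* the old front becomes the new back     */
            }
        }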
  • While this is effective at preventing tearing, another problem known as “stuttering” may result, which may occur when the source frame rate drops and the scanout unit is forced to transfer an identical frame to the display. Stuttering may be especially pronounced when the GPU is restricted to only swapping the buffers between refresh cycles, since the frame rate is effectively restricted to only integral factors of the display refresh rate. Since the GPU must have a completed new source frame in order to perform the swap, if the GPU has not finished rendering the subsequent frame at the time of the synchronization pulse, it must wait another full cycle before it can swap the buffers, even if the new source frame is otherwise finished shortly thereafter. When stuttering occurs, the sudden drop in the perceived frame rate at the display can be distracting to the viewer.
  • In some instances, rather than scanning frames out to a display device, it is desirable to send the frames to some other destination. For example, cloud gaming and other cloud-based video streaming applications may require rendered frames to be compressed and sent over a network for display in real-time, rather than transferred from the frame buffer directly to a display device. In these situations, whole source frames are preferably compressed by an encoder and sent to the remote device with minimized latency. To achieve this task, the encoder must operate on a restricted budget of resources to ensure the frames reach the remote device on time. If the source frame rate fluctuates and stuttering occurs, valuable compression resources would be wasted on compressing an identical frame. This may result in poorer image quality in the encoded frames than might otherwise be achieved if the compression resources were more efficiently utilized. Furthermore, if identical frames are streamed over the network, limited network bandwidth is wasted on unnecessary frames.
  • It is within this context that aspects of the present disclosure arise.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The teachings of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
  • FIG. 1 is a flow diagram of an example of processing graphics and scanning out the graphics to a display device.
  • FIG. 2 is a schematic diagram of an example output frame.
  • FIG. 3 is a flow diagram of an example of processing graphics and scanning out the graphics to an encoder for streaming the graphics in real-time.
  • FIG. 4 is a flow diagram of an example method of frame rate compensation according to aspects of the present disclosure.
  • FIG. 5 is a block diagram of an example system according to aspects of the present disclosure.
  • FIG. 6A is a schematic diagram of an example terminal system architecture functioning as a video source.
  • FIG. 6B is an example host system and capture card architecture which may capture and compress video frames from the video source.
  • FIG. 7 is a schematic diagram of an example video capture card design having a specialized processing unit.
  • DETAILED DESCRIPTION Introduction
  • Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the invention. Accordingly, the exemplary embodiments of the disclosure described below are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention.
  • It is noted that certain aspects of the present disclosure relate to video transfer, including rendering and scanning out video frames for transfer over a video interface (sometimes referred to herein as a display interface), as well as video streaming to remote devices, including compression and transmission of video frames for cloud gaming implementations. Further illustrative details and examples of these aspects may be found in U.S. Non-Provisional patent application Ser. No. 14/135,374, to Roelof Roderick Colenbrander, entitled “VIDEO LATENCY REDUCTION”, (Attorney Docket No. SCEA13037US00), filed Dec. 19, 2013, the entire contents of which are herein incorporated by reference. It is noted that certain implementations of the present disclosure may be configured in accordance with various systems and methods described in that incorporated document.
  • Various aspects of the present disclosure relate to systems and methods configured to adjust the timing of compression to better match the frame rate at which source content is rendered by a processing unit. In certain implementations, this may be accomplished by adjusting the timing of frame scanout in response to detected fluctuations in the source frame rate. For example, a vertical blanking interval generated during scanout of frames from a frame buffer may be adjusted in response to detected changes in the frame rate at which the source content is generated. In certain implementations, other techniques may be used to adjust or avoid compressing or streaming duplicate frames, and rendered graphics may be streamed over a network for display in real-time on a remote device.
  • Details
  • To better illustrate certain aspects of the present disclosure, an illustrative example of a technique for processing and transferring graphics to a display device in real-time is depicted in FIG. 1. The example depicted in FIG. 1 may have certain similarities to conventional techniques of transferring video frames to a local display device that utilizes a regular or fixed refresh rate.
  • In the example depicted in FIG. 1, graphics may be rendered, as indicated at 104, by a processing unit in order to generate a plurality of source frames 102 in sequence. By way of example, the source frames 102 may be rendered based on the state of an application, such as a video game, that determines the content of the source frames 102. The source frame rate 106, which defines the rate at which new source frames 102 are rendered, may be variable and contain one or more fluctuations over time based on, e.g., the complexity of the graphics or amount of detail in the source frames being rendered at that particular moment of time. In certain implementations, the processing unit which renders the source frames may be a GPU that contains a specialized architecture tailored to the task of processing graphics and rendering new source frames 102.
  • Rendering the source frames, as indicated at 104, may include a number of different steps depending on the configuration of the rendering pipeline, which may culminate in rendering the finished source frames 102 into a frame buffer 108, a portion of memory which temporarily stores each new source frame in sequence. Each source frame 102 may be stored in the frame buffer 108 as an image defined by an array of pixel data values which define the visual values associated with that particular frame.
  • During the process of rendering the source frames 102 into the frame buffer 108, the frame buffer contents may also be scanned out, as indicated at 114, as a sequence of output frames 118 and transferred to a display device 116 in sequence over a video interface connection, such as HDMI, DVI, VGA, or another suitable interface standard. During this process, the scanout unit may generate a vertical blanking interval at the end of each output frame 118, as well as various other external signals to govern the process of transferring the graphics frames from frame buffer 108 to display device 116. As such, each output frame 118 may be understood to contain not only the visible pixel values of the source frames 102, but also invisible external signals that are used to govern the timing and synchronize the transfer of the frames to display device 116.
  • The display device 116 may periodically update the image that is presented on its screen at a fixed refresh rate 122, utilizing the vertical blanking signal and/or various external signals associated with each output frame 118 to resolve the pixel data that is received from the frame buffer 108 and present only those pixel values associated with the image contents from the frame buffer. As such, in the example depicted in FIG. 1, the vertical blanking interval that is defined at the boundary between each output frame may be timed at a fixed frequency that coincides with the refresh rate 122 of the display.
  • In the example depicted in FIG. 1, to minimize or prevent tearing artifacts within each image of the output frames 118 that are transferred to the display, the frame buffer 108 may include multiple buffers, including a front buffer 110 and at least one back buffer 112. The rendering 104 of the source frames into the frame buffer may be performed in such a manner that new source frames 102 are rendered into the back buffer 112 while the front buffer 110 contains a source frame 102 that has not yet been scanned out to the display device 116. The front buffer 110 and the back buffer 112 may be swapped only after a new source frame is finished being rendered into the back buffer 112.
  • The timing of the swap of the buffers 110,112 may depend on the current configuration of the system. In the illustrated example, the swap is timed to coincide with the timing of a pulse within the vertical blanking interval (VBI) 120, thereby restricting a swap from occurring during the middle of the scanout of any particular image from the front buffer 110. However, it is noted that the system may instead be configured to swap the front and back buffer as soon as the new source frames are ready, e.g., as soon as they are finished rendering into the back buffer 112. In these instances, tearing artifacts may still be reduced, but may not be completely eliminated since it is still possible for the buffers to be swapped in the middle of scan out of a particular source image from the frame buffer 108.
  • It is noted that, in the example depicted in FIG. 1, the vertical blanking interval 120 may be restricted to occur at regular, isochronous intervals to ensure proper transfer of the frames in sync with the fixed refresh rate 122 of the display hardware.
  • Turning to FIG. 2, an illustrative example of an output frame 216 is depicted. To better appreciate various aspects of the present disclosure, it is beneficial to discuss in more detail how frames are transferred from a frame buffer through a video interface/display interface, such as HDMI, DVI, VGA, DisplayPort, and the like. The output frame 216 illustrated in FIG. 2 is a visual depiction of the content of each output frame that is scanned out of a frame buffer, e.g., as shown in FIG. 1, for transfer to a display device or other destination. The example output frame 216 may be one frame in a sequence of similarly formatted frames which collectively make up a video stream. The video frame sequence may be generated by some video content source, such as a video game application, video file, live stream, and the like.
  • As shown in FIG. 2, the output frame 216 may be made up of an array of pixels, which can be represented by a corresponding array of pixel data values. The output frame 216 may also be transmitted with additional signals external to the pixel data values, as described below.
  • Each pixel data value in the array may include a plurality of color space components depending on the particular color model used. For example, each pixel data value in the array may include two chroma (color) values and a luma (intensity) value for the corresponding pixel if a YCrCb (digital video) or YPbPr (analog video) color space is used. Alternatively, an RGB color space, or some other set of color space components, may be used for the pixel data of each pixel. Moreover, the pixel data values for each color space component of each pixel may be digitally represented by a plurality of bits. For example, a 24-bit color depth may utilize 8-bits per color space component per pixel. The term “pixel” is sometimes used herein as shorthand when referring to that portion of an output frame that corresponds to a single tick of a pixel clock.
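  • As a small illustration of the 24-bit color depth example above, the following C helpers pack and unpack three 8-bit color space components per pixel; the packing order shown is just one common convention:

        /* Illustration of the 24-bit color depth example: three 8-bit color space
         * components per pixel, packed here in one common RGB ordering. */
        #include <stdint.h>

        static inline uint32_t pack_rgb24(uint8_t r, uint8_t g, uint8_t b)
        {
            return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
        }

        static inline uint8_t red_of(uint32_t px)   { return (px >> 16) & 0xFFu; }
        static inline uint8_t green_of(uint32_t px) { return (px >> 8)  & 0xFFu; }
        static inline uint8_t blue_of(uint32_t px)  { return px & 0xFFu; }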
  • An output frame may also include external signals in addition to the pixel data values. The external signals may include a signal having information indicating whether the pixel is visible, e.g., a data enable signal indicating whether the pixel is meant to be displayed and therefore has a visible pixel contained in the pixel data values for that pixel. As can be seen in the example output frame 216 of FIG. 2, the total number of pixels in the array includes both visible pixels 201 (illustrated in the figure as a grid), and invisible pixels 203 (illustrated in the figure as the blank and lined regions). In the example frame of FIG. 2, the visible pixels 201 make up the active image region of the output frame, which may be indicated by a high data enable value, and the invisible pixels 203 make up the blanking region of the output frame in this example (e.g., including both horizontal and vertical blanking regions), which may be indicated by a low data enable value. The visible pixels 201 in the active image region may collectively make up the visible image of the frame that is meant to be displayed. In particular, in this example the active region having visible pixels 201 is made up of the source frame content and the corresponding pixel values which are retrieved from the frame buffer during the scan out of the frame.
  • It is noted that in the example output frame 216 of FIG. 2, the visible pixels of the video frame image coincide with the active region of the frame's format. However, it is noted that the present disclosure is not limited to this situation, and certain implementations of the present disclosure may actually include invisible pixels that are not only in the blanking region, but are also within the active image region of the output frame that is retrieved from the frame buffer. For example, as described in U.S. application Ser. No. 14/135,374, entitled “VIDEO LATENCY REDUCTION” and fully incorporated by reference herein, the active image region of an output frame may include invisible pixels if the GPU is configured to render source frames to the frame buffer in a larger format that contains more pixels than the actual source content, such as, e.g., in accordance with the techniques described with reference to FIGS. 7A-8 of that application Ser. No. 14/135,374.
  • Most devices, e.g., consoles, PCs, phones, and other video sources, render video frames to a frame buffer organized in RGB pixels with typically at least 8-bit/1-byte per color component. The video signal generated by a video transmitter, which may be part of the GPU that renders the video frames or which may be external to the GPU, may transport pixels in RGB, but it can do so in other color space models, such as YCrCb (digital video) or YPbPr (analog video), in case something (e.g., the transmitter) has to convert from RGB to the other format.
  • Once a GPU has completed rendering, it may scan the frame out, which is the process of sending the frame pixel by pixel over some serial connection (e.g., HDMI, DVI, etc.). The scanout process may involve the generation of the external signals of each output frame 216, and the scanout process may depend in part on the type of video connection, as well as on whether the video transmitter is inside the GPU or outside it. In general, the GPU may generate a plurality of signals when scanning the frame out, including signals external to the pixel data values. Generally speaking, these signals may be understood to be separate signals that occur simultaneously with each other during the scanout and transfer of the frame. The signals generated when scanning the frame out may include the following (a simple model of these signals is sketched after this list):
      • pixel clock signal
      • data enable signal
      • horizontal synchronization (hsync) signal
      • vertical synchronization (vsync) signal
      • data bus signals that carry color space components for active pixels (e.g., RGB, 24-bit wide, with 8-bits per color space component)
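  • By way of a hedged illustration only, the set of signals present at a single tick of the pixel clock might be modeled with a structure such as the following; the type and field names are assumptions made for this sketch and do not correspond to any particular hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the signals present at one tick of the pixel
 * clock during scanout.  Field names are examples only. */
typedef struct {
    bool     data_enable;   /* high while a visible (active) pixel is on the bus */
    bool     hsync;         /* horizontal synchronization pulse                  */
    bool     vsync;         /* vertical synchronization pulse                    */
    uint32_t pixel_data;    /* 24-bit color data, e.g., 0x00RRGGBB               */
} scanout_tick;
```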
  • During scanout of the rendered frame, the GPU may retrieve the pixels from the frame buffer that holds the completed source frame. As an example, say the GPU is currently at the first pixel of a line. For the given line, it will place a new pixel data value on the data bus signals at each “tick” of the pixel clock signal. It will also output a high level on the data enable signal corresponding to that pixel.
  • At the end of the line there is a horizontal blanking period (of duration HTOTAL−HDISPLAY pixels, or pixel clock pulses). During the blanking period several signals change. First of all, a pulse may be generated in the hsync signal to indicate a transition to the next line. The data enable signal is made low, which means that any data currently on the data bus signals that ordinarily carry color space components should not be interpreted as visible pixels (these are the invisible pixels at the end of the line).
  • This process may continue line by line until the end of the frame image. At the end of the output frame, after the full source frame image has been retrieved, e.g., after all the visible pixel data values have been retrieved from the frame buffer, a pulse may be generated in the vsync signal within the vertical blanking region of the output frame. In particular, this interval of time during the scanout process, at the end of the output frame and after the full source frame image has been retrieved from the frame buffer, is known as the vertical blanking interval 209. For any of the invisible lines during the vertical blanking interval at the end of the output frame 216, the data enable line is also low.
  • Generally, whenever the data enable signal is low, the pixel is invisible, and the pixel data values of the data bus signal do not contain the desired color space values which correspond to the display region of the image. Since there is always an active pixel clock, invisible pixels are essentially generated on the data bus signal. It is noted that horizontal and vertical synchronization signals are separate from the pixel data of the data bus signal.
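  • The relationship between a tick's position within an output frame and the levels of the data enable, hsync, and vsync signals described above can be sketched as follows. This is a minimal illustration assuming example 1280×720 timing constants and a simple placement of the sync pulses between the front and back porches; the constants and the function signals_at_tick are assumptions, not the actual scanout hardware.

```c
#include <stdbool.h>

/* Illustrative frame timing (example values, not taken from the disclosure). */
enum {
    HDISPLAY = 1280, HSYNC_START = 1390, HSYNC_END = 1430, HTOTAL = 1650,
    VDISPLAY = 720,  VSYNC_START = 725,  VSYNC_END = 730,  VTOTAL = 750
};

/* For a pixel clock tick at column x (0..HTOTAL-1) and line y (0..VTOTAL-1),
 * decide the levels of the external signals. */
static void signals_at_tick(int x, int y,
                            bool *data_enable, bool *hsync, bool *vsync)
{
    /* Data enable is high only inside the active display region. */
    *data_enable = (x < HDISPLAY) && (y < VDISPLAY);

    /* The hsync pulse sits between the horizontal front and back porch. */
    *hsync = (x >= HSYNC_START) && (x < HSYNC_END);

    /* The vsync pulse spans a few lines inside the vertical blanking interval. */
    *vsync = (y >= VSYNC_START) && (y < VSYNC_END);
}
```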
  • The process of transmitting video signals, e.g., made up of output frames 216, over a serial interface may depend on the video technology. For classic VGA, the described signals are consumed directly by the monitor, including the pixel data signal and the external signals associated with the output frame. The external signals may include timing signals directly used for VGA. The pixel data signals may be analog signals in which each color component has its own channel, e.g., a red signal channel, a green signal channel, and a blue signal channel. A Digital to Analog Converter (DAC) may generate the analog pixel signal from the digital data bus signals (from the described 24-bit, 8-bit-per-channel data). For other technologies like DVI, HDMI, or DisplayPort, a transmitter may accept the above-described signals and convert them to a signal appropriate for that technology. In the case of HDMI, the HDMI transmitter has three TMDS data channels (TX0 to TX2) and a TMDS clock; the HDMI transmitter at the video source embeds all of the signals (the hsync signal, the vsync signal, and the pixel data bus signal) in the data channels, and the TMDS clock carries the pixel clock signal in some way. The HDMI receiver on the other end of the HDMI connection (e.g., HDMI cable) inside the video sink (e.g., the HDMI receiver inside the display device, video capture card, or other video sink) takes these channels as inputs and recovers the hsync, vsync, data, and other signals. This is also true for other video standards like DVI or DisplayPort.
  • If the video transmitter is internal to the GPU, the scanout logic may operate on the described signals, but the scanout logic may also output, e.g., HDMI directly, bypassing the intermediate step of generating these other signals.
  • The pixel data values and other signals associated with each pixel for the output frame are typically output line by line, with each line containing a plurality of pixels and each frame containing a plurality of lines. Normally, these lines are horizontally oriented relative to the image that is displayed, and the pixels in a horizontal line may be transferred in sequence, e.g., serial transfer, from left to right in the line through a video communication interface from the video source to the display device or other video sink device. Similarly, the horizontal lines may be output in sequence from top to bottom until the end of the frame is reached. Accordingly, all the pixels in the output frame 216, including both visible 201 and invisible pixels 203, may have a defined sequence for transfer, and the lines during the vertical blanking interval 209 may be located at the end of this sequence for each output frame 216.
  • In the example output frame 216 of FIG. 2, each line of the frame is a horizontal line having a total number of pixels HTOTAL, which may define the total horizontal resolution of the output frame. Similarly, the example output frame 216 has a total number of lines VTOTAL, which may define the total vertical resolution of the frame. Thus, the total horizontal and vertical resolution includes both visible and invisible pixels.
  • The active display region 201 of the frame, e.g., the region retrieved from the frame buffer, may include a plurality of active lines VDISPLAY defining the vertical display resolution of the frame, and each active line may include a plurality of active pixels HDISPLAY, which defines the horizontal display resolution of the frame. The active display region 201 may correspond to that source frame that is rendered by a GPU into the frame buffer as described above.
  • It can be appreciated from the foregoing that the total resolution (e.g., HTOTAL×VTOTAL) of the output frame 216 may be greater than the display resolution (e.g., HDISPLAY×VDISPLAY) of the output frame, due to the presence of the blanking region and invisible pixels 203 in the frame, which may be generated during the scan out of the source frame, e.g., as described above. Specifically, the active display region corresponds to those pixels retrieved from the frame buffer, while the blanking region refers to those pixels generated due to the addition of external signals and extra ticks of the pixel clock generated during scanout. The blanking region may include a plurality of invisible pixels at the end of each line, corresponding to the horizontal blanking interval, and a plurality of invisible lines at the end of each frame, corresponding to the vertical blanking interval 209. Generally speaking, the synchronization pulses in the blanking regions may be provided to synchronize the video stream transfer between the video source and a display, with the horizontal synchronization pulses 205 within the hsync signal generally indicating the transitions between each line in the frame, and the vertical synchronization pulses 207 generally indicating the transitions between each frame in the sequence of output frames that makes up the video stream. While the hsync and vsync signals are external signals that are not part of the pixel data, e.g., RGB values and the like, because the GPU always outputs the pixel data and synchronization signals on the pixel clock, there happen to be invisible pixels in the pixel data bus signal during the period when pulses on the hsync or vsync lines are active. Likewise, the hsync and vsync signals may be inactive during the period corresponding to the visible pixel values on the pixel data bus signal. In the case of HDMI, hsync and vsync are actually transported in the pixel data; after transport over an HDMI cable, the HDMI receiver separates the signals again.
  • As can be seen in the example of FIG. 2, transferring the pixels of the frame 216 in sequence, e.g., pixel by pixel, will result in the pixels corresponding to the horizontal blanking interval 211 being transferred at the end of each line, and the pixels corresponding to the vertical blanking interval 209 being transferred at the end of the frame 216. The horizontal synchronization signal may include horizontal synchronization pulse 205 during the horizontal blanking interval 211, with a corresponding horizontal front porch and horizontal back porch (illustrated in the diagram as blank regions before and after the horizontal synchronization pulse 205, respectively), and the vertical synchronization signal may include a vertical synchronization pulse 207 with a corresponding vertical front porch and vertical back porch (illustrated in the diagram as blank regions before and after the vertical synchronization pulse 207, respectively), and these pixels may collectively make up the invisible pixels 203 in the example output frame 216.
  • With reference to the example of FIG. 2, a video signal may be made up of a plurality of output frames similar to the example illustrated in FIG. 2, and output frames may be transferred from a video source to a display device, video capture device, or other video sink device through a video interface, such as HDMI, DVI, VGA, and the like. Generally speaking, a refresh rate (e.g., VRefresh) of display hardware should correspond to the rate at which frames are scanned out of the frame buffer in an output video stream. As a consequence, the timing of the transfer of pixels through an interface between a video source and a display device should be synchronized in order to ensure that the rate of transfer of the pixels in the video stream is synchronized with the display and keeps up with the display refresh rate.
  • Typically, a pixel clock, which may be an external signal generated by electronics or other components embodied in the video transfer hardware and which may be generated in association with the scan out of the frames as described above, governs the timing for the transfer of each pixel between video source and video sink. Generally speaking, the pixel clock will control the timing of the transfer of pixels so that the total number of pixels within each frame is transferred from the video source at a rate that is in sync with the refresh rate of the display device. For a serial interface in which pixels are transferred sequentially, one after another, the pixel clock may be mathematically expressed as a product of the total number of pixels within each line, the total number of lines within each frame, and the vertical refresh rate as follows:

  • Pixel Clock = HTOTAL * VTOTAL * VRefresh
  • Standard video interfaces typically support different display resolutions (e.g., HDISPLAY×VDISPLAY), such as 720p (1280×720), 1080p (1920×1080), and the like, which each have a different total number of pixels for each frame. A pixel clock generator, which may be embodied in the video transfer hardware, may be configured to generate a pixel clock for a given video resolution and/or frame rate based on a formula similar to the mathematical expression shown above, e.g., based on the refresh rate and the resolution of each frame. It is noted that the upper bounds of the pixel clock may be limited due to practical considerations and technical requirements of the electronics and components involved, as well as a practical limit to the frequency at which the pixel clock may be accurately maintained. For example, display manufacturers typically want to keep the pixel clock as low as possible because the higher it is, the more it complicates the design of the electronics and the component costs. As a result, with reference to FIG. 2, the number of active lines (VDISPLAY) and total number of lines (VTOTAL) in a frame are typically close in value because only a small number of lines are required for the vertical synchronization signal, and conventional wisdom generally dictates that utilizing more lines than necessary is undesirable.
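  • To make the relationship between total resolution, refresh rate, and pixel clock concrete, the following sketch evaluates the formula above for one example timing; the specific HTOTAL and VTOTAL values are illustrative assumptions, as is the function name pixel_clock_hz.

```c
#include <stdio.h>

/* Pixel Clock = HTOTAL * VTOTAL * VRefresh (result in Hz). */
static double pixel_clock_hz(int htotal, int vtotal, double vrefresh)
{
    return (double)htotal * (double)vtotal * vrefresh;
}

int main(void)
{
    /* Example 720p-style timing: 1650 total pixels per line,
     * 750 total lines per frame, 60 Hz refresh. */
    double clk = pixel_clock_hz(1650, 750, 60.0);
    printf("pixel clock: %.2f MHz\n", clk / 1e6);   /* prints ~74.25 MHz */
    return 0;
}
```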
  • Various implementations of the present disclosure may incorporate techniques for decreasing the time to transfer an output video frame by artificially increasing the total number of pixels, i.e., ticks of a pixel clock, in a frame beyond what is needed to encompass the visible pixel data and/or synchronization signals within each output frame. As a result, a pixel clock rate may be increased to output the greater number of total pixels, causing the desired visible pixels embodying the visible video frame image within the active region of the output frame to be transferred in less time. In some implementations, this may be accomplished by increasing the number of lines at the end of each frame's sequence or otherwise putting the output frames in some frame format that has a greater number of total pixels than the source frame image.
  • By way of example, and not by way of limitation, for the example output video frame 216 depicted in FIG. 2, assuming a 60 Hz refresh rate (VRefresh), all the visible image lines of an output frame will have been output after (VDISPLAY/VTOTAL)*(1/60 Hz), e.g., based on the ratio of the active display lines to the total number of lines in the frame. Where VDISPLAY and VTOTAL are close in value, the time to output the image 201 within the frame would be roughly 1/(60 Hz), which corresponds to about 16.7 ms. In accordance with various aspects of the present disclosure, this time to output the visible image lines of a frame may be reduced by making VTOTAL significantly larger than VDISPLAY. Thus, for the same 60 Hz vertical refresh rate mentioned above, if VTOTAL is twice the size of VDISPLAY, that is, the total number of lines within a frame is double the number of visible/active lines within the frame, the transfer time is reduced to about 8.3 ms, since the desired image within the frame would be transferred after VDISPLAY/VTOTAL=0.5 of the 16.7 ms frame period. VDISPLAY/VTOTAL may be made smaller, for example, by adding lines to the frame in some fashion. Further examples of techniques for reducing video transfer latency and forming output frames having artificially increased numbers of pixels are described in U.S. application Ser. No. 14/135,374, entitled “VIDEO LATENCY REDUCTION” and fully incorporated by reference herein. It is noted that implementations of the present disclosure may utilize any of the techniques for forming output frames described in that document.
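  • The arithmetic in the preceding example can be expressed as a short sketch; the timing values and the function name active_transfer_time are illustrative assumptions.

```c
#include <stdio.h>

/* Time (in seconds) to output the visible image lines of one frame:
 * (VDISPLAY / VTOTAL) * (1 / VRefresh). */
static double active_transfer_time(int vdisplay, int vtotal, double vrefresh)
{
    return ((double)vdisplay / (double)vtotal) / vrefresh;
}

int main(void)
{
    /* VDISPLAY close to VTOTAL: close to the full ~16.7 ms refresh period. */
    printf("%.1f ms\n", active_transfer_time(1080, 1125, 60.0) * 1e3);  /* ~16.0 ms */
    /* VTOTAL doubled relative to VDISPLAY: roughly half, ~8.3 ms. */
    printf("%.1f ms\n", active_transfer_time(1080, 2160, 60.0) * 1e3);  /* ~8.3 ms */
    return 0;
}
```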
  • Turning now to FIG. 3, an illustrative example of a technique for processing and transferring graphics in real-time is depicted. Unlike the example depicted in FIG. 1, the example depicted in FIG. 3 involves scanning out the rendered frames to a video capture unit instead of directly to a display device that refreshes at fixed intervals. In the example depicted in FIG. 3, captured frames may be further compressed using a video encoder so that, e.g., rendered graphics may be transmitted over a network to a remote device for display in real-time.
  • Turning to the example depicted in FIG. 3 in more detail, graphics may be rendered, as indicated at 304, by a processing unit in order to generate a plurality of source frames 302 in sequence. By way of example, and not by way of limitation, the source frames 302 may be rendered based on the state of an application, such as a video game, that determines the content of the source frames 302.
  • The frame rate 306 of the source content, which defines the rate at which new source frames 302 are rendered, may be variable and contain one or more fluctuations over time based on a variety of factors, such as the complexity of the scene currently being rendered. In certain implementations, the processing unit which renders the source frames may be a GPU that contains a specialized architecture tailored to the task of processing graphics and rendering new source frames 302.
  • Rendering the source frames, as indicated at 304, may include a number of different steps depending on the configuration of the rendering pipeline, which may culminate in rendering the finished source frames 302 into a frame buffer 308. Each source frame 302 may be stored in the frame buffer 308 in sequence as an image defined by an array of pixel data values, e.g., a bitmap image (bmp), which define the visual values associated with that particular source frame.
  • During the process of rendering the source frames 302 into the frame buffer 308, the frame buffer contents may also be scanned out, as indicated at 314, as a sequence of output frames 318 and transferred to a video capture unit 324 in sequence over a video interface connection, such as HDMI, DVI, VGA, or another suitable display interface standard. In certain implementations, a video capture card or other device may be used for the frame capture 324, and the video capture unit may be configured to capture only the set of visible pixels within each output frame that correspond to the source frames rendered by the processing unit.
  • During this process, the scanout unit may generate several external signals, including a vertical synchronization signal. Generating the vertical synchronization signal may also involve the generation of one or more active vsync pulses after each source frame 302 that is scanned out of the frame buffer 308, and, as a result, the generation of a vertical blanking interval 320 between each scanned out frame. This may result in the generation of invisible lines at the end of each output frame 318, e.g., as described above with reference to the illustrated example of FIG. 2. At least a portion of the vertical blanking interval may correspond to these invisible lines between each scanned-out source frame 302.
  • The frame capture unit may capture the source frames contained within each received output frame 318 in sequence, and may utilize the vertical blanking interval and/or various external signals associated with each output frame 318 to resolve the pixel data that is received from the frame buffer 308 and capture the rendered source frames corresponding to those visible pixels. Each captured source frame 302 may then be compressed using a suitable video encoder, e.g., codec, as indicated at 326. The compressed frames 330 may then be optionally sent over a network for display on a remotely located display device.
  • In the example depicted in FIG. 3, to minimize or prevent tearing artifacts within each image of the output frames 318 that are scanned out, the frame buffer 308 may include multiple buffers, including a front buffer 310 and at least one back buffer 312. The rendering 304 of the source frames into the frame buffer may be performed in such a manner that new source frames 302 are rendered into the back buffer 312 while the front buffer 310 contains a source frame 302 that has not yet been scanned out. Generally speaking, the front buffer 310 and the back buffer 312 may be swapped only after a new source frame is finished being rendered into the back buffer 312.
  • The timing of the swap of the buffers 310, 312 may depend on the configuration of the system. In the illustrated example, the swap is timed to coincide with the timing of the vertical blanking interval (VBI) 320, thereby preventing a swap from occurring during the middle of the scanout of any particular image from the front buffer 310. More specifically, in the illustrated example, the back frame buffer 312 may be configured to swap with the front frame buffer 310 only in response to a vertical synchronization pulse that is generated during the vertical blanking interval during scanout 314 of the frames, and the vsync pulse which indicates to the processing unit that the buffers may be swapped may occur at or near the beginning of the vertical blanking interval. As a result, each output frame 318 may contain only whole source frames 302. If, alternatively, the system is configured to swap the front and back buffers as soon as the new source frames are ready, e.g., as soon as they are finished rendering into the back buffer 312, tearing may not be completely eliminated, since it is still possible for the buffers to be swapped in the middle of scan out of a particular source image from the frame buffer 308. In these instances, the source frames within the output frame 318 that is scanned out may actually contain portions of consecutive rendered source frames.
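  • A minimal sketch of the swap-on-vsync policy described above, assuming a simple two-buffer arrangement, is shown below; the type and function names (frame_buffer, on_vsync) are assumptions made for illustration.

```c
#include <stdbool.h>

/* Illustrative double-buffered frame buffer. */
typedef struct {
    void *front;        /* buffer currently being scanned out          */
    void *back;         /* buffer the GPU renders new source frames to */
    bool  back_ready;   /* set when a new source frame finishes        */
} frame_buffer;

/* Called when a vertical synchronization pulse is generated during the
 * vertical blanking interval.  Swapping only here prevents a swap from
 * occurring in the middle of the scanout of a frame, avoiding tearing. */
static void on_vsync(frame_buffer *fb)
{
    if (fb->back_ready) {
        void *tmp = fb->front;
        fb->front = fb->back;
        fb->back  = tmp;
        fb->back_ready = false;
    }
    /* If the back buffer is not ready, the previous frame is scanned out again. */
}
```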
  • In accordance with certain aspects of the present disclosure, the frame rate 306 of the source content may be detected, and the vertical blanking interval 320 may be adjusted in response to fluctuations in the frame rate, thereby better matching the compression rate and timing 328 to the source content. For example, if it is detected that the source frame rate 306 momentarily drops, e.g., due to the complexity of the scene, the vertical blanking interval 320 may be adjusted to delay scanout of one or more frames in response. This may be beneficial to avoid stuttering or other drawbacks associated with an instantaneous frame rate that is below normal, better matching the rate and timing of the compression and streaming to the source content. Conversely, if it is detected that the source frame rate 306 is high, the vertical blanking interval 320 may be adjusted to scan frames out of the frame buffer 308 more quickly. This may be beneficial, for example, when an encoder is operating on a fixed budget per frame. If the encoder receives the frames sooner, it may be able to compress the frames at a higher resolution, thereby improving image quality for the remote viewer.
  • Turning now to FIG. 4, an illustrative example of a method 400 for adjusting a vertical blanking interval in response to detected fluctuations in the frame rate of source content is depicted. The illustrative method of FIG. 4 may involve a graphics processing and transfer technique that is similar to the example depicted in FIG. 3.
  • As shown in FIG. 4, the method 400 may include rendering a plurality of source frames in sequence into a frame buffer with a processing unit, as indicated at 431. Each new source frame may be rendered in response to one or more corresponding draw calls received based on the output of an application 433. By way of example, and not by way of limitation, the processing unit for rendering the source frames may be a GPU having a specialized role of processing graphics for the application, while the application itself may be executed by a CPU. A graphics application programming interface (API) may coordinate draw calls from the CPU to the GPU in order to coordinate the processing tasks and initiate the generation of new source frames according to the state of the application.
  • As indicated at 432, the method may include scanning out a plurality of output frames in sequence from a frame buffer. In certain implementations, the frame buffer may include both a front buffer and one or more back buffers, and the frames may be scanned out directly from the front buffer while new source frames are being rendered into the back buffer. The scanout of the frames, as indicated at 432, may involve the generation of various external signals in addition to the pixel data that is scanned out of the frame buffer, including a vertical synchronization signal, as well as, e.g., a horizontal synchronization signal, pixel clock signal, and data enable signal as described above. The frames may be scanned out with a scanout unit that may include various components configured to generate the signals described above associated with the transfer of frames.
  • Each output frame that is scanned out of the frame buffer may include an image corresponding to a source frame rendered by the processing unit. As indicated at 434, the method may include capturing the source frames scanned out of the frame buffer. In certain implementations, the frames may be scanned out of the frame buffer and sent through a video interface connection to a video capture unit, which may capture the source content in step 434.
  • The method 400 may also include compressing the captured frames using an encoder, e.g., a video codec. The frames may be compressed in any of a variety of known video compression formats, such as H.264 or another format suitable for transmission over a network having limited bandwidth. In certain implementations, the frames may be encoded using a low latency encoder for transfer to a remote device in real-time. The compressed frames may then be transmitted to one or more remote devices over a network, such as the Internet, as indicated at 438.
  • The illustrative method 400 depicted in FIG. 4 may also compensate for frame rate fluctuations in the rendering of source content in accordance with certain aspects of the present disclosure. This may include detecting one or more changes or fluctuations in the frame rate of the source content, as indicated at 440. In response to one or more fluctuations in the frame rate of the source content, a vertical blanking interval that is generated as part of the scanout process 432 may be adjusted to compensate for the frame rate fluctuations, as indicated at 442.
  • It is noted that by adjusting the vertical blanking interval in response to detected changes in the frame rate, the timing and rate of compression 436 and streaming 438 may be better matched to the rate at which the source content is generated at 431. In certain implementations, the method may involve delaying a scanout of one or more frames in response to one or more downward fluctuations in the frame rate, in which the speed at which the source content is generated momentarily decreases. In other implementations, the method may involve speeding up the rate at which frames are scanned out of the frame buffer in response to one or more upward fluctuations in the frame rate, in which the speed at which the source content is generated increases. In yet further implementations, the method may involve both, depending on the nature of the fluctuations in the source frame rate at different periods of time during the rendering process.
  • It is noted that the detection of the frame rate at 440 and the adjustment of the vertical blanking interval at 442 may be performed in a variety of ways according to aspects of the present disclosure.
  • In certain implementations, the frame rate may be detected, as indicated at 440, by placing a marker into memory after each new source frame is finished rendering and tracking the timing between each marker. For example, a scanout unit and a processing unit, e.g., a GPU, may be controlled by a graphics driver (sometimes known as a “display driver” or “GPU driver”) of a system that renders the source graphics. An application, which may optionally be implemented by a separate processing unit, e.g., a CPU, may send drawing commands (e.g., draw calls) to the graphics driver, and the GPU may render the source frames to a frame buffer in response. When the frame is ready, the application may place a marker in a memory buffer, e.g., the back frame buffer, which notifies the GPU that the frame is ready. The graphics driver may track the time since the last “frame ready” marker, and the time between markers may be indicative of the frame rate of the source content. If the frame ready marker has not been received by some deadline, e.g., if the time between consecutive markers exceeds some pre-defined time threshold, then the graphics driver may delay scanout for the next frame. Accordingly, the graphics driver may be configured to track the time between each source frame that is finished rendering into the frame buffer, and whenever the time exceeds some pre-defined threshold, it may delay scanout of a subsequent frame for a finite period of time.
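  • A minimal sketch of the marker-tracking idea described above follows, assuming the driver can timestamp frame-ready markers in microseconds; the names frame_rate_tracker, on_frame_ready_marker, and should_delay_scanout are assumptions made for illustration, not an actual driver interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative driver-side state for tracking "frame ready" markers. */
typedef struct {
    uint64_t last_marker_us;   /* timestamp of the last frame-ready marker   */
    uint64_t threshold_us;     /* pre-defined time threshold between markers */
} frame_rate_tracker;

/* Called when the application places a frame-ready marker in the memory buffer. */
static void on_frame_ready_marker(frame_rate_tracker *t, uint64_t now_us)
{
    t->last_marker_us = now_us;
}

/* Called before scanning out the next frame: if the marker is overdue,
 * the driver may delay scanout of the subsequent frame. */
static bool should_delay_scanout(const frame_rate_tracker *t, uint64_t now_us)
{
    return (now_us - t->last_marker_us) > t->threshold_us;
}
```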
  • According to certain aspects of the present disclosure, the graphics driver may modify registers on the GPU (e.g., over the PCI-Express bus) to adjust GPU state. In addition, the graphics driver may also send commands to the GPU, which is often done by sending one or more commands to the GPU using a command buffer. The main difference is that register access is synchronous (blocking), while sending commands through buffers is asynchronous. In the past everything was done using registers, which was typically slow, so presently registers are used mostly for configuration purposes.
  • In certain implementations of the present disclosure, in order to adjust the vertical blanking interval, as indicated at 442, the graphics driver may adjust a GPU register or send a command to the GPU through a command buffer that would delay the scanout. In certain implementations, the driver could send a command to put the scanout unit to sleep for a finite period of time, e.g., power down the scanout unit to delay scanout for a finite period of time. In certain implementations, the vertical blanking interval may be adjusted by maintaining an active vsync signal for a longer period of time between scanouts in response to a slower frame rate, e.g., a detected downward fluctuation in the frame rate. For example, the graphics driver may generate dummy lines at the end of each frame, e.g., by maintaining an active vertical synchronization signal, until a frame ready marker is received. If the timing of the vertical blanking interval is synchronized to the timing of the frame ready markers, then the rate at which frames are scanned out to a video capture unit and/or encoder may be better synchronized to the frame rate of the source content. This may result in a dynamic vertical blanking interval generated by the scanout unit having a length that varies with the frame rate. The net effect may be that the vertical blanking interval between scanned out frames is longer or shorter in response to changes in the frame rate of the source content.
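  • One way the vertical blanking interval might be extended with dummy lines until a frame-ready marker arrives is sketched below; the callbacks standing in for driver and hardware facilities are hypothetical placeholders, and the loop structure is an assumption made for illustration.

```c
#include <stdbool.h>

/* Extend the vertical blanking interval by generating dummy (invisible)
 * lines until the next source frame is ready, up to some limit.  The
 * callbacks are hypothetical placeholders for driver/hardware facilities. */
static void extend_vblank_until_frame_ready(bool (*frame_ready)(void),
                                            void (*emit_blanking_line)(void),
                                            int max_extra_lines)
{
    int extra = 0;
    while (!frame_ready() && extra < max_extra_lines) {
        emit_blanking_line();   /* one more invisible line in the vblank region */
        ++extra;
    }
    /* Net effect: the vertical blanking interval is longer when the source
     * frame rate momentarily drops, and shorter when it recovers. */
}
```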
  • It is noted that, in some instances, there may be a threshold at which a frame may need to be regenerated in order to maintain some activity. This threshold time may be between 0.5T and 2T, where T = 1/F, with F being a standard frame rate, e.g., 60 frames/sec.
  • It is noted that the implementation depicted in FIG. 4 is provided for purposes of illustration only, and implementations of the present disclosure include other techniques for compensating for frame rate beyond adjusting the vertical blanking interval as shown in FIG. 4.
  • In certain implementations, the rate of compression and streaming may be adjusted after the scanout phase in response to fluctuations in the source frame rate. For example, frames may be scanned out of the frame buffer and transferred using a display interface, which may include a video transmitter and receiver. When the frame rate fluctuates downward, one or more components of the display interface may be momentarily disabled to prevent duplicate frames from being received by the encoder, thereby preserving compression resources. For example, the video transmitter may be momentarily disabled to prevent transfer of one or more frames to the encoder. It is noted that, in contrast to some of the implementations described above, this technique would only work to decrease the compression and/or streaming rate, not increase it.
  • It is noted that many of the techniques described above may be particularly useful in implementations involving real-time streaming where the graphics content is generated by a first device and streamed by a separate device. For example, a cloud gaming implementation may involve source graphics content which is generated by a gaming console or other video gaming system, and the graphics frames may be scanned out to a separate streaming server system that then may capture, compress, and stream the frames to a remote client device. In these situations, the encoder may have no way of adjusting to the frame rate of the source content, since it is on a separate system and receives frames captured after scanout. Thus, adjusting the scanout and the vertical blanking interval generated during scanout in response to detected changes in the graphics source's frame rate may better match the timing of the frames received by the separate system.
  • However, certain implementations may involve an encoder and a streaming unit, such as streaming software, which operate on the same device as the graphics source. In these examples, it may be possible to perform the scanout in a conventional manner, and configure the encoder and/or the streaming unit to omit certain frames in response to detected changes in the frame rate of the source content. By way of example, and not by way of limitation, if the system detects that the source frame rate has momentarily dropped, the system may be configured to forgo the encoding of one or more frames received during scanout in response, e.g., to preserve compression resources. By way of further example, if the system detects that the source frame rate has momentarily dropped, the system may be configured to still encode the frame, but the streaming unit may forgo sending one or more duplicate frames in response, e.g., to preserve network bandwidth. If these units are all part of the same device as the graphics source, the graphics driver may be configured to notify the encoder and/or streaming software so that they can respond to fluctuations in the frame rate in this manner, which may not be possible when the encoding and streaming device is separate from the graphics source device.
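  • A minimal sketch of the same-device skip logic described above follows; the callbacks standing in for the encoder and the streaming unit are hypothetical placeholders, and the decision inputs are assumptions made for illustration.

```c
#include <stdbool.h>

/* Decide what to do with a captured frame when the graphics driver has
 * signaled a downward fluctuation.  The callbacks are hypothetical
 * placeholders for the encoder and the streaming unit. */
static void handle_captured_frame(const void *frame,
                                  bool duplicate_of_previous,
                                  bool skip_encode_on_duplicate,
                                  void (*encode_frame)(const void *),
                                  void (*stream_frame)(const void *))
{
    if (duplicate_of_previous) {
        if (skip_encode_on_duplicate)
            return;                  /* forgo encoding the duplicate frame  */
        encode_frame(frame);         /* encode it, but ...                  */
        return;                      /* ... forgo streaming the duplicate   */
    }
    encode_frame(frame);
    stream_frame(frame);
}
```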
  • Turning now to FIG. 5, an example system 500 is depicted to illustrate various aspects of the present disclosure. FIG. 5 provides an overview of an example hardware/software architecture of a system for generating, capturing, compressing, and streaming video frames according to various implementations of the present disclosure. The system 500 may be configured to compensate for frame rate fluctuations in rendered graphics content according to various aspects of the present disclosure. For example, the system 500 may be configured to perform a method having features in common with the method of FIG. 4.
  • Turning to FIG. 5 in more detail, system 500 may include a first computing device 550 and a second computing device 552 that are connected by a display interface 554 (sometimes also referred to herein as a “video interface”). The first computing device 550 may be a graphics source configured to generate and render graphics, and the second computing device 552 may be a streaming device configured to compress the frames and send the frames over a network 556 to a remote client device 558. The graphics source 550 may be a terminal of the streaming server 552 that is configured to scan out rendered frames to the host system 552 through a display interface connection 554, such as HDMI, VGA, DVI, and the like.
  • The graphics source device 550 may include one or more processing units 560,562 and one or more memory units 564 configured to implement various aspects of graphics processing, transfer, and frame rate compensation in accordance with the present disclosure. In the illustrated example, the one or more processing units include at least two distinct processing units, a central processing unit (CPU) 560 and a graphics processing unit (GPU) 562. The CPU 560 may be configured to implement an application, e.g., a video game, the state of which may determine the content of graphics to be output. The CPU 560 may be configured to implement one or more graphics drivers 566 to issue drawing commands to the GPU 562, as well as control scanout of frames. In response to the drawing commands issued by the graphics driver 566, the GPU 562 may be configured to render new source frames into a frame buffer 568, which may be a portion of the one or more memory units that temporarily holds each rendered source frame in sequence. In certain implementations, the frame buffer 568 may include multiple buffers, including a front buffer and one or more back buffers, and the GPU 562 may be configured to swap the buffers when it is finished rendering new source frames in the back buffer.
  • The graphics source may also include a scanout unit 570, which may be configured to scan rendered frames out of the frame buffer 568 in accordance with various aspects described above. The scanout unit may be configured to scan output frames line by line directly out of a front buffer of the frame buffer 568, as well as to generate a vertical synchronization signal and other external signals during the scanout process, e.g., as described above, and the vertical synchronization signal may be generated so as to provide a vertical blanking interval between each source frame that is retrieved from the frame buffer 568.
  • Various aspects of the scanout unit 570 and the GPU 562 may be controlled by the CPU 560 via the one or more graphics drivers 566, which may be implemented as one or more software programs that cooperate with an operating system of the graphics source 550, and which may be embodied in a non-transitory computer readable medium for execution by the CPU or other processing unit. The graphics source device 550 may be configured to detect one or more fluctuations in the frame rate of the source content rendered by the GPU 562, and the device may be configured to adjust the scanout in response to the one or more fluctuations. This may be accomplished, for example, by any of the techniques described with reference to FIG. 4.
  • In certain implementations, to detect the frame rate, the one or more processing units 560, 562 may be configured to place a marker into the one or more memory units 564 when each new source frame is rendered. For example, the one or more memory units 564 may contain a frame buffer 568 into which the GPU 562 renders new frames, and the GPU 562 may be configured to place a marker into the frame buffer 568 when it is finished rendering each new frame. The graphics driver 566 may track the timing of each new marker in the buffer 568 and may make adjustments in response to detected changes, e.g., as described above with reference to FIG. 4.
  • In certain implementations, the driver may be configured to make adjustments in the scanout timing to compensate for detected fluctuations in the frame rate rendered by the GPU 562. For example, it may be configured to adjust a vertical blanking interval in response to one or more fluctuations to increase or decrease an instantaneous rate of the scanout of frames. This may be accomplished, e.g., by the graphics driver 566 temporarily extending the portion of the vsync signal that is generated between frames or by putting the scanout unit 570 to sleep for a finite period of time. This may also be accomplished by momentarily disabling the display interface 554 to prevent the transfer of one or more frames.
  • The scanout of the frames by the scanout unit 570 may drive new frames to the streaming device 552 over the display interface 554, as shown in FIG. 5. The streaming server 552 may include a frame capture unit 576, such as a video capture card, that is configured to capture the source frame images contained within each output frame transferred over the display interface 554. In certain implementations, the frame capture unit 576 may be specially adapted to coordinate with the uniquely tailored frames that may be rendered by the GPU 562 and sent by the scanout unit 570. For example, in certain implementations, the frame capture unit may be configured to count the lines and/or pixels received in order to capture only those visible pixels which contain the desired source content.
  • The streaming computing device 552 may also include an encoder, e.g., a video codec, configured to compress the source frames captured by the frame capture unit 576. The streaming computing device 552 may also include a streaming unit 580 that is configured to send the compressed frames over the network 556 to one or more client devices 558. In certain implementations, the client 558 may also be a computing device having at least one processor unit 586 coupled to at least one memory unit 588, and the system 500 may be configured to implement video streaming in real-time, so that the client device 558 may decompress the received frames with a decoder 582 and display the frames with a display device 584 in real-time with minimized latency from when they are rendered by the GPU 562 of the graphics source 550.
  • While various components of FIG. 5 are depicted separately for purposes of explanation, it is noted that many of the illustrated components may be physically implemented as common or integral units.
  • For example, in certain implementations, the scanout unit 570 may be physically implemented as part of the GPU 562, or it may be a separate unit. Similarly, in certain implementations the scanout unit may be physically implemented as separate components or as a physically integrated unit. The scanout unit 570 may generate a plurality of signals, including a vertical synchronization signal, a horizontal synchronization signal, a pixel clock, and the like. The scanout unit may be a single integral unit which contains components for generating all of these signals, or the scanout unit 570 may be made up of distinct signal generators for these signals. For example, a pixel clock generator of the scanout unit and a vertical synchronization signal generator of the scanout unit do not need to be part of the same physical chip.
  • By way of further example, the one or more memory units 564 may include a plurality of distinct memory units for different purposes. For example, the memory unit 564 may optionally include a dedicated graphics memory unit that is separate from a main memory unit. The graphics memory may be configured to hold the frame buffer, while the main memory may be configured to hold data and programs implemented by the CPU 560.
  • By way of further example, the video encoder 578 and/or the streaming unit 580 may optionally be implemented as one or more software programs which are configured to be stored on one or more memory units 574 and executed by the one or more processor units 572 of the streaming computing device 552. The encoder 578 and the streaming unit 580 may be separate sets of code or may be part of the same program in accordance with implementations of the present disclosure.
  • It is noted that the example depicted in FIG. 5 is a simplified schematic provided for purposes of explanation, but the system 500 may include many additional aspects to support graphics rendering, compression, streaming, and other features in support of cloud computing. Moreover, configuration of the illustrated example system 500 may be particularly beneficial in implementations involving cloud gaming for console platforms, and it is noted that the system 500 may be configured in accordance with systems described in U.S. application Ser. No. 14/135,374, entitled “VIDEO LATENCY REDUCTION” and fully incorporated by reference herein, to further support such applications.
  • For example, the graphics source system 550 of the present application may have features in common with the terminal system depicted in FIG. 4A of that document, which corresponds to FIG. 6A herein. By way of further example, the streaming server 552 may have features in common with the streaming server depicted in FIG. 4B of that document (corresponding to FIG. 6B herein), and the frame capture unit 576 may have features in common with the video capture card depicted in FIG. 5 of that document (corresponding to FIG. 7 herein).
  • FIGS. 6A and 6B provide an overview of an example hardware/software architecture for generating and capturing video frames according to various implementations of the present disclosure. In particular, the example system of FIGS. 6A and 6B may be a system for streaming video games and other applications using a streaming server and a terminal system. FIG. 6A illustrates an architecture for an example video source according to various aspects of the present disclosure, and FIG. 6B illustrates an architecture for an example video capture system for capturing video from the video source according to various implementations of the present disclosure. In some implementations, the video source 612 may be a terminal configured to run an application for cloud streaming, and may be an existing embedded system, video game console, or other computing device having a specialized architecture. In some implementations, the video capture system 602 (video sink) may be a streaming server configured to capture and stream the video output from the terminal system to a client device. However, it is emphasized that the illustrated architecture of FIGS. 6A and 6B is provided by way of example only, and that various implementations of the present disclosure may involve reducing video transfer time using other architectures and in other contexts beyond cloud gaming and cloud computing applications.
  • Turning to FIG. 6A, the example video source may be a terminal system 612 that is configured to run an application 608, which may involve a video output to be captured by the video capture system 602. By way of example, and not by way of limitation, the application may be a video game having rendered graphics as a video output, which may be transferred to the streaming server 602 for sending over the network. In particular, the terminal system may include a graphics processing unit (GPU) 650, which together with the graphics memory 649 may be configured to render the application output 608 as a sequence of images for video frames. The images may be output as a sequence of video frames that have visible pixels which contain the pixel data for the image of each frame for display on a display device, and the video frame images may be sent to the video capture system 602 through a video interface, such as HDMI, as output frames having both visible and invisible pixels. However, in order to reduce delay stemming from the video capture process, the video source may be configured to add extra pixels so that enlarged output frames are sent through the video interface. Further examples of how extra pixels may be added to the output frames are described below.
  • In order to support the output of the video signal, the video source 612 may include a graphics driver 652 configured to interface with the GPU 650 for rendering the application video signal as a sequence of video frame images. In particular, the GPU 650 may generate video frame images for video signal output in accordance with the application 608, and the graphics driver 652 may coordinate with the GPU 650 to render the video frame images into a source video frame format having a particular supported display image resolution, e.g., 720p. The GPU 650 together with the graphics driver 652 may render video frame images in a format having a plurality of visible image lines, with each visible image line having a plurality of visible image pixels. In certain implementations, the graphics driver 652 may be configured to add extra pixels in addition to the frame image pixels rendered by the GPU, e.g., by rendering the frame in an enlarged frame having a greater resolution than the number of pixels in the video frame image. Further examples of enlarging a frame by rendering it in an enlarged frame format are described below.
  • More specifically, the video source 612 may include a frame buffer 651 and a scan out unit 653, which may be operatively coupled to the GPU 650, and, in certain implementations, may be embodied in the GPU 650. The GPU 650 may be configured to render video images to the frame buffer 651, e.g., based on the output of the application 608, and the scan out unit 653 may be configured to retrieve the frame images from the frame buffer 651 and generate additional external signals for sending the image as an output frame over the interface, e.g., as described above.
  • In particular, the scan out unit 653 may include a pixel clock generator 641 for generating a pixel clock signal for the scan out of the frame and/or a sync signal generator 631 for generating the synchronization signals, e.g., hsync and vsync signals, with each output frame. For example, the sync signal generator 631 may add an hsync signal that has a horizontal blanking region at the end of each line of the frame and corresponds to a plurality of invisible pixels at the end of each line of the frame. The signal generator 631 may also add a vsync signal that has a vertical blanking region at the end of each frame and corresponds to a plurality of invisible lines at the end of the frame. The pixel clock generator 641 may generate a clock signal having a pulse associated with each pixel in the output frame generated for transfer over the video interface, including the total number of active pixels retrieved from the frame buffer 651 and the total number of pixels corresponding to the synchronization regions inserted between the active pixels. It is noted that the pixel clock generator 641 and/or the sync signal generator 631 may be contained as part of the scan out unit 653, and the scan out unit 653 may be contained as part of the GPU 650. However, it is emphasized that this is just an illustrative example, and that one or more of the components may be implemented as separate components.
  • The video source may include a video transmitter 656 coupled to a video communication interface, and the transmitter may transfer the video signal to the video capture system 602 through a serial communication interface, e.g., pixel by pixel in sequence, with the sync signals indicating transitions between lines and frames in the sequence accordingly. The pixel clock generator 641 may generate a clock signal to synchronize the timing of each pixel, e.g., based on the total number of pixels and the frame rate of the video content, as discussed above. In certain implementations, the pixel clock generator 641 may generate a pixel clock with an increased transfer frequency for each pixel, based on extra pixels contained within the active display region within each image, extra pixels contained within the synchronization region, or both. Optionally, the video interface may also support audio transfer, such as with an HDMI interface, and an audio signal output from the application may also be transmitted through the video interface. In alternative implementations, a separate audio interface may be used.
  • The video source may be configured to send the output video signal to a video capture device 620 coupled to a computing system 602. The capture device may receive the video pixel data contained in the transferred video signal so that it may be captured in digital form and compressed by the streaming server 602. The streaming server 602 may include a video capture process 634 and/or an encoder which may be configured to compress each video frame received from the video capture device. A streaming server process 646 may be configured to transmit the compressed video stream to a remotely located device so that the compressed video stream may be decompressed and displayed on a remote display device.
  • In certain implementations, the video capture device may contain video capture logic 628 which is specially configured to capture only the visible pixels of a video frame image contained within an enlarged frame in accordance with various aspects of the present disclosure. For example, in certain implementations, the graphics rendering components of the video source may be configured to insert the visible image pixels of a video frame image in only a portion of the active display region of a particular format, and the video capture device 620 may be configured to count lines and/or pixels within each frame that is received in order to know when capture of the display image is complete. This may be based on a predetermined configuration of how frames are rendered by the video source. Alternatively, the video capture device 620 may determine that capture is complete based on the presence of a synchronization signal, e.g., a VSYNC signal in implementations where frames are enlarged by adding synchronization lines. The streaming server 602 or other computing device may be configured to begin compressing the video frames as soon as capture of the visible display image within each frame is complete.
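  • A minimal sketch of capture-complete detection by counting visible lines follows; the type capture_state and the function on_visible_line are assumptions made for illustration, not the actual capture logic.

```c
#include <stdbool.h>

/* Illustrative capture state: the capture logic counts visible lines so it
 * knows when the display image within an enlarged output frame is complete. */
typedef struct {
    int lines_captured;   /* visible lines received so far in this frame */
    int lines_expected;   /* e.g., VDISPLAY for the negotiated format     */
} capture_state;

/* Called for each received line whose data enable signal was high.
 * Returns true when the full visible image has been captured, so that
 * compression may begin even while blanking lines are still arriving. */
static bool on_visible_line(capture_state *s)
{
    if (++s->lines_captured >= s->lines_expected) {
        s->lines_captured = 0;    /* reset for the next frame */
        return true;
    }
    return false;
}
```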
  • The capture device may receive the video signal through a communication interface that is compatible with the video signal output from the video source 612, and the video interface may be coupled to a video receiver 630. By way of example, and not by way of limitation, the video capture device may include one or more ports as part of an audio and/or video communication interface, e.g., HDMI ports or other ports as described below with reference to FIG. 7.
  • The interface device 620 may include a specialized processing unit containing the logic 628 that is operatively coupled to the video signal interface, with the specialized processing unit having logic 628 that is dedicated to performing functions associated with A/V capture, and optionally other functions associated with cloud streaming, for signals received through a connector from the terminal system 612. The logic 628 may also support communication with the host system 602 through an additional communication interface, which may communicate with a peripheral bus of the host system 602 in order to interface with an A/V process embodied in the host system. By way of example, and not by way of limitation, the interface device 620 may be an add-on card which communicates with the host system 602 memory/CPU through an expansion interface, such as peripheral component interconnect (PCI), PCI-eXtended (PCI-X), PCI-Express (PCIe), or another interface which facilitates communication with the host system 602, e.g., via a peripheral bus. The host system may include a capture device driver 626 to support the exchange of signals via the interface device 620.
  • In certain implementations, the specialized processing unit may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or another specialized processing unit having dedicated units of logic configured in accordance with principles described herein. The logic units 628 of the specialized processing unit may also include dedicated logic to support various functions for cloud streaming in addition to audio/video capture of the output from an application 608 running on the terminal system 612, such as storage virtualization in coordination with a storage process 632.
  • In the example depicted in FIGS. 6A-6B, an A/V capture unit embodied in the logic 628 of the specialized processing unit may communicate with the capture device driver 626 in the host system 602, and with an A/V process 632 embodied in the host system 602, e.g., a software application running on a central processing unit 604. For example, if the terminal system 612 sends video pixels to the video capture device, this video data may pass through the graphics driver 652, the video capture unit contained in the logic 628, and the capture device driver 626, to the A/V process 632 embodied in the host system. The A/V process 632 may then compress the captured video frames, and the compression may begin sooner in accordance with an increase in the pixel clock caused by extra pixels. In certain implementations, the video sink 602 may optionally be a streaming server adapted to transmit over a network a stream of video output from the application 608 running on the terminal system 612. For example, the streaming server 602 may include an Ethernet adapter or other network adapter 636, and a corresponding Ethernet driver or other network driver 638 for the operating system of the host 602, with a compatible network library 639 providing protocol support for the network communication. The host system may also include system memory 640, controlled by a corresponding memory driver 642 (e.g., tmpfs) and supported by a file system library 644. A streaming server process 646 may be run on the host system 602 to perform functions associated with providing a real-time stream to a client device connected over a network (not pictured in FIGS. 6A-6B).
  • The terminal system 612 may include various other components to support the application 608, which may be, e.g., video game software designed for an existing embedded platform. The terminal system 612 may include a file system layer 627 to access storage, as well as various components to support graphics storage access. In some implementations, the terminal system 612 and the capture device 620 may be configured to implement a storage virtualization technique. An example of such a technique is described in commonly-assigned, co-pending U.S. application Ser. No. 13/135,213, to Roelof Roderick Colenbrander, entitled “MASS STORAGE VIRTUALIZATION FOR CLOUD COMPUTING”, filed Dec. 19, 2013, the entire contents of which are herein incorporated by reference.
  • Turning now to FIG. 7, a schematic diagram depicts an example capture device 720, which may be implemented on the interface card 620, some of its components, and the internals of an example specialized processing unit 760, in accordance with various implementations of the present disclosure. By way of example, and not by way of limitation, the capture device 720 may be configured as an add-on card having components attached to a printed circuit board (PCB), and the capture card 720 may interface with a peripheral bus of a host system through a host hardware interface 762, such as a peripheral expansion port or other expansion communication interface which allows communication with the peripheral bus of a host system when connected. It is noted that the example capture device 720 of FIG. 7 includes various optional components that are not necessary for video capture, but which may provide additional functionality for cloud computing and other implementations.
  • The example specialized processing unit 760 may include various blocks of logic dedicated to specialized functionality in accordance with various aspects of the present disclosure. The specialized processing unit may be implemented, e.g., as an FPGA, ASIC, or similar specialized processing unit. The specialized processing unit 760 may include a host interface block 764 which implements part of a protocol stack for the communication interface between the interface card 720 and a peripheral bus of a host system (not pictured in FIG. 7).
  • Communication busses like PCI-Express can be thought of as a protocol stack having several layers, and different communication protocols have different layers. Typically there is an ‘application layer’ at the top, some transport-related layers in the middle, and a physical layer at the bottom. The host interface block 764 need not implement all layers of such a protocol stack. Instead, the host interface block may take care of the physical layer, which is responsible for putting digital information on a communication link, e.g., through electrical or optical signals. The host interface block may also be responsible for portions of, or possibly all of, the ‘transport layers’ of the protocol stack, but need not be responsible for the application layer.
  • By way of example, and not by way of limitation, the host interface block 764 may be a hard PCIe block for communication through a PCI-Express connection, which embeds the protocol stack for a PCIe interface or other interface for accessing a local bus of the host system. The host interface block 764 may be integrated into a memory access interface unit 766 which, together with other logic units of the specialized processing unit 760, may directly access system memory of a host system through the host hardware interface 762, e.g., using an interrupt request to the host system.
  • In some implementations, the memory access interface 766 may include components that provide memory access and interrupt functionality. In particular, the host interface block 764 may be configured to provide a connection between an on-chip-interconnect 772 and the host hardware interface 762 in a way that makes any on-chip device accessible from the host system using memory mapped Input/Output (I/O). This functionality would allow the host system to program any device connected to the on-chip-interconnect 772, such as the mass storage controller 770, memory controller 776, or GPIO 782.
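  A minimal sketch of the memory mapped I/O access described above is given below, assuming the host has mapped the device's register space (e.g., a PCIe base address region) into its address space. The base address and register offsets shown are hypothetical illustrations rather than values defined by this disclosure.

    /* Hypothetical memory-mapped I/O access to a device on the on-chip interconnect. */
    #include <stdint.h>

    #define CAPTURE_UNIT_BASE   0x00001000u  /* hypothetical offset of a capture unit */
    #define REG_CAPTURE_CTRL    0x00u        /* hypothetical control register */
    #define REG_CAPTURE_STATUS  0x04u        /* hypothetical status register */

    static inline void mmio_write32(volatile uint32_t *bar, uint32_t off, uint32_t val)
    {
        bar[off / 4] = val;                  /* 32-bit write through the mapped region */
    }

    static inline uint32_t mmio_read32(volatile uint32_t *bar, uint32_t off)
    {
        return bar[off / 4];                 /* 32-bit read through the mapped region */
    }

    /* Enable a capture unit by setting a hypothetical "run" bit, then read back status. */
    void capture_enable(volatile uint32_t *bar)
    {
        mmio_write32(bar, CAPTURE_UNIT_BASE + REG_CAPTURE_CTRL, 1u);
        (void)mmio_read32(bar, CAPTURE_UNIT_BASE + REG_CAPTURE_STATUS);
    }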
  • The memory access interface 766 may also include an interrupt connection 765 that allows any connected device, e.g., the A/V capture units 778, to generate an interrupt upon an event (e.g., a captured video frame image is complete). It is desirable for the memory access interface to provide this functionality if there can be only one device interfacing with the host hardware interface 762.
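  The following is a minimal sketch of the interrupt path just described: when a capture unit signals that a captured frame is complete, the host driver reads a pending-status register, acknowledges the source, and wakes the process waiting on that capture unit. The register offsets and the notify_av_process() helper are hypothetical.

    /* Hypothetical interrupt handler for frame-complete events from capture units. */
    #include <stdint.h>

    #define REG_IRQ_STATUS 0x10u   /* hypothetical: one bit per completed capture unit */
    #define REG_IRQ_ACK    0x14u   /* hypothetical: write-1-to-clear acknowledge */

    extern volatile uint32_t *capdev_bar;          /* mapped device registers */
    extern void notify_av_process(unsigned unit);  /* wakes the process waiting for a frame */

    void capdev_irq_handler(void)
    {
        uint32_t pending = capdev_bar[REG_IRQ_STATUS / 4];
        for (unsigned unit = 0; pending != 0; ++unit, pending >>= 1) {
            if (pending & 1u) {
                capdev_bar[REG_IRQ_ACK / 4] = 1u << unit;  /* clear this interrupt source */
                notify_av_process(unit);                   /* frame for this unit is complete */
            }
        }
    }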
  • The memory access interface 766 may also (optionally) include a direct memory access (DMA) engine 767. As used herein, and as is generally understood by those skilled in the art, the term direct memory access (DMA) refers to a feature that allows certain hardware subsystems within a computer to access system memory independently of the computer's central processing unit (CPU). The DMA engine 767 may implement data move operations between the host interface block 764 and the host hardware interface 762. In some implementations, the memory access interface unit 766 may implement portions of a protocol stack (e.g., PCI Express) not provided by the host interface block 764, such as connecting the host interface block 764 to the on-chip-interconnect 772.
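  A minimal sketch of how such a DMA engine might be programmed is shown below: a descriptor naming a source, a destination in host system memory, and a length is handed to the engine, which then moves the data without further CPU involvement. The descriptor layout, register offsets, and flag values are hypothetical illustrations.

    /* Hypothetical DMA descriptor and kick-off sequence. */
    #include <stdint.h>

    struct dma_descriptor {
        uint64_t src_addr;   /* on-card address of captured frame data */
        uint64_t dst_addr;   /* physical address in host system memory */
        uint32_t length;     /* bytes to move */
        uint32_t flags;      /* e.g., raise an interrupt when the transfer completes */
    };

    #define DMA_FLAG_IRQ_ON_DONE 0x1u
    #define REG_DMA_DESC_LO 0x20u    /* hypothetical descriptor address registers */
    #define REG_DMA_DESC_HI 0x24u
    #define REG_DMA_GO      0x28u

    void dma_copy_frame(volatile uint32_t *bar, struct dma_descriptor *d,
                        uint64_t desc_bus_addr)
    {
        d->flags |= DMA_FLAG_IRQ_ON_DONE;                    /* signal completion by interrupt */
        bar[REG_DMA_DESC_LO / 4] = (uint32_t)desc_bus_addr;  /* descriptor address, low half */
        bar[REG_DMA_DESC_HI / 4] = (uint32_t)(desc_bus_addr >> 32);
        bar[REG_DMA_GO / 4] = 1u;                            /* start the transfer */
    }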
  • For purposes of video capture functionality, the capture device 720 may include one or more video and optionally audio/video communication interfaces 780, which may be implemented in the form of one or more HDMI ports 771 and/or connectors, or other video signal communication interfaces, and which may be attached to a circuit board of the capture device 720. By way of example, and not by way of limitation, the interface card 720 may contain two HDMI ports to facilitate connection to two distinct video sources/terminal systems, although it is noted that the capture device may alternatively contain a different number of video connectors so that a single capture device 720 may service a different number of video sources or terminal systems. For each of the video signal connectors 780, there may be a corresponding video capture unit 778 embodied in the specialized processing unit 760 that is compatible with the particular video communication interface (e.g., HDMI, DVI, VGA, etc.).
  • The one or more video capture units 778 of the specialized processing unit may be connected to other logic units of the specialized processing unit 760 through the on-chip interconnect 772, which may provide each of the video capture units 778 access to host system interface components (e.g., PCI-Express). The on-chip interconnect may be configured according to a standard on-chip bus architecture designed to connect functional blocks on a specialized processing unit (e.g., an FPGA or ASIC). For example, if the specialized processing unit 760 is an FPGA, the components of the specialized processing unit may be interconnected using a master-slave architecture, e.g., an Advanced Microcontroller Bus Architecture (AMBA), such as AXI4 or AXI4-Lite, or another suitable on-chip bus architecture. AXI4 may be used for large data transport and AXI4-Lite may be used for low performance connections or for configuration purposes. The on-chip interconnections of the specialized processing unit logic blocks may be configured according to a master-slave type configuration as shown in FIG. 7. In the illustrated schematic, “M” and the corresponding bold lines represent a master connection, “S” and the corresponding dotted lines represent a slave connection, and “Ctrl” represents control.
  • The interface device 720 may include one or more memory units 774 which may be controlled by a memory controller 776 provided in the logic of the specialized processing unit 760. The memory unit may support data transport between a terminal system connected through the mass storage interface 768 and a host system connected through the host hardware interface 762, in accordance with data requests issued by the terminal system, e.g., for mass storage virtualization. For example, the memory unit 774 may be a temporary RAM unit, such as DDR3 RAM, or another volatile memory unit configured to temporarily store data requested by read requests issued by the terminal system, in accordance with principles described herein. The memory controller 776 may be connected to the on-chip bus architecture 772 to perform memory read/write operations according to signals received from other logical units of the specialized processing unit 760.
  • During operation, a graphics driver and/or scanout unit of a video source (not pictured in FIG. 7) connected through the video interface 780 may generate enlarged output video frames having extra pixels to be captured by the capture card 720. Upon receiving the video output frames, the video capture unit(s) 778 may be configured to determine when each frame's visible display image pixels have been captured and omit the extra pixels in each frame from capture, discarding these extra pixels because they contain unneeded data. The captured video data for each frame may be transmitted to a video capture process in a host system using an interrupt through the host hardware interface 762 for further processing, compression, and/or transmission over a network. For a given frame rate, compression may begin sooner because the visible image pixels make up only a portion of each enlarged frame, so all of the visible image pixels in the frame are transferred in a correspondingly shorter portion of the frame period.
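  The timing benefit can be illustrated with a short worked example, shown below, which assumes a 60 Hz output with 1080 visible lines and compares a typical total of 1125 lines against an enlarged total of 2250 lines; at a fixed frame rate the pixel clock scales with the total line count, so the visible portion of each frame is transferred in roughly half the time in the enlarged case. The specific numbers are illustrative assumptions only.

    /* Worked example: time to scan out the visible lines at a fixed frame rate. */
    #include <stdio.h>

    int main(void)
    {
        const double frame_period_s = 1.0 / 60.0;  /* assume a 60 Hz output frame rate */
        const int visible_lines  = 1080;           /* assume a 1080-line visible image */
        const int normal_total   = 1125;           /* typical total lines incl. blanking */
        const int enlarged_total = 2250;           /* assume extra lines appended */

        /* At a fixed frame rate, the pixel clock scales with total lines, so the
         * time spent scanning out the visible lines shrinks proportionally. */
        double t_normal   = frame_period_s * visible_lines / normal_total;
        double t_enlarged = frame_period_s * visible_lines / enlarged_total;

        printf("visible image scanned out in %.2f ms normally, %.2f ms when enlarged\n",
               t_normal * 1e3, t_enlarged * 1e3);  /* ~16.0 ms vs ~8.0 ms */
        return 0;
    }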
  • It is noted that each of the one or more A/V capture logic units 778 may be operatively coupled to a corresponding A/V receiver 730, each of which may in turn be connected to a suitable A/V hardware interface 780, such as an HDMI port 771 or other A/V connection port as shown in FIG. 7. A/V output from the terminal system may be connected to the A/V receiver 730 through the A/V interface 780 using a compatible A/V connector. The A/V capture unit 778 may communicate with the interface device driver and A/V process on the host system through the host hardware interface 762, which may be connected to a host system bus (e.g., a peripheral bus), and the host system may then deliver the A/V stream to a client device over a network.
  • The interface device may optionally include various other components which provide additional functionality for streaming applications run on a terminal system, such as cloud gaming streaming. For example, the specialized processing unit 760 may also include one or more mass storage device controllers 770 for emulating a storage device for one or more terminal systems. The interface device 720 may also include one or more general purpose input/output (GPIO) blocks 782 to support additional functionality. By way of example, and not by way of limitation, each of the GPIO blocks may be connected to a corresponding one of the terminal systems to provide additional functionality, such as power control of the terminal systems and other functionality.
  • As noted above, the specialized processing unit 760 may be implemented, e.g., as an FPGA, ASIC, or other integrated circuit having blocks dedicated to certain functionality, such as A/V capture, a mass storage device controller, memory controller, DMA engine, and the like, in accordance with various aspects of the present disclosure. In certain implementations of the present disclosure, one or more of these units may be provided as reusable units of logic or other chip design commonly referred to in the art as IP blocks or IP cores.
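  To tie the foregoing together, the following is a minimal, hypothetical sketch of the frame rate compensation idea described in this disclosure: a marker written after each rendered source frame is used to detect a downward fluctuation in the variable frame rate, and the next scanout is delayed by extending the vertical blanking interval (e.g., by briefly holding the scanout unit idle). The helper functions and timing values are illustrative assumptions, not a definitive implementation.

    /* Hypothetical control loop: detect downward frame rate fluctuations and
     * extend vertical blanking so scanout (and compression) tracks newly
     * rendered frames instead of repeats. */
    #include <stdint.h>

    extern uint64_t now_us(void);                         /* monotonic clock */
    extern void wait_for_frame_marker(uint64_t *t_us);    /* marker placed after each rendered frame */
    extern void scanout_sleep_us(uint64_t us);            /* extends vertical blanking by idling scanout */
    extern void scanout_frame(void);                      /* scans one output frame out of the buffer */

    void scanout_control_loop(uint64_t nominal_frame_us)
    {
        uint64_t prev_marker_us = now_us();

        for (;;) {
            uint64_t t;
            wait_for_frame_marker(&t);
            uint64_t interval = t - prev_marker_us;
            prev_marker_us = t;

            if (interval > nominal_frame_us) {
                /* Downward fluctuation detected: rendering took longer than the
                 * nominal frame period, so extend the vertical blanking interval
                 * by the difference before scanning out the next output frame. */
                scanout_sleep_us(interval - nominal_frame_us);
            }
            scanout_frame();   /* scan out the newly rendered source frame */
        }
    }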
  • CONCLUSION
  • While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “a” or “an” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means- or step-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”

Claims (29)

What is claimed is:
1. A method comprising:
rendering a plurality of source frames into a buffer, each said source frame being rendered sequentially into the buffer at a variable frame rate;
scanning out a plurality of output frames from the buffer with a scanout unit, each said output frame being scanned out sequentially with a vertical blanking interval at the end of each said output frame;
compressing a source frame within each said output frame that is scanned out of the buffer;
detecting one or more fluctuations in the variable frame rate; and
adjusting a timing of the compressing of one or more of the source frames in response to the one or more fluctuations.
2. The method of claim 1,
wherein said adjusting the timing of the compressing includes adjusting a timing of one or more of the output frames of said scanning out by modifying the vertical blanking interval in response to the one or more fluctuations.
3. The method of claim 1, further comprising:
capturing the source frame within each said output frame that is scanned out of the buffer before said compressing the source frame.
4. The method of claim 1, further comprising:
capturing the source frame within each said output frame that is scanned out of the buffer before said compressing the source frame; and
sending each said compressed frame to one or more remote devices over a network.
5. The method of claim 1,
wherein the buffer includes a front frame buffer and at least one back frame buffer,
wherein each said output frame is scanned out of the front frame buffer,
wherein the back frame buffer is swapped with the front frame buffer after scan out of each said output frame from the front frame buffer.
6. The method of claim 1,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations.
7. The method of claim 1,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames.
8. The method of claim 1,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames,
wherein said extending the length of the vertical blanking interval includes putting the scanout unit to sleep for a period of time.
9. The method of claim 1, further comprising:
placing a marker into a memory unit after each said frame is rendered into the frame buffer,
wherein said detecting the one or more fluctuations includes tracking a time at which the marker for each said frame is placed into the buffer.
10. The method of claim 1,
wherein said adjusting the timing of the compressing includes extending the length of the vertical blanking interval by putting the scanout unit to sleep for a period of time.
11. The method of claim 1,
wherein said adjusting the timing of the compressing includes delaying scanout of at least one of the output frames by changing a register value to a value that delays scanout.
12. The method of claim 1,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes disabling a display interface in response to the one or more downward fluctuations to prevent transfer of one or more of the output frames.
13. A system comprising:
at least one processor unit;
at least one memory unit coupled to the processor unit; and
a scanout unit coupled to the processor unit, wherein the processor unit is configured to perform a method, the method comprising:
rendering a plurality of source frames into a buffer, each said source frame being rendered sequentially into the buffer at a variable frame rate;
scanning out a plurality of output frames from the buffer with a scanout unit, each said output frame being scanned out sequentially with a vertical blanking interval at the end of each said output frame;
compressing a source frame within each said output frame that is scanned out of the buffer;
detecting one or more fluctuations in the variable frame rate; and
adjusting a timing of the compressing of one or more of the source frames in response to the one or more fluctuations.
14. The system of claim 13,
wherein said adjusting the timing of the compressing includes adjusting a timing of one or more of the output frames of said scanning out by modifying the vertical blanking interval in response to the one or more fluctuations.
15. The system of claim 13, wherein the method further comprises:
capturing the source frame within each said output frame that is scanned out of the buffer before said compressing the source frame.
16. The system of claim 13, wherein the method further comprises:
capturing the source frame within each said output frame that is scanned out of the buffer before said compressing the source frame; and
sending each said compressed frame to one or more remote devices over a network.
17. The system of claim 13,
wherein the buffer includes a front frame buffer and at least one back frame buffer,
wherein each said output frame is scanned out of the front frame buffer,
wherein the back frame buffer is swapped with the front frame buffer after scan out of each said output frame from the front frame buffer.
18. The system of claim 13,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations.
19. The system of claim 13,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames.
20. The system of claim 13,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames,
wherein said extending the length of the vertical blanking interval includes putting the scanout unit to sleep for a period of time.
21. The system of claim 13, wherein the method further comprises:
placing a marker into a memory unit after each said frame is rendered into the frame buffer,
wherein said detecting the one or more fluctuations includes tracking a time at which the marker for each said frame is placed into the buffer.
22. The system of claim 13,
wherein said adjusting the timing of the compressing includes extending the length of the vertical blanking interval by putting the scanout unit to sleep for a period of time.
23. The system of claim 13,
wherein said adjusting the timing of the compressing includes delaying scanout of at least one of the output frames by changing a register value to a value that delays scanout.
24. The system of claim 13,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes disabling a display interface in response to the one or more downward fluctuations to prevent transfer of one or more of the output frames.
25. A non-transitory computer readable medium having processor-executable instructions embodied therein, wherein execution of the instructions by a processor causes the processor to implement a method, the method comprising:
rendering a plurality of source frames into a buffer, each said source frame being rendered sequentially into the buffer at a variable frame rate;
scanning out a plurality of output frames from the buffer with a scanout unit, each said output frame being scanned out sequentially with a vertical blanking interval at the end of each said output frame;
compressing a source frame within each said output frame that is scanned out of the buffer;
detecting one or more fluctuations in the variable frame rate; and
adjusting a timing of the compressing of one or more of the source frames in response to the one or more fluctuations.
26. The non-transitory computer readable medium of claim 25,
wherein said adjusting the timing of the compressing includes adjusting a timing of one or more of the output frames of said scanning out by modifying the vertical blanking interval in response to the one or more fluctuations.
27. The non-transitory computer readable medium of claim 25,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations.
28. The non-transitory computer readable medium of claim 25,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames.
29. The non-transitory computer readable medium of claim 25,
wherein the one or more fluctuations include one or more downward fluctuations in which the variable frame rate decreases,
wherein said adjusting the timing of the compressing includes delaying a scan out of one or more of the output frames in response to the downward fluctuations,
wherein said delaying the scan out includes adjusting the vertical blanking interval by extending a length of the vertical blanking interval between two subsequent frames,
wherein said extending the length of the vertical blanking interval includes putting the scanout unit to sleep for a period of time.
US14/280,502 2014-03-12 2014-05-16 Video frame rate compensation through adjustment of vertical blanking Active 2034-12-13 US9332216B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US14/280,502 US9332216B2 (en) 2014-03-12 2014-05-16 Video frame rate compensation through adjustment of vertical blanking
CN201810201744.8A CN108449566B (en) 2014-03-12 2015-03-03 Video frame rate compensation by adjusting vertical blanking
CN201510094508.7A CN104917990B (en) 2014-03-12 2015-03-03 Video frame rate compensation is carried out by adjusting vertical blanking
US15/145,718 US9984647B2 (en) 2014-03-12 2016-05-03 Video frame rate compensation through adjustment of vertical blanking
US15/991,860 US10339891B2 (en) 2014-03-12 2018-05-29 Video frame rate compensation through adjustment of vertical blanking
US16/424,189 US10916215B2 (en) 2014-03-12 2019-05-28 Video frame rate compensation through adjustment of vertical blanking
US17/161,338 US11404022B2 (en) 2014-03-12 2021-01-28 Video frame rate compensation through adjustment of vertical blanking
US17/850,942 US11741916B2 (en) 2014-03-12 2022-06-27 Video frame rate compensation through adjustment of timing of scanout

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461951729P 2014-03-12 2014-03-12
US14/280,502 US9332216B2 (en) 2014-03-12 2014-05-16 Video frame rate compensation through adjustment of vertical blanking

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/145,718 Continuation US9984647B2 (en) 2014-03-12 2016-05-03 Video frame rate compensation through adjustment of vertical blanking

Publications (2)

Publication Number Publication Date
US20150264298A1 true US20150264298A1 (en) 2015-09-17
US9332216B2 US9332216B2 (en) 2016-05-03

Family

ID=54070408

Family Applications (6)

Application Number Title Priority Date Filing Date
US14/280,502 Active 2034-12-13 US9332216B2 (en) 2014-03-12 2014-05-16 Video frame rate compensation through adjustment of vertical blanking
US15/145,718 Active US9984647B2 (en) 2014-03-12 2016-05-03 Video frame rate compensation through adjustment of vertical blanking
US15/991,860 Active US10339891B2 (en) 2014-03-12 2018-05-29 Video frame rate compensation through adjustment of vertical blanking
US16/424,189 Active US10916215B2 (en) 2014-03-12 2019-05-28 Video frame rate compensation through adjustment of vertical blanking
US17/161,338 Active US11404022B2 (en) 2014-03-12 2021-01-28 Video frame rate compensation through adjustment of vertical blanking
US17/850,942 Active US11741916B2 (en) 2014-03-12 2022-06-27 Video frame rate compensation through adjustment of timing of scanout

Family Applications After (5)

Application Number Title Priority Date Filing Date
US15/145,718 Active US9984647B2 (en) 2014-03-12 2016-05-03 Video frame rate compensation through adjustment of vertical blanking
US15/991,860 Active US10339891B2 (en) 2014-03-12 2018-05-29 Video frame rate compensation through adjustment of vertical blanking
US16/424,189 Active US10916215B2 (en) 2014-03-12 2019-05-28 Video frame rate compensation through adjustment of vertical blanking
US17/161,338 Active US11404022B2 (en) 2014-03-12 2021-01-28 Video frame rate compensation through adjustment of vertical blanking
US17/850,942 Active US11741916B2 (en) 2014-03-12 2022-06-27 Video frame rate compensation through adjustment of timing of scanout

Country Status (2)

Country Link
US (6) US9332216B2 (en)
CN (2) CN108449566B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150222764A1 (en) * 2014-02-04 2015-08-06 Norio Sakai Image processing apparatus, image processing method, and recording medium
US20150381990A1 (en) * 2014-06-26 2015-12-31 Seh W. Kwa Display Interface Bandwidth Modulation
US20160189685A1 (en) * 2014-12-31 2016-06-30 Texas Instruments Incorporated Methods and Apparatus for Displaying Video Including Variable Frame Rates
US20160284264A1 (en) * 2015-03-27 2016-09-29 Samsung Electronics Co., Ltd. Electronic device and method for controlling display in electronic device
US20170087464A1 (en) * 2015-09-30 2017-03-30 Sony Interactive Entertainment America Llc Multi-user demo streaming service for cloud gaming
US20170142296A1 (en) * 2015-11-12 2017-05-18 Pfu Limited Video-processing apparatus and video-processing method
US20170161865A1 (en) * 2013-10-28 2017-06-08 Vmware, Inc. Method and System to Virtualize Graphic Processing Services
US9728166B2 (en) * 2015-08-20 2017-08-08 Qualcomm Incorporated Refresh rate matching with predictive time-shift compensation
US20170324944A1 (en) * 2014-12-17 2017-11-09 Hitachi Maxell, Ltd. Video display apparatus, video display system, and video display method
US9984647B2 (en) 2014-03-12 2018-05-29 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of vertical blanking
US10002088B2 (en) 2012-10-04 2018-06-19 Sony Interactive Entertainment LLC Method and apparatus for improving decreasing presentation latency in response to receipt of latency reduction mode signal
US20180174551A1 (en) * 2016-12-21 2018-06-21 Intel Corporation Sending frames using adjustable vertical blanking intervals
US20180261144A1 (en) * 2017-03-13 2018-09-13 Novatek Microelectronics Corp. Display device, control circuit and associated control method
US20180268512A1 (en) * 2017-03-15 2018-09-20 Microsoft Technology Licensing, Llc Techniques for reducing perceptible delay in rendering graphics
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering
EP3547300A1 (en) * 2018-03-31 2019-10-02 INTEL Corporation Asynchronous single frame update for self-refreshing panels
US20200104973A1 (en) * 2018-09-28 2020-04-02 Qualcomm Incorporated Methods and apparatus for frame composition alignment
US10744407B2 (en) 2015-09-08 2020-08-18 Sony Interactive Entertainment LLC Dynamic network storage for cloud console server
US10847117B1 (en) * 2019-05-13 2020-11-24 Adobe Inc. Controlling an augmented reality display with transparency control using multiple sets of video buffers
CN112181633A (en) * 2019-07-03 2021-01-05 索尼互动娱乐有限责任公司 Asset aware computing architecture for graphics processing
WO2021067321A1 (en) * 2019-10-01 2021-04-08 Sony Interactive Entertainment Inc. High speed scan-out of server display buffer for cloud gaming applications
US10974142B1 (en) 2019-10-01 2021-04-13 Sony Interactive Entertainment Inc. Synchronization and offset of VSYNC between cloud gaming server and client
CN112995431A (en) * 2019-12-17 2021-06-18 瑞昱半导体股份有限公司 Display port to high-definition multimedia interface converter and signal conversion method
WO2021164004A1 (en) * 2020-02-21 2021-08-26 Qualcomm Incorporated Reduced display processing unit transfer time to compensate for delayed graphics processing unit render time
EP3903896A1 (en) * 2020-04-30 2021-11-03 INTEL Corporation Cloud gaming adaptive synchronization mechanism
US20220130015A1 (en) * 2020-10-28 2022-04-28 Qualcomm Incorporated System and method to process images of a video stream
US11344799B2 (en) 2019-10-01 2022-05-31 Sony Interactive Entertainment Inc. Scene change hint and client bandwidth used at encoder for handling video frames after a scene change in cloud gaming applications
US11420118B2 (en) 2019-10-01 2022-08-23 Sony Interactive Entertainment Inc. Overlapping encode and transmit at the server
US11438562B2 (en) * 2019-04-26 2022-09-06 Canon Kabushiki Kaisha Display apparatus and control method thereof
US11446571B2 (en) 2020-04-30 2022-09-20 Intel Corporation Cloud gaming adaptive synchronization mechanism
US11539960B2 (en) 2019-10-01 2022-12-27 Sony Interactive Entertainment Inc. Game application providing scene change hint for encoding at a cloud gaming server
US12067959B1 (en) * 2023-02-22 2024-08-20 Meta Platforms Technologies, Llc Partial rendering and tearing avoidance

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9795879B2 (en) * 2014-12-31 2017-10-24 Sony Interactive Entertainment America Llc Game state save, transfer and resume for cloud gaming
CN107995974A (en) * 2016-09-28 2018-05-04 深圳市柔宇科技有限公司 System performance method for improving, system performance lifting device and display device
US12079642B2 (en) * 2016-10-31 2024-09-03 Ati Technologies Ulc Method and apparatus for dynamically reducing application render-to-on screen time in a desktop environment
CN106792148A (en) * 2016-12-09 2017-05-31 广东威创视讯科技股份有限公司 A kind of method and system for improving image fluency
CN106776259B (en) * 2017-01-10 2020-03-10 Oppo广东移动通信有限公司 Mobile terminal frame rate detection method and device and mobile terminal
US10462336B2 (en) 2017-03-15 2019-10-29 Microsoft Licensing Technology, LLC Low latency tearing without user perception
CA3010471C (en) 2017-07-06 2024-04-23 Aidan Fabius Display buffering methods and systems
KR102566790B1 (en) * 2018-02-12 2023-08-16 삼성디스플레이 주식회사 Method of operating a display device supporting a variable frame mode, and the display device
CA3044477A1 (en) 2018-06-01 2019-12-01 Gregory Szober Display buffering methods and systems
US10643525B2 (en) 2018-06-29 2020-05-05 Intel Corporation Dynamic sleep for a display panel
CN108924640A (en) * 2018-07-11 2018-11-30 湖南双鸿科技有限公司 Video transmission method, device and computer readable storage medium
CN109302637B (en) * 2018-11-05 2023-02-17 腾讯科技(成都)有限公司 Image processing method, image processing device and electronic equipment
CN109830219B (en) * 2018-12-20 2021-10-29 武汉精立电子技术有限公司 Method for reducing eDP signal link power consumption
CN109326266B (en) * 2018-12-24 2020-06-30 合肥惠科金扬科技有限公司 Method, system, display and storage medium for improving screen flicker
US11164496B2 (en) 2019-01-04 2021-11-02 Channel One Holdings Inc. Interrupt-free multiple buffering methods and systems
WO2020140145A1 (en) * 2019-01-04 2020-07-09 Tomislav Malnar Interrupt-free multiple buffering methods and systems
KR20200091062A (en) * 2019-01-21 2020-07-30 삼성디스플레이 주식회사 Display device and driving method thereof
JP7089114B2 (en) * 2019-03-29 2022-06-21 株式会社ソニー・インタラクティブエンタテインメント Boundary display control device, boundary display control method and program
US11295660B2 (en) 2019-06-10 2022-04-05 Ati Technologies Ulc Frame replay for variable rate refresh display
CN111083547B (en) * 2019-12-16 2022-07-29 珠海亿智电子科技有限公司 Method, apparatus, and medium for balancing video frame rate error based on trigger mode
KR20210092571A (en) 2020-01-16 2021-07-26 삼성전자주식회사 Electronic device and screen refresh method thereof
CN111510772B (en) * 2020-03-23 2022-03-29 珠海亿智电子科技有限公司 Method, device, equipment and storage medium for balancing video frame rate error
US11948520B2 (en) * 2020-03-31 2024-04-02 Google Llc Variable refresh rate control using PWM-aligned frame periods
KR20230091100A (en) 2020-10-22 2023-06-22 퀄컴 인코포레이티드 Dynamic frame rate optimization
CN112422873B (en) * 2020-11-30 2022-09-16 Oppo(重庆)智能科技有限公司 Frame insertion method and device, electronic equipment and storage medium
CN112462542A (en) * 2020-12-04 2021-03-09 深圳市华星光电半导体显示技术有限公司 Liquid crystal display panel, driving method and display device
CN115660940B (en) * 2022-11-11 2023-04-28 北京麟卓信息科技有限公司 Graphic application frame rate synchronization method based on vertical blanking simulation
TWI835567B (en) * 2023-02-20 2024-03-11 瑞昱半導體股份有限公司 Method for reading and writing frame images with variable frame rates and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100278230A1 (en) * 2009-05-01 2010-11-04 Macinnis Alexander G Method And System For Scalable Video Compression And Transmission
US20120206461A1 (en) * 2011-02-10 2012-08-16 David Wyatt Method and apparatus for controlling a self-refreshing display device coupled to a graphics controller

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3852024B2 (en) * 2001-02-28 2006-11-29 株式会社日立製作所 Image display system
CN101534405B (en) * 2004-03-31 2011-06-01 松下电器产业株式会社 Video recording device for recording variable frame rate
JP4040052B2 (en) * 2005-05-24 2008-01-30 株式会社日立国際電気 Image data compression device
US7728909B2 (en) * 2005-06-13 2010-06-01 Seiko Epson Corporation Method and system for estimating motion and compensating for perceived motion blur in digital video
KR100634238B1 (en) * 2005-08-12 2006-10-16 삼성전자주식회사 Tab tape for tape carrier package
US8204104B2 (en) * 2006-03-09 2012-06-19 Sony Corporation Frame rate conversion system, method of converting frame rate, transmitter, and receiver
JP4863767B2 (en) * 2006-05-22 2012-01-25 ソニー株式会社 Video signal processing apparatus and image display apparatus
US8063910B2 (en) * 2008-07-08 2011-11-22 Seiko Epson Corporation Double-buffering of video data
JP5161706B2 (en) * 2008-08-27 2013-03-13 キヤノン株式会社 Imaging apparatus and control method thereof
WO2011060442A2 (en) * 2009-11-16 2011-05-19 Citrix Systems, Inc. Methods and systems for selective implementation of progressive display techniques
US8711207B2 (en) * 2009-12-28 2014-04-29 A&B Software Llc Method and system for presenting live video from video capture devices on a computer monitor
US20120269259A1 (en) * 2010-10-15 2012-10-25 Mark Sauer System and Method for Encoding VBR MPEG Transport Streams in a Bounded Constant Bit Rate IP Network
JP5879628B2 (en) * 2011-03-17 2016-03-08 日本碍子株式会社 Ceramic kiln shuttle kiln for firing
JP5951195B2 (en) * 2011-06-28 2016-07-13 株式会社小糸製作所 Vehicle lamp control device
US9165537B2 (en) * 2011-07-18 2015-10-20 Nvidia Corporation Method and apparatus for performing burst refresh of a self-refreshing display device
US10728545B2 (en) * 2011-11-23 2020-07-28 Texas Instruments Incorporated Method and system of bit rate control
US8990446B2 (en) 2012-10-04 2015-03-24 Sony Computer Entertainment America, LLC Method and apparatus for decreasing presentation latency
US9196014B2 (en) * 2012-10-22 2015-11-24 Industrial Technology Research Institute Buffer clearing apparatus and method for computer graphics
US20150098020A1 (en) * 2013-10-07 2015-04-09 Nvidia Corporation Method and system for buffer level based frame rate recovery
US10353633B2 (en) 2013-12-19 2019-07-16 Sony Interactive Entertainment LLC Mass storage virtualization for cloud computing
US9497358B2 (en) 2013-12-19 2016-11-15 Sony Interactive Entertainment America Llc Video latency reduction
US20150189126A1 (en) * 2014-01-02 2015-07-02 Nvidia Corporation Controlling content frame rate based on refresh rate of a display
US9332216B2 (en) 2014-03-12 2016-05-03 Sony Computer Entertainment America, LLC Video frame rate compensation through adjustment of vertical blanking

Cited By (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10002088B2 (en) 2012-10-04 2018-06-19 Sony Interactive Entertainment LLC Method and apparatus for improving decreasing presentation latency in response to receipt of latency reduction mode signal
USRE49144E1 (en) 2012-10-04 2022-07-19 Sony Interactive Entertainment LLC Method and apparatus for improving presentation latency in response to receipt of latency reduction mode signal
US10127628B2 (en) * 2013-10-28 2018-11-13 Vmware, Inc. Method and system to virtualize graphic processing services
US20170161865A1 (en) * 2013-10-28 2017-06-08 Vmware, Inc. Method and System to Virtualize Graphic Processing Services
US9344589B2 (en) * 2014-02-04 2016-05-17 Ricoh Company, Ltd. Image processing apparatus, image processing method, and recording medium
US20150222764A1 (en) * 2014-02-04 2015-08-06 Norio Sakai Image processing apparatus, image processing method, and recording medium
US10916215B2 (en) 2014-03-12 2021-02-09 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of vertical blanking
US11741916B2 (en) 2014-03-12 2023-08-29 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of timing of scanout
US11404022B2 (en) 2014-03-12 2022-08-02 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of vertical blanking
US10339891B2 (en) 2014-03-12 2019-07-02 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of vertical blanking
US9984647B2 (en) 2014-03-12 2018-05-29 Sony Interactive Entertainment LLC Video frame rate compensation through adjustment of vertical blanking
US20150381990A1 (en) * 2014-06-26 2015-12-31 Seh W. Kwa Display Interface Bandwidth Modulation
US10049002B2 (en) * 2014-06-26 2018-08-14 Intel Corporation Display interface bandwidth modulation
US20170324944A1 (en) * 2014-12-17 2017-11-09 Hitachi Maxell, Ltd. Video display apparatus, video display system, and video display method
US10511823B2 (en) * 2014-12-17 2019-12-17 Maxell, Ltd. Video display apparatus, video display system, and video display method
US9842572B2 (en) * 2014-12-31 2017-12-12 Texas Instruments Incorporated Methods and apparatus for displaying video including variable frame rates
US20160189685A1 (en) * 2014-12-31 2016-06-30 Texas Instruments Incorporated Methods and Apparatus for Displaying Video Including Variable Frame Rates
KR102305765B1 (en) * 2015-03-27 2021-09-28 삼성전자주식회사 Electronic device, and method for controlling display in the electronic device
US10062314B2 (en) * 2015-03-27 2018-08-28 Samsung Electronics Co., Ltd. Electronic device and method for controlling display in electronic device
US20160284264A1 (en) * 2015-03-27 2016-09-29 Samsung Electronics Co., Ltd. Electronic device and method for controlling display in electronic device
KR20160115476A (en) * 2015-03-27 2016-10-06 삼성전자주식회사 Electronic device, and method for controlling display in the electronic device
US20180366054A1 (en) * 2015-03-27 2018-12-20 Samsung Electronics Co., Ltd. Electronic device and method for controlling display in electronic device
US10810927B2 (en) * 2015-03-27 2020-10-20 Samsung Electronics Co., Ltd. Electronic device and method for controlling display in electronic device
US9728166B2 (en) * 2015-08-20 2017-08-08 Qualcomm Incorporated Refresh rate matching with predictive time-shift compensation
US10744407B2 (en) 2015-09-08 2020-08-18 Sony Interactive Entertainment LLC Dynamic network storage for cloud console server
US20170087464A1 (en) * 2015-09-30 2017-03-30 Sony Interactive Entertainment America Llc Multi-user demo streaming service for cloud gaming
US11752429B2 (en) * 2015-09-30 2023-09-12 Sony Interactive Entertainment LLC Multi-user demo streaming service for cloud gaming
US20210316214A1 (en) * 2015-09-30 2021-10-14 Sony Interactive Entertainment LLC Multi-user demo streaming service for cloud gaming
US11040281B2 (en) * 2015-09-30 2021-06-22 Sony Interactive Entertainment LLC Multi-user demo streaming service for cloud gaming
US20170142296A1 (en) * 2015-11-12 2017-05-18 Pfu Limited Video-processing apparatus and video-processing method
US10049642B2 (en) * 2016-12-21 2018-08-14 Intel Corporation Sending frames using adjustable vertical blanking intervals
US20180174551A1 (en) * 2016-12-21 2018-06-21 Intel Corporation Sending frames using adjustable vertical blanking intervals
US20180261144A1 (en) * 2017-03-13 2018-09-13 Novatek Microelectronics Corp. Display device, control circuit and associated control method
US10262576B2 (en) * 2017-03-13 2019-04-16 Novatek Microelectronics Corp. Display device, control circuit and associated control method
US20180268512A1 (en) * 2017-03-15 2018-09-20 Microsoft Technology Licensing, Llc Techniques for reducing perceptible delay in rendering graphics
US10679314B2 (en) * 2017-03-15 2020-06-09 Microsoft Technology Licensing, Llc Techniques for reducing perceptible delay in rendering graphics
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering
EP3547300A1 (en) * 2018-03-31 2019-10-02 INTEL Corporation Asynchronous single frame update for self-refreshing panels
US10559285B2 (en) 2018-03-31 2020-02-11 Intel Corporation Asynchronous single frame update for self-refreshing panels
US20200104973A1 (en) * 2018-09-28 2020-04-02 Qualcomm Incorporated Methods and apparatus for frame composition alignment
US11438562B2 (en) * 2019-04-26 2022-09-06 Canon Kabushiki Kaisha Display apparatus and control method thereof
US10847117B1 (en) * 2019-05-13 2020-11-24 Adobe Inc. Controlling an augmented reality display with transparency control using multiple sets of video buffers
CN112181633A (en) * 2019-07-03 2021-01-05 索尼互动娱乐有限责任公司 Asset aware computing architecture for graphics processing
TWI750676B (en) * 2019-07-03 2021-12-21 美商索尼互動娛樂有限責任公司 Asset aware computing architecture for graphics processing
US20210001220A1 (en) * 2019-07-03 2021-01-07 Sony Interactive Entertainment LLC Asset aware computing architecture for graphics processing
US10981059B2 (en) * 2019-07-03 2021-04-20 Sony Interactive Entertainment LLC Asset aware computing architecture for graphics processing
US20230016903A1 (en) * 2019-10-01 2023-01-19 Sony Interactive Entertainment Inc. Beginning scan-out process at flip-time for cloud gaming applications
US11458391B2 (en) 2019-10-01 2022-10-04 Sony Interactive Entertainment Inc. System and method for improving smoothness in cloud gaming applications
WO2021067317A3 (en) * 2019-10-01 2021-06-17 Sony Interactive Entertainment Inc. Synchronization and offset of vsync between cloud gaming server and client
US11235235B2 (en) 2019-10-01 2022-02-01 Sony Interactive Entertainment Inc. Synchronization and offset of VSYNC between gaming devices
JP7494293B2 (en) 2019-10-01 2024-06-03 株式会社ソニー・インタラクティブエンタテインメント Fast scanout of server display buffers for cloud gaming applications
US11344799B2 (en) 2019-10-01 2022-05-31 Sony Interactive Entertainment Inc. Scene change hint and client bandwidth used at encoder for handling video frames after a scene change in cloud gaming applications
US11020661B2 (en) 2019-10-01 2021-06-01 Sony Interactive Entertainment Inc. Reducing latency in cloud gaming applications by overlapping reception and decoding of video frames and their display
US11395963B2 (en) * 2019-10-01 2022-07-26 Sony Interactive Entertainment Inc. High speed scan-out of server display buffer for cloud gaming applications
US11865434B2 (en) 2019-10-01 2024-01-09 Sony Interactive Entertainment Inc. Reducing latency in cloud gaming applications by overlapping receive and decode of video frames and their display at the client
US11420118B2 (en) 2019-10-01 2022-08-23 Sony Interactive Entertainment Inc. Overlapping encode and transmit at the server
US10974142B1 (en) 2019-10-01 2021-04-13 Sony Interactive Entertainment Inc. Synchronization and offset of VSYNC between cloud gaming server and client
US11446572B2 (en) * 2019-10-01 2022-09-20 Sony Interactive Entertainment Inc. Early scan-out of server display buffer at flip-time for cloud gaming applications
US11110349B2 (en) 2019-10-01 2021-09-07 Sony Interactive Entertainment Inc. Dynamic client buffering and usage of received video frames for cloud gaming
WO2021067321A1 (en) * 2019-10-01 2021-04-08 Sony Interactive Entertainment Inc. High speed scan-out of server display buffer for cloud gaming applications
US20220355196A1 (en) * 2019-10-01 2022-11-10 Sony Interactive Entertainment Inc. Scan-out of server display buffer based on a frame rate setting for cloud gaming applications
US11524230B2 (en) 2019-10-01 2022-12-13 Sony Interactive Entertainment Inc. Encoder tuning to improve tradeoffs between latency and video quality in cloud gaming applications
US11539960B2 (en) 2019-10-01 2022-12-27 Sony Interactive Entertainment Inc. Game application providing scene change hint for encoding at a cloud gaming server
CN112995431A (en) * 2019-12-17 2021-06-18 瑞昱半导体股份有限公司 Display port to high-definition multimedia interface converter and signal conversion method
WO2021164004A1 (en) * 2020-02-21 2021-08-26 Qualcomm Incorporated Reduced display processing unit transfer time to compensate for delayed graphics processing unit render time
US20230073736A1 (en) * 2020-02-21 2023-03-09 Qualcomm Incorporated Reduced display processing unit transfer time to compensate for delayed graphics processing unit render time
EP3903896A1 (en) * 2020-04-30 2021-11-03 INTEL Corporation Cloud gaming adaptive synchronization mechanism
US11446571B2 (en) 2020-04-30 2022-09-20 Intel Corporation Cloud gaming adaptive synchronization mechanism
US11538136B2 (en) * 2020-10-28 2022-12-27 Qualcomm Incorporated System and method to process images of a video stream
US20220130015A1 (en) * 2020-10-28 2022-04-28 Qualcomm Incorporated System and method to process images of a video stream
US12067959B1 (en) * 2023-02-22 2024-08-20 Meta Platforms Technologies, Llc Partial rendering and tearing avoidance

Also Published As

Publication number Publication date
US10916215B2 (en) 2021-02-09
US20210158772A1 (en) 2021-05-27
US9984647B2 (en) 2018-05-29
US11404022B2 (en) 2022-08-02
CN108449566A (en) 2018-08-24
US11741916B2 (en) 2023-08-29
US10339891B2 (en) 2019-07-02
CN104917990B (en) 2018-04-13
US20190279592A1 (en) 2019-09-12
CN104917990A (en) 2015-09-16
US20220328018A1 (en) 2022-10-13
US20160247481A1 (en) 2016-08-25
US9332216B2 (en) 2016-05-03
CN108449566B (en) 2020-06-23
US20180277054A1 (en) 2018-09-27

Similar Documents

Publication Publication Date Title
US11404022B2 (en) Video frame rate compensation through adjustment of vertical blanking
US9497358B2 (en) Video latency reduction
US10049642B2 (en) Sending frames using adjustable vertical blanking intervals
US9786255B2 (en) Dynamic frame repetition in a variable refresh rate system
KR101467127B1 (en) Techniques to control display activity
WO2015195219A1 (en) Multiple display pipelines driving a divided display
JP2024038128A (en) System and method for driving display
CN105657485A (en) Audio/video playing equipment
GB2538797B (en) Managing display data
US9070198B2 (en) Methods and systems to reduce display artifacts when changing display clock rate
US20170329574A1 (en) Display controller
Yu et al. Design of 3D-TV Horizontal Parallax Obtaining System Based on FPGA
EP2315443A1 (en) Instant image processing system, method for processing instant image and image transferring device
JP2017122867A (en) Timing controller

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:COLENBRANDER, ROELOF RODERICK;REEL/FRAME:032925/0078

Effective date: 20140514

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA LLC;REEL/FRAME:038626/0637

Effective date: 20160331

Owner name: SONY INTERACTIVE ENTERTAINMENT AMERICA LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT AMERICA LLC;REEL/FRAME:038626/0637

Effective date: 20160331

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: SONY INTERACTIVE ENTERTAINMENT LLC, CALIFORNIA

Free format text: MERGER;ASSIGNOR:SONY INTERACTIVE ENTERTAINMENT AMERICA LLC;REEL/FRAME:053323/0567

Effective date: 20180315

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8