WO2023000484A1 - Frame rate stable output method and system, and intelligent terminal - Google Patents

Frame rate stable output method and system, and intelligent terminal

Info

Publication number
WO2023000484A1
WO2023000484A1 (application PCT/CN2021/119999)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
screen image
frame rate
stable output
video
Prior art date
Application number
PCT/CN2021/119999
Other languages
English (en)
French (fr)
Inventor
曲宝庆
林良松
胡超
寇光学
Original Assignee
惠州Tcl云创科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 惠州Tcl云创科技有限公司
Publication of WO2023000484A1 publication Critical patent/WO2023000484A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a scalable video layer
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/12: Picture reproducers
    • H04N9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • the present disclosure relates to the technical field of audio and video interaction, and in particular to a frame rate stable output method, system, intelligent terminal and computer-readable storage medium.
  • Android mobile phone screen projection projects the content displayed on the phone screen onto a large screen in real time, achieving a multi-screen interaction effect.
  • Android provides the VirtualDisplay virtual display module, which has many usage scenarios, such as screen recording and WFD display. Its function is to capture the content displayed on the screen; there are several ways to implement this capture, and the API provides ImageReader to read the content of a VirtualDisplay. This supports users' need to capture screen images, but VirtualDisplay provides no frame rate guarantee mechanism, and frame rate is an important factor in real-time audio and video interaction scenarios.
  • the main purpose of the present disclosure is to provide a frame rate stable output method, system, intelligent terminal and computer-readable storage medium, aiming to solve the problem of unstable frame rate in the Android mobile phone screen projection scene in the prior art.
  • in a first aspect, the present disclosure provides a frame rate stable output method, which includes the following steps:
  • the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer, and the screen image data is YUV data;
  • after the video encoding layer receives the screen image data, it compresses and encodes the screen image data and sends the compressed and encoded screen image data to the video sending layer;
  • the video sending layer sends the compressed and encoded screen image data to the remote end through the network module.
  • the frame rate stable output method wherein the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer, specifically including:
  • when new screen image data is detected, the video collection layer collects the new screen image data and sends it to the video coding layer;
  • when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer.
  • the timer triggers the data temporary storage queue to fetch the last frame of screen image data at preset intervals.
  • the callback period of the timer is a preset time/frame rate.
  • the preset time is 1000 ms
  • the frame rate is 60 frames per second.
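The two-branch collection logic claimed above can be sketched in plain Java (an illustrative, non-Android sketch; the class and method names are hypothetical and not taken from the patent):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the claimed acquisition branch: forward new
// frames directly to the coding layer, otherwise let the timer branch
// re-send the last buffered frame so output never drops to zero fps.
public class FrameSource {
    private final Deque<byte[]> stash = new ArrayDeque<>(); // data temporary storage queue

    // Returns the frame that should go to the video coding layer.
    // A null argument models "no new screen image data detected".
    public byte[] onScreenRefresh(byte[] newFrame) {
        if (newFrame != null) {          // new screen image data generated
            stash.clear();
            stash.addLast(newFrame);     // keep only the last frame
            return newFrame;             // send the new frame onward
        }
        return stash.peekLast();         // timer branch: repeat last frame
    }

    public static void main(String[] args) {
        FrameSource src = new FrameSource();
        byte[] a = {1, 2, 3};
        System.out.println(src.onScreenRefresh(a) == a);    // new frame forwarded
        System.out.println(src.onScreenRefresh(null) == a); // last frame repeated
    }
}
```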
  • a second aspect of the present disclosure provides a frame rate stable output method, the frame rate stable output method includes the following steps:
  • the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer;
  • after the video encoding layer receives the screen image data, it compresses and encodes the screen image data and sends the compressed and encoded screen image data to the video sending layer;
  • the video sending layer sends the compressed and encoded screen image data to the remote end through the network module.
  • the frame rate stable output method wherein the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer, also includes:
  • a data temporary storage queue for temporarily storing data and a timer for setting a trigger time are set in advance at the video acquisition layer.
  • the frame rate stable output method wherein the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer, specifically including:
  • when new screen image data is detected, the video collection layer collects the new screen image data and sends it to the video coding layer;
  • when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer.
  • the timer triggers the data temporary storage queue to fetch the last frame of screen image data at preset intervals.
  • the callback period of the timer is a preset time/frame rate.
  • the preset time is 1000 ms
  • the frame rate is 60 frames per second.
  • the screen image data is YUV data.
  • the frame rate stable output method wherein the network module is a WIFI module.
  • the third aspect of the present disclosure also provides a frame rate stable output system, wherein the frame rate stable output system includes: a video acquisition layer, a video encoding layer and a video sending layer;
  • the video collection layer is used to collect screen image data, and send the screen image data to the video coding layer;
  • the video encoding layer is configured to compress and encode the screen image data after receiving the screen image data, and send the compressed and encoded screen image data to the video sending layer;
  • the video sending layer is used to send the compressed and coded screen image data to the remote end through the network module.
  • the frame rate stable output system is characterized in that the video acquisition layer is provided with a data temporary storage queue for temporarily storing data and a timer for setting a trigger time.
  • the frame rate stable output system wherein, when the screen is refreshed, the video acquisition layer puts the last frame of screen image data into the data temporary storage queue and judges whether new screen image data is generated; when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer.
  • the frame rate stable output system wherein, the timer triggers the data temporary storage queue to take out the last frame of screen image data every preset time interval; the callback period of the timer is the preset time /frame rate; the preset time is 1000ms, and the frame rate is 60 frames.
  • the screen image data is YUV data.
  • the fourth aspect of the present disclosure also provides an intelligent terminal, wherein the intelligent terminal includes: a memory, a processor, and a frame rate stable output program stored in the memory and executable on the processor; when the frame rate stable output program is executed by the processor, the steps of the frame rate stable output method described above are realized.
  • the fifth aspect of the present disclosure also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a frame rate stable output program; when the frame rate stable output program is executed by a processor, the steps of the frame rate stable output method described above are realized.
  • the present disclosure collects screen image data through the video acquisition layer and sends it to the video coding layer; after receiving the screen image data, the video coding layer compresses and encodes it and sends the compressed and encoded screen image data to the video sending layer; the video sending layer sends the compressed and encoded screen image data to the remote end through the network module.
  • by setting a data temporary storage queue and a timer, this disclosure temporarily stores the last frame of data obtained from the virtual display module each time and judges whether new screen image data has been generated; if not, it starts the timer, which drives software to take the last frame of screen image data out of the data temporary storage queue to fill in the currently missing screen image data, so that stable output at any frame rate can be achieved.
  • FIG. 1 is a flowchart of a preferred embodiment of the frame rate stable output method of the present disclosure
  • Fig. 2 is a schematic flow chart of the entire implementation process in a preferred embodiment of the frame rate stable output method of the present disclosure
  • FIG. 3 is a schematic diagram of a preferred embodiment of the frame rate stable output system of the present disclosure.
  • FIG. 4 is a schematic diagram of an operating environment of a preferred embodiment of the smart terminal of the present disclosure.
  • Android mobile phone screen projection projects the screen content displayed on the mobile phone screen to the large screen in real time for display to achieve the effect of multi-screen interaction.
  • Android has opened up the VirtualDisplay module to support users' need to capture screen images, but VirtualDisplay provides no frame rate guarantee mechanism, and frame rate is an important factor in real-time audio and video interaction scenarios.
  • based on Android's screen refresh logic, if there is no content on the screen to refresh, no new image data is generated; that is, VirtualDisplay cannot get new screen content at that moment and the frame rate drops to 0, which is obviously not what developers and users want.
  • this disclosure provides a method in which a timer, driven according to the frame rate, takes the last frame of YUV data out of a temporary storage queue, so as to satisfy the user's need for stable output at any frame rate.
  • the frame rate stable output method includes the following steps:
  • Step S10 the video collection layer collects screen image data, and sends the screen image data to the video coding layer.
  • before the video collection layer collects screen image data and sends it to the video coding layer, the method also includes: setting in advance, at the video collection layer, a data temporary storage queue for temporarily storing data (a queue is a first-in, first-out tool for storing data: items are added at the tail and taken out from the head) and a timer (a device for timing) used to set the trigger time.
  • when the screen is refreshed, the last frame of screen image data is put into the data temporary storage queue (temporary storage), and it is judged whether new screen image data is generated; when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer.
  • the timer triggers the data temporary storage queue to take out the last frame of screen image data every preset time interval;
  • the callback cycle of the timer is the preset time divided by the frame rate; for example, with a preset time of 1000 ms and a frame rate of 60 fps, the timer fires about every 16.7 ms.
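As a quick arithmetic check of the callback period just described (illustrative only; the class and method names are hypothetical):

```java
// The timer's callback period is the preset time divided by the frame
// rate: with the disclosure's example values of 1000 ms and 60 fps the
// timer must fire roughly every 16.67 ms.
public class CallbackPeriod {
    static double periodMs(double presetTimeMs, double frameRate) {
        return presetTimeMs / frameRate;
    }

    public static void main(String[] args) {
        // rounded to two decimals to avoid long repeating fractions
        System.out.println(Math.round(periodMs(1000, 60) * 100) / 100.0); // 16.67
        System.out.println(Math.round(periodMs(1000, 30) * 100) / 100.0); // 33.33
    }
}
```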
  • the temporal sensitivity and resolution of human vision vary according to the type and characteristics of visual stimuli and vary between individuals.
  • the human visual system can process 10 to 12 images per second and perceive them individually, with higher rates perceived as motion.
  • the flicker fusion threshold can be much higher, hundreds of hertz.
  • regarding image recognition, it has been found that people can recognize a particular image in an uninterrupted series of different images, with each image lasting as little as 13 milliseconds.
  • Persistence of vision sometimes results in very short single-millisecond visual stimuli with perceived durations between 100 ms and 400 ms. Very short multiple stimuli are sometimes perceived as a single stimulus, for example a 10 ms green flash followed by a 10 ms red flash is perceived as a single yellow flash.
  • Frames per second (fps), or frame rate indicates how many times per second the graphics processor can update a field. A high frame rate results in smoother, more realistic animations. Generally speaking, 30fps is acceptable, but increasing the performance to 60fps can significantly improve the sense of interaction and realism, but generally speaking, it is not easy to notice a significant improvement in fluency if it exceeds 75fps.
  • newer video standards support 120, 240, or 300 frames per second, so frames can be evenly multiplied from common frame rates such as 24 fps film and 30 fps video, as well as 25 and 50 fps video in the case of a 300 fps display. These standards also support video natively at higher frame rates, as well as video with interpolated frames between its original frames. Some modern films are experimenting with frame rates higher than 24 fps, such as 48 and 60 fps.
  • the screen image data represents YUV data
  • YUV is a color coding method, which is often used in various video processing components.
  • YUV is a way of encoding a true-color color space.
  • Y'UV, YUV, YCbCr, YPbPr and other proper nouns can all be called YUV, and they overlap with each other.
  • Y stands for brightness (Luminance or Luma), that is, the gray scale value
  • "U” and "V” stand for chroma (Chrominance or Chroma), which are used to describe the color and saturation of the image, and are used to specify pixels s color.
  • the invention of Y'UV dates to the transition period between black-and-white and color television.
  • Black and white video only has Y (Luma, Luminance) video, which is the grayscale value.
  • UV is regarded as C (Chrominance or Chroma), representing chroma. If the C signal is ignored, the remaining Y (Luma) signal is the same as the earlier black-and-white TV signal, which solves the compatibility problem between color and black-and-white television.
  • the biggest advantage of Y'UV is that it takes up very little bandwidth; U and V are color-difference signals derived from the B and R channels relative to luma.
  • the UV signals tell the display to shift the color of a particular pixel away from a neutral gray without changing its brightness.
  • the higher the UV value the more saturated the color of the pixel.
  • Common color image recording formats include RGB, YUV, CMYK, etc.
  • RGB transmits the three primary colors simultaneously; this design requires three times the bandwidth of the original black-and-white signal, which was not a good design at the time.
  • RGB appeals to the human eye's perception of color, while YUV focuses on the sensitivity of vision to brightness.
  • Y represents brightness
  • UV represents chroma (so black-and-white film can omit UV).
  • YUV data is therefore usually recorded in a Y:UV format.
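To illustrate the Y/U/V split described above, here is a per-pixel full-range BT.601-style YUV-to-RGB conversion (one common variant; the patent does not specify a particular conversion matrix, so the constants below are an assumption for illustration):

```java
// One common full-range BT.601 YUV -> RGB conversion (values 0..255).
// U and V are offsets around 128: U = V = 128 gives a pure gray pixel
// whose brightness is Y alone, matching the black-and-white
// compatibility property described in the text.
public class YuvToRgb {
    static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }

    static int[] toRgb(int y, int u, int v) {
        double r = y + 1.402 * (v - 128);
        double g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
        double b = y + 1.772 * (u - 128);
        return new int[] { clamp(r), clamp(g), clamp(b) };
    }

    public static void main(String[] args) {
        int[] gray = toRgb(100, 128, 128); // chroma at neutral -> pure gray
        System.out.println(gray[0] + "," + gray[1] + "," + gray[2]); // 100,100,100
    }
}
```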
  • Step S20 after receiving the screen image data, the video encoding layer compresses and encodes the screen image data, and sends the compressed and encoded screen image data to the video sending layer.
  • image compression coding (data compression coding) means that the length of the encoded information is shorter than the original information.
  • one type of compression is reversible, that is, the original image can be completely restored from the compressed data with no loss of information; this is called lossless compression coding;
  • the other type of compression is irreversible, that is, the original image cannot be completely restored from the compressed data, and there is a certain loss of information, which is called lossy compression coding.
  • image coding transforms, codes, and compresses the image data to remove redundant data and reduce the amount of data required to represent a digital image, facilitating storage and transmission. In other words, the technique of expressing the original pixel matrix with a small amount of data, lossily or losslessly, is also called image coding.
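A toy example of the lossless case described above: run-length encoding, where the original bytes can be reconstructed exactly (illustrative only; the patent does not name a specific codec, and real video encoders are far more sophisticated):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy lossless compression: run-length encode a byte sequence as
// (count, value) pairs. Decoding restores the input exactly, which is
// the defining property of lossless (reversible) coding.
public class Rle {
    static List<int[]> encode(byte[] data) {
        List<int[]> runs = new ArrayList<>();
        for (int i = 0; i < data.length; ) {
            int j = i;
            while (j < data.length && data[j] == data[i]) j++;
            runs.add(new int[] { j - i, data[i] }); // (run length, value)
            i = j;
        }
        return runs;
    }

    static byte[] decode(List<int[]> runs) {
        int n = 0;
        for (int[] r : runs) n += r[0];
        byte[] out = new byte[n];
        int k = 0;
        for (int[] r : runs)
            for (int c = 0; c < r[0]; c++) out[k++] = (byte) r[1];
        return out;
    }

    public static void main(String[] args) {
        byte[] in = { 7, 7, 7, 7, 2, 2, 9 };
        List<int[]> runs = encode(in);
        System.out.println(runs.size());                 // 3 runs
        System.out.println(Arrays.equals(in, decode(runs))); // true: fully reversible
    }
}
```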
  • after the video coding layer receives the screen image data, it compresses and encodes the screen image data; compression encoding facilitates data transmission and improves transmission efficiency. The compressed and encoded screen image data is then sent to the video sending layer.
  • Step S30 the video sending layer sends the compressed and coded screen image data to the remote end through the network module.
  • the video sending layer sends the compressed and encoded screen image data to the remote end (for example, a smart TV) for screen projection and display through a network module (such as a Wi-Fi module).
  • a smart TV is based on Internet application technology, has an open operating system and chip, and offers an open application platform. It can realize two-way human-computer interaction, integrating audio-visual, entertainment, data, and other functions into one TV product to meet users' diverse and personalized needs; bringing a more convenient user experience has become the trend in television. With a fully open platform and an operating system, users can install and uninstall application software themselves while enjoying ordinary TV content, expanding and upgrading the TV's functions, and smart TVs can continue to bring users rich, personalized experiences different from cable digital TV receivers (set-top boxes).
  • the flow of the frame rate stable output method of the present disclosure is as follows:
  • the video acquisition layer puts the last frame of screen image data into the data temporary storage queue, and judges whether new screen image data is generated;
  • when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer;
  • when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer;
  • after the video encoding layer receives the screen image data, it compresses and encodes the screen image data and sends the compressed and encoded screen image data to the video sending layer;
  • the video sending layer sends the compressed and coded screen image data to the remote end through the network module
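The flow above can be sketched as a small deterministic pipeline in which a timer tick fills gaps by re-submitting the last captured frame (a simplified, non-Android sketch; the class names and the 60 fps figure follow the disclosure's example, everything else is illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of the three-layer flow: the "acquisition layer"
// stores the last frame; a timer fires every 1000/60 ms and, if no new
// frame arrived in that interval, re-submits the stored frame so the
// downstream "encoding" and "sending" layers see a steady frame rate.
public class StablePipeline {
    private final AtomicReference<byte[]> lastFrame = new AtomicReference<>();
    private volatile boolean freshFrame = false;
    private int framesOut = 0;

    // stand-in for the video coding layer + video sending layer
    private void encodeAndSend(byte[] frame) { framesOut++; }

    void onNewFrame(byte[] frame) {   // new screen image data generated
        lastFrame.set(frame);
        freshFrame = true;
        encodeAndSend(frame);
    }

    void onTimerTick() {              // timer callback, period = 1000 ms / 60
        if (!freshFrame && lastFrame.get() != null)
            encodeAndSend(lastFrame.get()); // fill the gap with the last frame
        freshFrame = false;
    }

    public static void main(String[] args) {
        StablePipeline p = new StablePipeline();
        p.onNewFrame(new byte[] { 1 });              // one real frame
        for (int i = 0; i < 5; i++) p.onTimerTick(); // screen idle for 5 ticks
        // 1 real frame + 4 repeated frames reached the downstream layers
        // (the first tick sees the fresh frame and does not re-send it)
        System.out.println(p.framesOut);
    }
}
```

In a real implementation the tick would come from a scheduled timer thread rather than a loop; the deterministic loop here just makes the fill-in behavior easy to see.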
  • a touch screen (Touch Panel) is an inductive liquid crystal display device that can receive input signals such as contact points. The tactile feedback system on the screen can drive various linking devices according to a pre-programmed program and can be used to replace mechanical button panels.
  • the video acquisition layer of the present disclosure is the data generation layer, which often needs to generate stable and continuous data, which is the current pain point solved by the present disclosure.
  • after the data is generated, it is sent to the video coding layer for data encoding and compression, and finally the video sending layer sends it out.
  • this disclosure designs a data temporary storage queue and a timer at the video acquisition layer.
  • when no new data is generated, the timer drives the program to cyclically take data out of the temporary storage queue (without deleting it) to fill the current frame rate shortfall, achieving the purpose of stably outputting video data.
  • the direct effect of this disclosure is a stable output frame rate: a lack of new data on the screen will not cause the acquisition node to deliver no data to the lower-level modules.
  • the present disclosure also provides a frame rate stable output system correspondingly, wherein the frame rate stable output system includes: a video acquisition layer 51, a video coding layer 52 and video sending layer 53;
  • the video collection layer 51 is used to collect screen image data, and send the screen image data to the video coding layer 52;
  • the video encoding layer 52 is configured to compress and encode the screen image data after receiving the screen image data, and send the compressed and encoded screen image data to the video sending layer 53;
  • the video sending layer 53 is used to send the compressed and coded screen image data to the far end through the network module.
  • a data temporary storage queue for temporarily storing data and a timer for setting a trigger time are set in advance at the video acquisition layer 51 .
  • the video acquisition layer 51 puts the last frame of screen image data into the data temporary storage queue, and judges whether new screen image data is generated;
  • when new screen image data is detected, the video collection layer 51 collects the new screen image data and sends it to the video coding layer 52;
  • when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer 52.
  • the timer triggers the data temporary storage queue to take out the last frame of screen image data every preset time interval; the callback cycle of the timer is preset time/frame rate; the preset time is 1000ms, The frame rate is 60 frames.
  • the screen image data is YUV data.
  • This disclosure designs a YUV memory temporary storage area (data temporary storage queue) and a timer.
  • the trigger interval of the timer is 1000 ms/(preset frame rate). The last frame of data obtained from VirtualDisplay is temporarily stored each time, and it is judged whether new YUV data exists; if not, the timer drives software to take the last frame of YUV data out of the temporary storage area to fill the currently missing YUV data. In this way, stable output at any frame rate can be achieved.
  • the video acquisition layer of the present disclosure is the data generation layer, which requires the generated data to be stable and continuous; this is the pain point addressed by the present disclosure.
  • after the data is generated, it is sent to the video encoding layer for data encoding and compression, and finally the encoded data is sent out by the video sending layer.
  • this disclosure designs a data temporary storage queue and a timer in the video acquisition layer. When no new data is generated, the timer drives the program to cyclically take data out of the temporary storage queue (without deleting it) to fill the current frame rate shortfall, achieving the purpose of stably outputting video data.
  • the present disclosure also provides a smart terminal correspondingly, and the smart terminal includes a processor 10 , a memory 20 and a display 30 .
  • Fig. 4 only shows some components of the smart terminal, but it should be understood that it is not required to implement all the components shown, and more or fewer components may be implemented instead.
  • the memory 20 may be an internal storage unit of the smart terminal in some embodiments, such as a hard disk or memory of the smart terminal.
  • the memory 20 may also be an external storage device of the smart terminal in other embodiments, such as a plug-in hard disk equipped on the smart terminal, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 20 may also include both an internal storage unit of the smart terminal and an external storage device.
  • the memory 20 is used to store application software and various data installed on the smart terminal, such as program codes installed on the smart terminal.
  • the memory 20 can also be used to temporarily store data that has been output or will be output.
  • a frame rate stable output program 40 is stored in the memory 20, and the frame rate stable output program 40 can be executed by the processor 10, so as to realize the frame rate stable output method in this application.
  • the processor 10 may in some embodiments be a central processing unit (CPU), a microprocessor, or another data processing chip, for running the program codes stored in the memory 20 or processing data, for example executing the frame rate stable output method.
  • the display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, and the like.
  • the display 30 is used for displaying information on the smart terminal and for displaying a visualized user interface.
  • the components 10-30 of the intelligent terminal communicate with each other through the system bus.
  • the video acquisition layer collects screen image data, and sends the screen image data to the video coding layer;
  • after the video encoding layer receives the screen image data, it compresses and encodes the screen image data and sends the compressed and encoded screen image data to the video sending layer;
  • the video sending layer sends the compressed and encoded screen image data to the remote end through the network module.
  • before the video acquisition layer collects screen image data and sends it to the video coding layer, the method also includes:
  • a data temporary storage queue for temporarily storing data and a timer for setting a trigger time are set in advance at the video acquisition layer.
  • the video collection layer collects screen image data, and sends the screen image data to the video coding layer, specifically including:
  • when new screen image data is detected, the video collection layer collects the new screen image data and sends it to the video coding layer;
  • when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video coding layer.
  • the timer triggers the data temporary storage queue to take out the last frame of screen image data every preset time interval.
  • the callback period of the timer is a preset time/frame rate.
  • the preset time is 1000ms
  • the frame rate is 60 frames per second.
  • the screen image data is YUV data.
  • the present disclosure also provides a computer-readable storage medium, wherein the computer-readable storage medium stores a frame rate stable output program; when the frame rate stable output program is executed by a processor, the steps of the frame rate stable output method described above are realized.
  • the present disclosure provides a frame rate stable output method and system and an intelligent terminal, the method comprising: the video acquisition layer collects screen image data and sends it to the video coding layer; after receiving the screen image data, the video coding layer compresses and encodes the screen image data and sends the compressed and encoded data to the video sending layer; the video sending layer sends the compressed and encoded screen image data to the remote end.
  • by setting a data temporary storage queue and a timer, this disclosure temporarily stores the last frame of data obtained from the virtual display module each time and judges whether new screen image data has been generated; if not, it starts the timer, which drives software to take the last frame of screen image data out of the data temporary storage queue to fill in the currently missing screen image data, so that stable output at any frame rate can be achieved.
  • the processes in the methods of the above embodiments can be implemented by computer programs instructing related hardware (such as processors and controllers), and the programs can be stored in a computer-readable storage medium; when executed, the program may include the processes of the above-mentioned method embodiments.
  • the computer-readable storage medium described herein may be a memory, a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A frame rate stable output method, system and intelligent terminal. The method comprises: a video acquisition layer collects screen image data and sends it to a video encoding layer; after receiving the screen image data, the video encoding layer compresses and encodes it and sends the compressed and encoded screen image data to a video sending layer; the video sending layer sends the compressed and encoded screen image data to a remote end through a network module. By designing a data temporary storage queue and a timer, the present disclosure temporarily stores the last frame of data obtained from the virtual display module each time and judges whether new screen image data has been produced; if not, the timer is started to drive the software to take the last frame of screen image data out of the temporary storage queue to fill in the currently missing screen image data, so that stable output at any frame rate can be achieved.

Description

A frame rate stable output method, system and intelligent terminal
Priority
This PCT application claims priority to the Chinese patent application No. 202110814888.2 filed on July 19, 2021, and incorporates the technical solution of that application.
Technical Field
The present disclosure relates to the technical field of audio and video interaction, and in particular to a frame rate stable output method, system, intelligent terminal and computer-readable storage medium.
Background
Android screen mirroring projects the content displayed on a phone screen onto a large screen in real time, achieving multi-screen interaction. Android currently provides the VirtualDisplay module (a virtual display module under Android; VirtualDisplay has many use cases, such as screen recording and WFD display, and its role is to capture what is shown on the screen; there are many ways to capture screen content with VirtualDisplay, and the API provides ImageReader for reading the content of a VirtualDisplay) to support users' need to capture screen images. However, VirtualDisplay provides no frame rate guarantee mechanism, while frame rate is an important factor in real-time audio and video interaction: the frame rate is the frequency (rate) at which bitmap images, measured in frames, appear consecutively on a display. For example, a developer may need to implement real-time frame rate statistics. Under Android's rendering logic, if the screen content is not refreshed, no new image data is produced; that is, VirtualDisplay obtains no new screen content and the frame rate drops to 0, which is clearly not what developers and users want.
Therefore, the prior art still needs to be improved and developed.
Summary
The main purpose of the present disclosure is to provide a frame rate stable output method, system, intelligent terminal and computer-readable storage medium, aiming to solve the problem of unstable frame rate in Android screen mirroring scenarios in the prior art.
To achieve the above purpose, in a first aspect the present disclosure provides a frame rate stable output method, comprising the following steps:
setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time;
the video acquisition layer collects screen image data and sends it to the video encoding layer, the screen image data being YUV data;
after receiving the screen image data, the video encoding layer compresses and encodes it, and sends the compressed and encoded screen image data to the video sending layer;
the video sending layer sends the compressed and encoded screen image data to the remote end through a network module.
Optionally, in the frame rate stable output method, the video acquisition layer collecting screen image data and sending it to the video encoding layer specifically comprises:
when the screen is refreshed, putting the last frame of screen image data into the data temporary storage queue, and judging whether new screen image data has been produced;
when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer;
when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
Optionally, in the frame rate stable output method, the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval.
Optionally, in the frame rate stable output method, the callback period of the timer is the preset time divided by the frame rate.
Optionally, in the frame rate stable output method, the preset time is 1000 ms and the frame rate is 60 frames.
In a second aspect, the present disclosure provides a frame rate stable output method, comprising the following steps:
the video acquisition layer collects screen image data and sends it to the video encoding layer;
after receiving the screen image data, the video encoding layer compresses and encodes it, and sends the compressed and encoded screen image data to the video sending layer;
the video sending layer sends the compressed and encoded screen image data to the remote end through a network module.
Optionally, in the frame rate stable output method, before the video acquisition layer collects screen image data and sends it to the video encoding layer, the method further comprises:
setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time.
Optionally, in the frame rate stable output method, the video acquisition layer collecting screen image data and sending it to the video encoding layer specifically comprises:
when the screen is refreshed, putting the last frame of screen image data into the data temporary storage queue, and judging whether new screen image data has been produced;
when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer;
when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
Optionally, in the frame rate stable output method, the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval.
Optionally, in the frame rate stable output method, the callback period of the timer is the preset time divided by the frame rate.
Optionally, in the frame rate stable output method, the preset time is 1000 ms and the frame rate is 60 frames.
Optionally, in the frame rate stable output method, the screen image data is YUV data.
Optionally, in the frame rate stable output method, the network module is a WIFI module.
In addition, to achieve the above purpose, in a third aspect the present disclosure provides a frame rate stable output system, comprising: a video acquisition layer, a video encoding layer and a video sending layer;
the video acquisition layer is configured to collect screen image data and send it to the video encoding layer;
the video encoding layer is configured to, after receiving the screen image data, compress and encode it and send the compressed and encoded screen image data to the video sending layer;
the video sending layer is configured to send the compressed and encoded screen image data to the remote end through a network module.
Optionally, in the frame rate stable output system, the video acquisition layer is provided with a data temporary storage queue for temporarily storing data and a timer for setting the trigger time.
Optionally, in the frame rate stable output system, when the screen is refreshed, the video acquisition layer puts the last frame of screen image data into the data temporary storage queue and judges whether new screen image data has been produced; when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
Optionally, in the frame rate stable output system, the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval; the callback period of the timer is the preset time divided by the frame rate; the preset time is 1000 ms and the frame rate is 60 frames.
Optionally, in the frame rate stable output system, the screen image data is YUV data.
In addition, to achieve the above purpose, in a fourth aspect the present disclosure provides an intelligent terminal, comprising: a memory, a processor and a frame rate stable output program stored in the memory and executable on the processor; when the frame rate stable output program is executed by the processor, the steps of the frame rate stable output method described above are implemented.
In addition, to achieve the above purpose, in a fifth aspect the present disclosure provides a computer-readable storage medium storing a frame rate stable output program; when the frame rate stable output program is executed by a processor, the steps of the frame rate stable output method described above are implemented.
In the present disclosure, the video acquisition layer collects screen image data and sends it to the video encoding layer; after receiving the screen image data, the video encoding layer compresses and encodes it and sends the compressed and encoded screen image data to the video sending layer; the video sending layer sends the compressed and encoded screen image data to the remote end through a network module. By setting up a data temporary storage queue and a timer, the present disclosure temporarily stores the last frame of data obtained from the virtual display module each time and judges whether new screen image data has been produced; if not, the timer is started to drive the software to take the last frame of screen image data out of the temporary storage queue to fill in the currently missing screen image data, so that stable output at any frame rate can be achieved.
Brief Description of the Drawings
FIG. 1 is a flowchart of a preferred embodiment of the frame rate stable output method of the present disclosure;
FIG. 2 is a schematic flowchart of the whole implementation process in a preferred embodiment of the frame rate stable output method of the present disclosure;
FIG. 3 is a schematic diagram of a preferred embodiment of the frame rate stable output system of the present disclosure;
FIG. 4 is a schematic diagram of the operating environment of a preferred embodiment of the intelligent terminal of the present disclosure.
Detailed Description
Android screen mirroring projects the content displayed on a phone screen onto a large screen in real time, achieving multi-screen interaction. Android currently provides the VirtualDisplay module to support users' need to capture screen images, but VirtualDisplay provides no frame rate guarantee mechanism, while frame rate is an important factor in real-time audio and video interaction: the frame rate is the frequency (rate) at which bitmap images, measured in frames, appear consecutively on a display. For example, a developer may need to implement real-time frame rate statistics. Under Android's rendering logic, if the screen content is not refreshed, no new image data is produced; that is, VirtualDisplay obtains no new screen content and the frame rate drops to 0, which is clearly not what developers and users want.
Therefore, to solve the problem of unstable video frame rate in Android screen mirroring scenarios, the present disclosure provides a method in which a timer drives the program, at the configured frame rate, to take the previous frame of YUV data out of a temporary storage queue, satisfying the user's need for stable output at any frame rate.
To make the purpose, technical solution and advantages of the present disclosure clearer and more explicit, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present disclosure, not to limit it.
As shown in FIG. 1 and FIG. 2, the frame rate stable output method of a preferred embodiment of the present disclosure comprises the following steps:
Step S10: the video acquisition layer collects screen image data and sends it to the video encoding layer.
Specifically, before the video acquisition layer collects screen image data and sends it to the video encoding layer, the method further comprises: setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data (a queue is a structure for holding data: items are stored from the bottom, each new item is appended to the tail, and items are taken from the head, with access moving up one position on each insertion or removal) and a timer for setting the trigger time (an electronic device used for timing).
As shown in FIG. 2, when the screen (for example, a smartphone screen) is refreshed (the refresh rate being the speed at which the screen refreshes), the last frame of screen image data is put into the data temporary storage queue (temporary storage), and it is judged whether new screen image data has been produced; when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
The timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval, and the callback period of the timer is the preset time divided by the frame rate. For example, if the preset time is 1000 ms and the frame rate is 60 frames (the frame rate being the frequency (rate) at which bitmap images, measured in frames, appear consecutively on a display), then the callback period of the timer = preset time / frame rate = 1000 ms / 60 frames ≈ 16.7 ms per frame.
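The callback-period arithmetic above can be checked with a short sketch; the function and variable names are illustrative, not from the disclosure:

```python
# Callback period of the fill-frame timer: the preset time divided by the
# target frame rate.  With the disclosure's example values (1000 ms, 60 fps)
# this gives roughly 16.7 ms between timer callbacks.

def callback_period_ms(preset_time_ms: float, frame_rate: float) -> float:
    """Return the timer callback period in milliseconds."""
    return preset_time_ms / frame_rate

period = callback_period_ms(1000, 60)
print(round(period, 1))  # ~16.7 ms per frame at 60 fps
```

Lowering the target frame rate lengthens the period accordingly, e.g. 1000 ms / 50 fps = 20 ms.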
The temporal sensitivity and resolution of human vision vary with the type and characteristics of the visual stimulus and differ between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion. Most study participants perceive modulated light (such as a computer display) as stable when the rate is above 50 Hz to 90 Hz; this perception of modulated light as stable is known as the flicker fusion threshold. However, when the modulated light is non-uniform and contains an image, the flicker fusion threshold can be much higher, in the hundreds of hertz. As for image recognition, people have been found to recognize a specific image in an unbroken series of different images, each shown for as little as 13 milliseconds. Persistence of vision sometimes causes very short single-millisecond visual stimuli to have a perceived duration of between 100 ms and 400 ms, and multiple very short stimuli are sometimes perceived as a single stimulus: a 10 ms green flash immediately followed by a 10 ms red flash, for example, is perceived as a single yellow flash. Frames per second (fps), or frame rate, is the number of times per second a graphics processor can update the display. A high frame rate produces smoother, more realistic animation. In general, 30 fps is acceptable, and raising performance to 60 fps noticeably improves interactivity and realism, but above roughly 75 fps further gains in smoothness are generally hard to perceive. A frame rate above the screen refresh rate only wastes graphics processing power, since the monitor cannot update that fast, so frames beyond the refresh rate are discarded. Modern video formats use a variety of frame rates. Because of the mains power frequency, analog television broadcasts at 50 Hz or 60 Hz, sometimes interlacing the video so that more motion information can be sent in the same broadcast bandwidth, and sometimes carrying 25 or 30 fps video with each frame doubled. Film is almost universally shot at 24 frames per second and cannot be displayed at its native rate on such systems; this requires pulldown conversion, which often causes "judder": converting 24 fps to 60 fps doubles every odd frame and triples every even frame, producing uneven motion. Other conversions involve similarly uneven frame doubling. Newer video standards support 120, 240 or 300 frames per second, so frames can be multiplied evenly into common frame rates such as 24 fps film and 30 fps video, as well as 25 and 50 fps video in the case of a 300 fps display. These standards also support video natively shot at higher frame rates, and video with interpolated frames between its native frames. Some modern films are experimenting with frame rates above 24 fps, such as 48 and 60 fps.
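The uneven 24-to-60 fps pulldown described above can be illustrated in code. This is a generic sketch of the alternating 2x/3x repetition pattern, not part of the disclosed method:

```python
# Convert a 24 fps frame sequence to 60 fps by alternately repeating frames
# twice and three times: every 2 source frames become 5 output frames, so
# 24 input frames yield exactly 60 output frames.  The repetition is uneven,
# which viewers perceive as judder.
from itertools import cycle

def pulldown_24_to_60(frames):
    repeats = cycle([2, 3])          # alternate doubling and tripling
    out = []
    for frame in frames:
        out.extend([frame] * next(repeats))
    return out

source = list(range(24))             # one second of 24 fps film
converted = pulldown_24_to_60(source)
print(len(converted))                # 60 frames: one second at 60 fps
```

The 300 fps case mentioned above avoids this unevenness because 300 is an integer multiple of 24, 25, 30, 50 and 60, so every source frame can be repeated the same number of times.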
The screen image data is YUV data. YUV is a color encoding method commonly used in video processing components; when encoding photos or video, YUV takes human perception into account and allows the chrominance bandwidth to be reduced. YUV denotes a family of true-color color spaces; the terms Y'UV, YUV, YCbCr, YPbPr and so on are all loosely referred to as YUV and overlap with one another. "Y" stands for luminance (or luma), that is, the grayscale value, while "U" and "V" stand for chrominance (or chroma), which describe the color and saturation of the image and specify the color of a pixel.
Y'UV was invented during the transition period from black-and-white to color television. Black-and-white video contains only the Y (luma, luminance) signal, that is, the grayscale value. When the color television standards were defined, color images were handled in the YUV/YIQ format, treating UV as the chrominance signal C; if the C signal is ignored, the remaining Y (luma) signal is the same as the earlier black-and-white television signal, which solved the compatibility problem between color and black-and-white television sets. The greatest advantage of Y'UV is that it occupies very little bandwidth. Because U and V represent color-difference signals, the chrominance UV is derived directly from the R and B signals: the UV signals tell the television how to shift the color of a pixel without changing its brightness, or equivalently tell the display to offset a color relative to a brightness reference. The higher the UV values, the more saturated the pixel's color. Common formats for recording color images include RGB, YUV and CMYK. The earliest idea for color television was to transmit the three RGB primaries simultaneously, but that design requires three times the original black-and-white bandwidth and was not practical at the time. RGB appeals to the eye's response to color, whereas YUV emphasizes the eye's sensitivity to brightness: Y represents luminance and UV represents chrominance (so black-and-white film can omit UV, approximating RGB), expressed by Cr and Cb respectively; YUV recordings are therefore usually presented in the Y:UV format.
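One common formulation of the RGB-to-YUV relationship is the BT.601 conversion; the coefficients below are standard textbook approximations, given here only to illustrate how Y carries brightness while U and V carry color differences (they are not taken from the disclosure):

```python
# Convert an RGB pixel to YUV (BT.601 full-range approximation).
# Y carries luminance (the grayscale value); U and V carry chrominance.
# A gray pixel (R == G == B) has U and V near zero, which is why dropping
# U/V degrades a color image to a compatible grayscale one.

def rgb_to_yuv(r: float, g: float, b: float):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # weighted luminance
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return y, u, v

y, u, v = rgb_to_yuv(128, 128, 128)         # mid-gray pixel
print(round(y), round(u), round(v))         # 128 0 0
```

Ignoring U and V here reproduces exactly the black-and-white compatibility argument made in the text: the Y channel alone is a valid grayscale image.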
Step S20: after receiving the screen image data, the video encoding layer compresses and encodes it, and sends the compressed and encoded screen image data to the video sending layer.
Image compression coding (data compression coding means that the encoded information is shorter than the original information) can be divided into two categories. One category is reversible: the original image can be fully recovered from the compressed data with no loss of information, which is called lossless compression coding. The other category is irreversible: the original image cannot be fully recovered from the compressed data and some information is lost, which is called lossy compression coding. Subject to a certain fidelity requirement, image data is transformed, encoded and compressed to remove redundant data and reduce the amount of data needed to represent the digital image, making it easier to store and transmit. In other words, it is a technique for representing the original pixel matrix, with or without loss, using a smaller amount of data, and is also called image coding.
Specifically, after receiving the screen image data, the video encoding layer compresses and encodes it; compression and encoding facilitate data transmission and improve transmission efficiency. The compressed and encoded screen image data is then sent to the video sending layer.
Step S30: the video sending layer sends the compressed and encoded screen image data to the remote end through a network module.
Specifically, the video sending layer sends the compressed and encoded screen image data through a network module (for example, a WIFI module) to the remote end (for example, to a smart TV for mirrored display; a smart TV is a television product based on Internet application technology that has an open operating system and chip and an open application platform, supports two-way human-machine interaction, and integrates multiple functions such as audio-visual content, entertainment and data, aiming to satisfy users' diverse and personalized needs and bring them a more convenient experience; it has become the current trend in television. A smart TV has a fully open platform with an operating system, on which users can install and uninstall various application programs while watching ordinary television content, continuously expanding and upgrading its functions, and can keep offering users a rich personalized experience different from that of a cable digital television receiver (set-top box)).
Further, as shown in FIG. 2, the flow of the frame rate stable output method of the present disclosure is as follows:
(1) the flow starts;
(2) when the screen is refreshed, the video acquisition layer puts the last frame of screen image data into the data temporary storage queue and judges whether new screen image data has been produced;
(3) when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer;
(4) when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer;
(5) after receiving the screen image data, the video encoding layer compresses and encodes it, and sends the compressed and encoded screen image data to the video sending layer;
(6) the video sending layer sends the compressed and encoded screen image data to the remote end through a network module;
(7) the flow ends.
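Assuming a simplified single-slot queue and a tick-driven timer (all names below are hypothetical), the fill-in behavior of steps (2) to (4) can be simulated as follows:

```python
# Simulate the capture layer: on each timer tick, forward a new frame if the
# screen produced one, otherwise re-send the last frame held in the temporary
# storage queue -- so the encoder receives one frame per tick even when the
# screen content is static.

def run_capture(screen_frames, ticks):
    """screen_frames: dict mapping tick -> new frame (absent = no refresh)."""
    last_frame = None        # a one-deep "data temporary storage queue"
    sent = []
    for tick in range(ticks):
        if tick in screen_frames:          # new screen image data produced
            last_frame = screen_frames[tick]
        if last_frame is not None:         # timer-driven fill: reuse, not delete
            sent.append(last_frame)        # hand off to the encoding layer
    return sent

# Screen refreshes only at ticks 0 and 3; output still has one frame per tick.
out = run_capture({0: "A", 3: "B"}, ticks=6)
print(out)  # ['A', 'A', 'A', 'B', 'B', 'B']
```

Without the fill step, ticks 1, 2, 4 and 5 would emit nothing and the measured frame rate at the receiver would collapse, which is exactly the failure mode the disclosure describes.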
Before the present disclosure, if the smartphone screen content was not refreshed, for example when the screen showed a static picture and the user did not touch the touchscreen (a touchscreen, or touch panel, is an inductive liquid crystal display device that can receive input signals such as touch: when a graphical button on the screen is touched, the tactile feedback system drives various connected devices according to pre-programmed logic; it can replace mechanical button panels and produce vivid audio-visual effects via the liquid crystal display; as one of the newest computer input devices, the touchscreen is a simple, convenient and natural form of human-computer interaction that gives multimedia a new look and is mainly used in public information inquiry, industrial control, military command, electronic games, multimedia teaching and so on), then under Android's rendering logic no new data was produced, the acquisition module stopped feeding data downstream, and the frame rate at the receiving end became 0. The method of the present disclosure fully guarantees stable frame rate output.
The video acquisition layer of the present disclosure is the layer where data is produced, and it needs to produce data stably and continuously, which is precisely the pain point the present disclosure solves. After the data is produced, it is sent to the video encoding layer for encoding and compression, and the video sending layer finally sends the encoded data out. The present disclosure designs a data temporary storage queue and a timer in the video acquisition layer: when no new data is produced, the timer drives the program to take data out of the temporary storage queue cyclically (without deleting it) to fill the current frame rate gap, achieving stable output of video data. The direct effect of the present disclosure is stable frame rate output: the acquisition node no longer stops feeding data to downstream modules just because the screen produces no data.
Further, as shown in FIG. 3, based on the above frame rate stable output method, the present disclosure correspondingly provides a frame rate stable output system, comprising: a video acquisition layer 51, a video encoding layer 52 and a video sending layer 53;
the video acquisition layer 51 is configured to collect screen image data and send it to the video encoding layer 52;
the video encoding layer 52 is configured to, after receiving the screen image data, compress and encode it and send the compressed and encoded screen image data to the video sending layer 53;
the video sending layer 53 is configured to send the compressed and encoded screen image data to the remote end through a network module.
Specifically, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time are set up in advance in the video acquisition layer 51.
Specifically, when the screen is refreshed, the video acquisition layer 51 puts the last frame of screen image data into the data temporary storage queue and judges whether new screen image data has been produced; when new screen image data is detected, the video acquisition layer 51 collects the new screen image data and sends it to the video encoding layer 52; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer 52.
Specifically, the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval; the callback period of the timer is the preset time divided by the frame rate; the preset time is 1000 ms and the frame rate is 60 frames.
Specifically, the screen image data is YUV data.
The present disclosure designs a YUV memory staging area (the data temporary storage queue) and a timer whose trigger interval is 1000 ms / (preset frame rate). The last frame of data obtained from the VirtualDisplay each time is temporarily stored, and it is judged whether new YUV data has been produced; if not, the timer is started to drive the software to take the last frame of YUV data out of the staging area to fill in the currently missing YUV data. In this way, stable output at any frame rate can be achieved.
From top to bottom, FIG. 2 and FIG. 3 show the video acquisition layer, the video encoding layer and the video sending layer. The video acquisition layer of the present disclosure is the layer where data is produced, and it needs to produce data stably and continuously, which is precisely the pain point the present disclosure solves. After the data is produced, it is sent to the video encoding layer for encoding and compression, and the video sending layer finally sends the encoded data out. The present disclosure designs a data temporary storage queue and a timer in the video acquisition layer: when no new data is produced, the timer drives the program to take data out of the temporary storage queue cyclically (without deleting it) to fill the current frame rate gap, achieving stable output of video data.
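Taking data out "cyclically, without deleting it" corresponds to peek rather than pop semantics. A minimal sketch follows; the class name `FrameStash` is an invented illustration, not from the disclosure:

```python
from collections import deque

# A one-frame staging queue: put() replaces the stored frame, peek() returns
# it without removing it, so the timer can re-read the same frame as many
# times as needed until the screen produces a new one.
class FrameStash:
    def __init__(self):
        self._q = deque(maxlen=1)   # keep only the most recent frame

    def put(self, frame):
        self._q.append(frame)       # the older frame is evicted automatically

    def peek(self):
        return self._q[-1] if self._q else None  # read, do not delete

stash = FrameStash()
stash.put("frame-1")
stash.put("frame-2")                # replaces frame-1
print(stash.peek(), stash.peek())   # frame-2 frame-2  (still available)
```

A plain pop-based queue would hand the frame out once and then be empty, so every subsequent timer tick would have nothing to send; peek semantics are what make repeated fill-in possible.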
Further, as shown in FIG. 4, based on the above frame rate stable output method and system, the present disclosure correspondingly provides an intelligent terminal comprising a processor 10, a memory 20 and a display 30. FIG. 4 shows only some of the components of the intelligent terminal; it should be understood that not all the shown components need to be implemented, and more or fewer components may be implemented instead.
In some embodiments, the memory 20 may be an internal storage unit of the intelligent terminal, such as its hard disk or internal memory. In other embodiments, the memory 20 may also be an external storage device of the intelligent terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the intelligent terminal. Further, the memory 20 may include both the internal storage unit and an external storage device of the intelligent terminal. The memory 20 is used to store application software installed on the intelligent terminal and various kinds of data, such as the program code installed on the intelligent terminal, and may also be used to temporarily store data that has been or is about to be output. In one embodiment, the memory 20 stores a frame rate stable output program 40, which can be executed by the processor 10 to implement the frame rate stable output method of the present application.
In some embodiments, the processor 10 may be a central processing unit (CPU), microprocessor or other data processing chip, used to run the program code stored in the memory 20 or to process data, for example to execute the frame rate stable output method.
In some embodiments, the display 30 may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display or the like. The display 30 is used to display information on the intelligent terminal and to present a visual user interface. The components 10-30 of the intelligent terminal communicate with one another through a system bus.
In one embodiment, when the processor 10 executes the frame rate stable output program 40 in the memory 20, the following steps are implemented:
the video acquisition layer collects screen image data and sends it to the video encoding layer;
after receiving the screen image data, the video encoding layer compresses and encodes it, and sends the compressed and encoded screen image data to the video sending layer;
the video sending layer sends the compressed and encoded screen image data to the remote end through a network module.
Before the video acquisition layer collects screen image data and sends it to the video encoding layer, the method further comprises:
setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time.
The video acquisition layer collecting screen image data and sending it to the video encoding layer specifically comprises:
when the screen is refreshed, putting the last frame of screen image data into the data temporary storage queue, and judging whether new screen image data has been produced;
when new screen image data is detected, the video acquisition layer collects the new screen image data and sends it to the video encoding layer;
when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
The timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval.
The callback period of the timer is the preset time divided by the frame rate.
The preset time is 1000 ms and the frame rate is 60 frames.
The screen image data is YUV data.
The present disclosure also provides a computer-readable storage medium storing a frame rate stable output program; when the frame rate stable output program is executed by a processor, the steps of the frame rate stable output method described above are implemented.
In summary, the present disclosure provides a frame rate stable output method, system and intelligent terminal. The method comprises: the video acquisition layer collects screen image data and sends it to the video encoding layer; after receiving the screen image data, the video encoding layer compresses and encodes it and sends the compressed and encoded screen image data to the video sending layer; the video sending layer sends the compressed and encoded screen image data to the remote end through a network module. By setting up a data temporary storage queue and a timer, the present disclosure temporarily stores the last frame of data obtained from the virtual display module each time and judges whether new screen image data has been produced; if not, the timer is started to drive the software to take the last frame of screen image data out of the temporary storage queue to fill in the currently missing screen image data, so that stable output at any frame rate can be achieved.
It should be noted that, in this document, the terms "comprise", "include" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device comprising that element.
Of course, those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by computer programs instructing the related hardware (such as processors or controllers); the programs can be stored in a computer-readable storage medium, and when executed may include the processes of the above method embodiments. The computer-readable storage medium may be a memory, a magnetic disk, an optical disc, and the like.
It should be understood that the application of the present disclosure is not limited to the above examples; those of ordinary skill in the art can make improvements or changes based on the above description, and all such improvements and changes shall fall within the protection scope of the appended claims of the present disclosure.

Claims (20)

  1. A frame rate stable output method, characterized in that the frame rate stable output method comprises:
    setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time;
    the video acquisition layer collecting screen image data and sending the screen image data to a video encoding layer, the screen image data being YUV data;
    the video encoding layer, after receiving the screen image data, compressing and encoding the screen image data and sending the compressed and encoded screen image data to a video sending layer;
    the video sending layer sending the compressed and encoded screen image data to a remote end through a network module.
  2. The frame rate stable output method according to claim 1, characterized in that the video acquisition layer collecting screen image data and sending the screen image data to the video encoding layer specifically comprises:
    when the screen is refreshed, putting the last frame of screen image data into the data temporary storage queue, and judging whether new screen image data has been produced;
    when new screen image data is detected, the video acquisition layer collecting the new screen image data and sending the collected new screen image data to the video encoding layer;
    when no new screen image data is detected, the timer driving the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
  3. The frame rate stable output method according to claim 2, characterized in that the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval.
  4. The frame rate stable output method according to claim 2, characterized in that the callback period of the timer is the preset time divided by the frame rate.
  5. The frame rate stable output method according to claim 4, characterized in that the preset time is 1000 ms and the frame rate is 60 frames.
  6. A frame rate stable output method, characterized in that the frame rate stable output method comprises:
    the video acquisition layer collecting screen image data and sending the screen image data to a video encoding layer;
    the video encoding layer, after receiving the screen image data, compressing and encoding the screen image data and sending the compressed and encoded screen image data to a video sending layer;
    the video sending layer sending the compressed and encoded screen image data to a remote end through a network module.
  7. The frame rate stable output method according to claim 6, characterized in that, before the video acquisition layer collects screen image data and sends the screen image data to the video encoding layer, the method further comprises:
    setting up in advance, in the video acquisition layer, a data temporary storage queue for temporarily storing data and a timer for setting the trigger time.
  8. The frame rate stable output method according to claim 7, characterized in that the video acquisition layer collecting screen image data and sending the screen image data to the video encoding layer specifically comprises:
    when the screen is refreshed, putting the last frame of screen image data into the data temporary storage queue, and judging whether new screen image data has been produced;
    when new screen image data is detected, the video acquisition layer collecting the new screen image data and sending the collected new screen image data to the video encoding layer;
    when no new screen image data is detected, the timer driving the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
  9. The frame rate stable output method according to claim 8, characterized in that the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval.
  10. The frame rate stable output method according to claim 8, characterized in that the callback period of the timer is the preset time divided by the frame rate.
  11. The frame rate stable output method according to claim 10, characterized in that the preset time is 1000 ms and the frame rate is 60 frames.
  12. The frame rate stable output method according to claim 6, characterized in that the screen image data is YUV data.
  13. The frame rate stable output method according to claim 6, characterized in that the network module is a WIFI module.
  14. A frame rate stable output system, characterized in that the frame rate stable output system comprises: a video acquisition layer, a video encoding layer and a video sending layer;
    the video acquisition layer is configured to collect screen image data and send the screen image data to the video encoding layer;
    the video encoding layer is configured to, after receiving the screen image data, compress and encode the screen image data and send the compressed and encoded screen image data to the video sending layer;
    the video sending layer is configured to send the compressed and encoded screen image data to a remote end through a network module.
  15. The frame rate stable output system according to claim 14, characterized in that the video acquisition layer is provided with a data temporary storage queue for temporarily storing data and a timer for setting the trigger time.
  16. The frame rate stable output system according to claim 15, characterized in that, when the screen is refreshed, the video acquisition layer puts the last frame of screen image data into the data temporary storage queue and judges whether new screen image data has been produced; when new screen image data is detected, the video acquisition layer collects the new screen image data and sends the collected new screen image data to the video encoding layer; when no new screen image data is detected, the timer drives the data temporary storage queue to take out the last frame of screen image data and send it to the video encoding layer.
  17. The frame rate stable output system according to claim 16, characterized in that the timer triggers the data temporary storage queue to take out the last frame of screen image data at every preset time interval; the callback period of the timer is the preset time divided by the frame rate; the preset time is 1000 ms and the frame rate is 60 frames.
  18. The frame rate stable output system according to claim 17, characterized in that the screen image data is YUV data.
  19. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory, a processor, and a frame rate stable output program stored in the memory and executable on the processor; when the frame rate stable output program is executed by the processor, the steps of the frame rate stable output method according to any one of claims 6-12 are implemented.
  20. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a frame rate stable output program; when the frame rate stable output program is executed by a processor, the steps of the frame rate stable output method according to any one of claims 6-12 are implemented.
PCT/CN2021/119999 2021-07-19 2021-09-23 A frame rate stable output method, system and intelligent terminal WO2023000484A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110814888.2A 2021-07-19 2021-07-19 A frame rate stable output method, system and intelligent terminal
CN202110814888.2 2021-07-19

Publications (1)

Publication Number Publication Date
WO2023000484A1 true WO2023000484A1 (zh) 2023-01-26

Family

ID=78477501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119999 WO2023000484A1 (zh) 2021-07-19 2021-09-23 A frame rate stable output method, system and intelligent terminal

Country Status (2)

Country Link
CN (1) CN113660494A (zh)
WO (1) WO2023000484A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100104021A1 (en) * 2008-10-27 2010-04-29 Advanced Micro Devices, Inc. Remote Transmission and Display of Video Data Using Standard H.264-Based Video Codecs
CN106296566A (zh) * 2016-08-12 2017-01-04 南京睿悦信息技术有限公司 一种虚拟现实移动端动态时间帧补偿渲染系统及方法
CN108810281A (zh) * 2018-06-22 2018-11-13 Oppo广东移动通信有限公司 丢帧补偿方法、装置、存储介质及终端
CN109218731A (zh) * 2017-06-30 2019-01-15 腾讯科技(深圳)有限公司 移动设备的投屏方法、装置及系统

Also Published As

Publication number Publication date
CN113660494A (zh) 2021-11-16

Similar Documents

Publication Publication Date Title
US10511803B2 (en) Video signal transmission method and device
CN103841389B (zh) 一种视频播放方法及播放器
CN105892976B (zh) 实现多屏互动的方法及装置
US9292901B2 (en) Handheld device and method for displaying synchronously with TV set
KR100685438B1 (ko) 디스플레이 장치 내에서 출력 이미지 데이터를 캡쳐, 저장 및 재생하는 장치 및 방법
CN104917990A (zh) 通过调整垂直消隐进行视频帧速率补偿
US8610763B2 (en) Display controller, display control method, program, output device, and transmitter
WO2017185761A1 (zh) 2d视频播放方法及装置
CN108260011B (zh) 在显示设备上实现写画的方法和系统
CA2950642C (en) Minimizing input lag in a remote gui tv application
US20100253850A1 (en) Video presentation system
CN112783380A (zh) 显示设备和方法
CN103037169A (zh) 嵌入式硬盘录像机的画面拼接合成的方法
CN107580228B (zh) 一种监控视频处理方法、装置及设备
CN111447488B (zh) 一种memc控制方法及显示设备
WO2023000484A1 (zh) 一种帧率稳定输出方法、系统及智能终端
US20110018979A1 (en) Display controller, display control method, program, output device, and transmitter
CN111064982B (zh) 一种显示控制方法、存储介质及显示设备
CN114710707A (zh) 显示设备及视频缩略图获取方法
CN102025936A (zh) 视频宽高比转换方法及装置
CN108965764B (zh) 图像处理方法及电子设备
US20020113814A1 (en) Method and device for video scene composition
CN115174991B (zh) 一种显示设备及视频播放方法
US20130097636A1 (en) Method and apparatus for displaying pictures during iptv channel switching
US20170208286A1 (en) Television system and multimedia playing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21950722

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE