WO2020019139A1 - Uniform video display method, terminal device, and machine-readable storage medium - Google Patents

Uniform video display method, terminal device, and machine-readable storage medium

Info

Publication number
WO2020019139A1
WO2020019139A1 · PCT/CN2018/096708 · CN2018096708W
Authority
WO
WIPO (PCT)
Prior art keywords
video
display
control signal
video frame
queue
Prior art date
Application number
PCT/CN2018/096708
Other languages
English (en)
French (fr)
Inventor
陈欣
刘细华
Original Assignee
深圳市大疆创新科技有限公司
大疆互娱科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 and 大疆互娱科技(北京)有限公司
Priority to PCT/CN2018/096708 priority Critical patent/WO2020019139A1/zh
Priority to CN201880039235.8A priority patent/CN110771160A/zh
Publication of WO2020019139A1 publication Critical patent/WO2020019139A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present invention relates to the technical field of video processing, and in particular, to a method for uniformly displaying a video, a terminal device, and a machine-readable storage medium.
  • At present, for real-time video streams, for example in surveillance scenarios, the sender needs to transmit the captured video to the receiver so that the video stream can be displayed at the receiver as soon as possible. Because of data jitter during capture, encoding, and transmission at the sending end, the video bitstream received by the receiving end carries a greater or smaller delay. If the receiving end displays the video stream immediately after receiving it, the display time of each image frame becomes uneven, so the motion of moving objects in the video appears incoherent, that is, the video stutters, which affects the user's viewing.
  • the invention provides a video uniform display method, a terminal device, and a machine-readable storage medium.
  • According to a first aspect, a method for uniformly displaying a video is provided, including: obtaining a video frame to be displayed; rendering the video frame to be displayed when a first control signal is received; and exchanging the rendered video frame to the display of the terminal device when a second control signal is received, where a set time interval separates the first control signal and the second control signal.
  • According to a second aspect, a terminal device is provided, including a communication bus, a memory, and a processor; the memory stores several computer instructions and buffers a video code stream from the communication bus as well as the video frames converted from the video code stream; the processor is connected to the memory through the communication bus and reads the computer instructions from the memory to implement: obtaining a video frame to be displayed; rendering the video frame to be displayed when a first control signal is received; and exchanging the rendered video frame to the display of the terminal device when a second control signal is received, where a set time interval separates the first control signal and the second control signal.
  • a machine-readable storage medium stores a plurality of computer instructions, and when the computer instructions are executed, the steps of the method according to the first aspect are implemented.
  • the display video frame is divided into two processes of rendering and display, and then the rendering operation and the video frame exchange operation are controlled to be performed according to the first control signal and the second control signal, respectively.
  • each video frame uses its own second control signal as a starting time for data exchange, that is, the time when the video frame starts to be displayed is the same.
  • each video frame is displayed between the respective second control signal (the second control signal of the current frame) and the first control signal of the subsequent frame, that is, the display duration is a set time.
  • the refresh time and display duration of each video frame are the same, which can ensure more uniform and delicate video display.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a method for uniformly displaying a video according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of decoding a video frame according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a process of discarding a video frame according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of rendering and display processing in different refresh cycles according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of rendering a video frame according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart of rendering a video frame according to another embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a buffered video frame according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of displaying a video frame according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart of a method for uniformly displaying a video according to an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart of a video uniform display method according to another embodiment of the present invention.
  • FIG. 12 is a block diagram of a terminal device according to another embodiment of the present invention.
  • At present, for real-time video streams, for example in surveillance scenarios, the sending end (such as a camera head or a camera) needs to send the captured video to the receiving end (such as a terminal device) so that the video stream can be displayed at the receiving end as soon as possible.
  • Take a device running the Android system (hereinafter an Android device) or a device running the iOS system (hereinafter an iOS device) whose screen refresh rate is 60 Hz; each refresh period is then approximately 16.67 ms.
  • If the sending end captures video at 30 Hz, each image frame is shown on the screen of the Android or iOS device for 2 refresh periods, about 33.3 ms.
  • After an image frame is received, the rendering and display work for different image frames may be completed by the Android or iOS device within 0-16.67 ms or within 16.67-33.33 ms. As a result, some image frames are put on screen during the first of their 2 refresh periods (0-16.67 ms), so their display duration is the whole second period plus part of the first period, exceeding 16.67 ms, while other image frames are put on screen only during the second of their 2 refresh periods (16.67-33.33 ms), so their display duration is only part of the second period, less than 16.67 ms. The two groups of image frames therefore have different display durations, which makes the motion of moving objects in the video incoherent and affects the user's viewing.
  • FIG. 1 is a schematic diagram of an application scenario of a method for uniformly displaying a video according to an embodiment of the present invention.
  • the transmitting end 10 maintains a communication state with the display end 30 through the communication connection 20.
  • the communication connection 20 may be a wired method or a wireless method.
  • In this embodiment, the transmitting end 10 (such as a camera head, a camera, or a handheld shooting device) captures video and then sends it as a video code stream over the network 20 (a local area network, LAN, or a wide area network, WAN) to the display end 30 (a terminal device such as a smartphone or a handheld shooting device).
  • the display terminal 30 can process the video code stream to obtain video frames. Based on the obtained video frames, the display terminal 30 starts to perform the steps of the uniform video display method. Repeating the above scheme for each video frame can achieve the effect of displaying the video uniformly.
  • FIG. 2 is a schematic flowchart of a uniform video display method according to an embodiment of the present invention.
  • a uniform video display method includes steps 201 to 203, where:
  • the terminal device can decode the video code stream to obtain a video frame according to a preset decoding method, and then buffer the video frame at a specified position.
  • the specified location may be a buffer or a memory.
  • the designated position may also be in the form of a preset queue and the like.
  • In this embodiment, a parsing thread for parsing the video code stream and a decoding thread for decoding can be set up in the terminal device in advance, so converting the video code stream into video frames can follow the scheme below.
  • Referring to FIG. 3, when a video code stream from the transmitting end is received, each received code stream packet may carry only part of a video frame (for example, 1/3 or 1/2 of a frame).
  • The processor of the terminal device can call the parsing thread to parse the received video bitstream, and when multiple code stream packets can form one complete video frame, buffer those packets as a video frame to be decoded (corresponding to step 301), that is, determine the correspondence between each video frame and its multiple code stream packets.
  • a frame queue may be set in advance, and the frame queue may buffer multiple video frames. In this way, the parsing thread or processor can buffer the parsed video frames to be decoded into the frame queue.
  • the processor may call a preset decoding thread for decoding, and then obtain the video frames to be decoded from the frame queue, input them into the decoder in turn, and the decoder decodes the video frames. After that, the decoding thread or the processor may buffer the decoded video frame to a preset decoding queue (corresponding to step 302).
  • the processor of the terminal device may read the video frame from the decoding queue as the video frame to be displayed, that is, the processor may obtain the video frame to be displayed.
  • the processor can also adjust the decoding rate and the time to output the decoded video frame, so that the decoded video frame is directly used as the video frame to be displayed.
  • the processor further obtains the number of buffered video frames in the decoding queue (corresponding to step 401).
  • The processor then obtains a preset first set number and compares the number of buffered video frames with the first set number (corresponding to step 402). If they are equal, the video frame with the earliest buffering time in the decoding queue is discarded (corresponding to step 403), for example the video frame at the head of the decoding queue.
  • The processor then places the newly obtained video frame at the tail of the decoding queue (corresponding to step 404). If the numbers are not equal, the processor directly places the newly obtained video frame at the tail of the decoding queue (corresponding to step 404).
  • In this way, the display delay caused by too many video frames buffered in the decoding queue can be overcome.
  • Buffering too many video frames causes display delay because the rendering thread takes a different amount of time to render each video frame, while the time the decoding thread takes to decode a video frame is relatively fixed; that is, frames enter the decoding queue and are released from it at different rates, so the number of video frames in the decoding queue grows and display delay follows.
  • the terminal device also includes a display which displays video content.
  • the display may include a driving chip, such as a gate driving chip.
  • The driver chip turns on the pixels in a set manner (row by row, column by column, interlaced, and so on) so that pixel data is written into each pixel, completing the display of the video frame.
  • the driver chip can generate a trigger signal, which indicates that the current video frame has been displayed and can start displaying the next video frame.
  • This trigger signal is called a vertical synchronization signal (VSync) in this embodiment.
  • the processor may be connected to the driving chip in advance, so that the processor receives the vertical synchronization signal, and the vertical synchronization signal may be used as a first control signal or a subsequent second control signal.
  • the first control signal and the second control signal may both be vertical synchronization signals. The difference between the two is only that the generated time is different, or that the time received by the processor is different.
  • Referring to FIG. 5, the first control signal is the first vertical synchronization signal (VSync) received by the processor after the video frame to be displayed is obtained; the second control signal is the second vertical synchronization signal (VSync) received by the processor after the video frame to be displayed is obtained.
  • It can be seen that in this embodiment a set time separates the first control signal and the second control signal; in this scenario the set time is one refresh period, about 16.67 ms.
  • In another embodiment, the first control signal and the second control signal may also come from a Sleep function in the terminal device or from the timing synchronization signal of a preset timer; as long as the first control signal and the second control signal meet the above requirements, the scheme of the present application can likewise be implemented.
  • Taking an iOS device as an example, the system of the iOS device provides a screen refresh callback signal, which is similar to the vertical synchronization signal on an Android device; a technician can refer to the relevant literature for a description of the screen refresh callback signal, which is not repeated here.
  • the processor may switch the rendering thread or the display thread from the background state to the foreground state according to the screen refresh callback signal, so as to achieve the effect of calling the rendering thread or the display thread.
  • In this embodiment, when the processor receives the first control signal, it can render the obtained video frame to be displayed.
  • The rendering manner can include the following.
  • Manner 1: A rendering process for rendering video frames is preset in the terminal device. Normally, the rendering process can stay in the background state.
  • When the first control signal is received, the processor calls the rendering process (corresponding to step 601), switches it from the background state to the foreground state, and uses it to render the video frame to be displayed (corresponding to step 602).
  • For the rendering method itself, refer to the related literature; it is not repeated here.
  • Manner 2: The video frame to be displayed is rendered using a video frame rendering scheme from the related art.
  • For example, the related art may have a thread with both rendering and display functions, and that thread completes the rendering of the video frame.
  • For the rendering scheme, refer to the related literature; it is not repeated here.
  • In one embodiment, because the rendering time of the rendering process is not fixed while the invocation time of the subsequent display process is fixed, a display queue may be set in advance.
  • Referring to FIG. 7, the rendering thread reads video frames from the decoding queue and renders them (corresponding to step 701), and the rendering process or the processor buffers the rendered video frames into the display queue (corresponding to step 702), so that the display thread can read the rendered video frames directly from the display queue, ensuring that the frame exchange starts when the second control signal is received.
  • In another embodiment, the rendering process or the processor may instead buffer the rendered video frame to be displayed into a background buffer preset in the display; the background buffer works on a principle similar to that of the display queue and is not described in more detail here.
  • In this embodiment, when the processor receives the second control signal, it can exchange the rendered video frame to the display.
  • The exchange manner can include the following.
  • a display process for displaying a video frame is set in the terminal device in advance. Normally, the display process can be in the background state.
  • the processor calls the display process, switches the display process from the background state to the foreground state, and uses the display process to exchange the rendered video frames for display.
  • the display process and the display exchange operation can refer to the relevant literature, and will not be repeated here.
  • Manner 2: The rendered video frame to be displayed is displayed using a video frame display scheme from the related art.
  • For example, a thread with both rendering and display functions may perform the display of the rendered video frame to be displayed.
  • For the exchange operation between that thread and the display, reference may be made to the related documents, and details are not described here again.
  • The difference is that the moment at which that thread outputs the rendered video frame to be displayed is controlled by the second control signal, which fixes the moment the video frame is exchanged to the display and therefore the length of time the display shows the video frame.
  • Referring to FIG. 8, upon receiving the second control signal, the processor also checks whether a video frame to be displayed exists in the display queue (corresponding to step 801). If none exists, the processor does nothing after receiving this second control signal and waits for the next second control signal (corresponding to step 803). If one exists, the rendered video frame is read from the display queue and exchanged to the display (corresponding to step 802). It can be seen that abnormal display can be avoided in this embodiment.
  • the processor repeats steps 201 to 203, so that display of each video frame can be completed.
  • the parsing thread, the decoding thread, the rendering process, and the display process are different processes, and each performs a corresponding function, thereby ensuring that the video code stream is processed into video frames and the delay between display video frames is low.
  • the second control signal corresponding to each video frame is used as the starting time for data exchange, so that the time when each video frame starts to be displayed is the same.
  • each video frame is displayed between the respective second control signal and the first control signal of the subsequent frame, that is, the display duration is a set time.
  • the refresh time and display duration of each video frame are the same, which can ensure more uniform and delicate video display.
  • FIG. 9 is a schematic flowchart of a uniform video display method according to an embodiment of the present invention.
  • a uniform video display method includes steps 901 to 905, where:
  • The specific method and principle of step 901 are the same as those of step 201; for details, refer to FIG. 2 and the related content of step 201, which are not repeated here.
  • The specific method and principle of step 902 are the same as those of step 202; for details, refer to FIG. 2 and the related content of step 202, which are not repeated here.
  • the processor may obtain the number of cached and rendered video frames in the display queue.
  • The acquisition manner can include the following.
  • Manner 1: The processor reads a quantity identifier of the display queue, and the quantity identifier determines the number of video frames.
  • Manner 2: The processor counts the number of video frames already buffered in the display queue.
  • In this embodiment, the second set number of video frames to be buffered in the display queue is stored in the terminal device in advance; for example, the second set number may be 2 frames.
  • The second set number can be adjusted according to the transmission speed of the video bitstream and the size of the display queue, which is not limited here.
  • the processor determines whether the number of buffered video frames in the display queue is equal to the second set number. If it is less than the second set number, go to step 901 and continue to obtain the video frames to be displayed, so as to continue to cache the rendered video frames in the display queue. If it is equal to the second set number, go to step 905.
  • When a second control signal is received, the rendered video frame is exchanged to the display; a set time interval separates the first control signal and the second control signal.
  • The specific method and principle of step 905 are the same as those of step 203; for details, refer to FIG. 2 and the related content of step 203, which are not repeated here.
  • the second control signal corresponding to each video frame is used as a starting time for data exchange, so that the time when each video frame starts to be displayed is the same.
  • Each video frame is displayed between the respective second control signal and the first control signal of the subsequent frame, that is, the display time is a set time, which can ensure that the video display is more uniform and delicate.
  • FIG. 10 is a schematic flowchart of a uniform video display method according to an embodiment of the present invention.
  • Taking an Android device as an example, the Android device receives a video code stream from a sending end that has a communication relationship with it.
  • The processor of the Android device can call the parsing thread to parse the video code stream, identify from it the multiple code stream packets that form one complete video frame, and buffer the video frame to be decoded into a preset frame queue. Then, the processor can call the decoding thread to read the video frames to be decoded from the frame queue for decoding, and the decoding process can be completed by a preset decoder.
  • the processor may obtain the number of buffered video frames in the decoding queue, compare the number with the first set number, and if the number is less than the first set number, buffer the decoded video frames to be displayed to the decoding queue. If the number is equal to the first set number, the video frame with the earliest buffer time in the decoding queue is discarded, and the decoded video frame to be displayed is buffered to the decoding queue.
  • the processor detects whether the first control signal is received, and if not, the processor continues to detect. If the first control signal is received, the processor invokes a preset rendering process, reads the video frames to be displayed from the decoding queue for rendering, and buffers the rendered video frames to be displayed in the display queue.
  • the processor detects whether a second control signal is received, and if not, the processor continues to detect. If a second control signal is received, the processor detects whether there are rendered video frames to be displayed in the display queue, if not, it returns to detect whether a second control signal is received, and if a video frame is detected in the display queue, then Obtain the number of buffered video frames in the display queue. If the number is less than the second set number, return to detect whether a second control signal is received. If the number is equal to the second set number, the processor reads the rendered video frames and exchanges them to the display.
  • FIG. 11 is a schematic flowchart of a uniform video display method according to an embodiment of the present invention. Referring to FIG. 11, the steps by which an iOS device processes the video code stream into decoded video frames are the same as those by which an Android device obtains decoded video frames, and details are not described here again.
  • The processor detects whether a screen refresh callback signal is received. If it is not detected, the processor continues to detect. If it is detected, the processor determines whether the display flag is true. If the display flag is true, the video frame in the background buffer is exchanged to the display. If the display flag is not true, the processor checks whether the decoding queue is empty. If the decoding queue is empty, it returns to detecting whether a screen refresh callback signal is received. If the decoding queue is not empty, it checks whether the number of video frames buffered in the decoding queue exceeds the maximum buffer number. If the maximum buffer number is exceeded, the video frame at the head of the decoding queue is discarded, and then a video frame is obtained from the head of the decoding queue. If the maximum buffer number is not exceeded, a video frame at the head of the queue is obtained directly. Finally, the processor renders the video frame to be displayed, buffers it into the background buffer, and sets the display flag to true.
  • The iOS device is also provided with a foreground buffer.
  • While the processor exchanges the video frame in the background buffer to the display, it can also render the next video frame and buffer it into the foreground buffer.
  • After receiving a screen refresh callback signal, the processor can switch the states of the background buffer and the foreground buffer, that is, the background buffer becomes the foreground buffer and the foreground buffer becomes the background buffer.
  • FIG. 12 is a block diagram of a terminal device according to an embodiment of the present invention.
  • a terminal device includes a processor 1201, a memory 1202, and a communication bus 1203.
  • The memory 1202 stores several computer instructions and buffers the video code stream from the communication bus 1203 as well as the video frames converted from the video code stream; the processor 1201 is connected to the memory 1202 through the communication bus 1203 and reads the computer instructions from the memory 1202 to implement:
  • obtaining a video frame to be displayed; rendering the video frame to be displayed when a first control signal is received; and exchanging the rendered video frame to the display of the terminal device when a second control signal is received, where a set time interval separates the first control signal and the second control signal.
  • the first control signal and the second control signal are vertical synchronization signals from the display.
  • the set time is a refresh period of the display.
  • Before being configured to obtain a video frame to be displayed, the processor 1201 is further configured to:
  • call a preset parsing thread to parse the received video bitstream, and, when multiple code stream packets can form one complete video frame, buffer those packets as a video frame to be decoded into a frame queue;
  • call a preset decoding thread to decode the video frame from the multiple code stream packets and buffer the video frame into a decoding queue.
  • Before being configured to buffer the video frame into the decoding queue, the processor 1201 is further configured to: obtain the number of video frames already buffered in the decoding queue; and if the number equals a first set number, discard the video frame with the earliest buffering time in the decoding queue.
  • The processor 1201 being configured to discard the video frame with the earliest buffering time in the decoding queue includes: discarding the video frame at the head of the decoding queue and placing the newly obtained video frame at the tail of the decoding queue.
  • After buffering the video frame into the decoding queue, the processor 1201 is further configured to: obtain the number of video frames already buffered in the decoding queue; and if the number equals a second set number, perform the step of obtaining a video frame to be displayed.
  • When the first control signal is received, the processor 1201 being configured to render the video frame to be displayed includes: calling a preset rendering thread when the first control signal is received, and rendering the video frame to be displayed by using the rendering process.
  • The processor 1201 being configured to render the video frame to be displayed by using the rendering process includes: reading a video frame from the decoding queue by using the rendering thread for rendering, and buffering the rendered video frame into a display queue by using the rendering thread.
  • Calling the rendering thread means switching the rendering thread from a background state to a foreground state.
  • the terminal device 1200 includes a display queue set in advance, and the rendered video frames to be displayed are buffered to the display queue.
  • After receiving the second control signal, the processor 1201 is further configured to: detect whether a rendered video frame to be displayed exists in the display queue; if it exists, perform the step of exchanging the rendered video frame to the display when the second control signal is received; and if it does not exist, wait for the next second control signal.
  • the video frames to be displayed are buffered in a preset background buffer.
  • the processor 1201 is configured to exchange the rendered video frames to the display when the second control signal is received, including:
  • a display thread for displaying is called, and the rendered video frame is read from the background cache through the display thread, and the video frame is exchanged to the display.
  • calling the display thread refers to switching the display thread from a background state to a foreground state.
  • the first control signal and the second control signal are timing synchronization signals from a Sleep function or a timer.
  • the first control signal and the second control signal are screen refresh callback signals from a system.
  • An embodiment of the present invention also provides a machine-readable storage medium that can be configured on a terminal device; the machine-readable storage medium stores a number of computer instructions, a video code stream, and a video frame converted from the video code stream; When the computer instructions are executed, the following processing is performed:
  • obtaining a video frame to be displayed; rendering the video frame to be displayed when a first control signal is received; and exchanging the rendered video frame to the display of the terminal device when a second control signal is received, where a set time interval separates the first control signal and the second control signal.

Abstract

A uniform video display method, a terminal device, and a machine-readable storage medium. A uniform video display method includes: obtaining a video frame to be displayed; rendering the video frame to be displayed when a first control signal is received; and exchanging the rendered video frame to a display when a second control signal is received, where a set time interval separates the first control signal and the second control signal. In this way, each video frame uses its own second control signal as the starting time for data exchange, that is, the moments at which the video frames start to be displayed are consistent. Moreover, each video frame is displayed between its own second control signal and the first control signal of the following frame, that is, the display duration is always the set time. In other words, in this embodiment the refresh moment and display duration of every video frame are the same, which ensures a more uniform and smoother video display.

Description

Uniform video display method, terminal device, and machine-readable storage medium
Technical Field
The present invention relates to the technical field of video processing, and in particular to a uniform video display method, a terminal device, and a machine-readable storage medium.
Background
At present, for real-time video streams, for example in surveillance scenarios, the sending end needs to transmit the captured video to the receiving end so that the video stream is displayed at the receiving end as soon as possible. Because of data jitter at the sending end during capture, encoding, and transmission, the video code stream received by the receiving end carries a greater or smaller delay. If the receiving end displays the video code stream immediately after receiving it, the display time of each image frame becomes uneven, so the motion of moving objects in the video appears incoherent, that is, the video stutters, which affects the user's viewing.
Summary of the Invention
The present invention provides a uniform video display method, a terminal device, and a machine-readable storage medium.
According to a first aspect of the present invention, a uniform video display method is provided, including:
obtaining a video frame to be displayed;
rendering the video frame to be displayed when a first control signal is received; and
exchanging the rendered video frame to a display of the terminal device when a second control signal is received; a set time interval separates the first control signal and the second control signal.
According to a second aspect of the present invention, a terminal device is provided, including a communication bus, a memory, and a processor; the memory stores several computer instructions and buffers a video code stream from the communication bus and video frames converted from the video code stream; the processor is connected to the memory through the communication bus and reads the computer instructions from the memory to implement:
obtaining a video frame to be displayed;
rendering the video frame to be displayed when a first control signal is received; and
exchanging the rendered video frame to a display of the terminal device when a second control signal is received; a set time interval separates the first control signal and the second control signal.
According to a third aspect of the present invention, a machine-readable storage medium is provided; the machine-readable storage medium stores several computer instructions, and when the computer instructions are executed, the steps of the method of the first aspect are implemented.
As can be seen from the above technical solutions, in this embodiment displaying a video frame is divided into two stages, rendering and display, and the rendering operation and the frame exchange operation are controlled by the first control signal and the second control signal, respectively. In this way, each video frame uses its own second control signal as the starting time for data exchange, that is, the moments at which the video frames start to be displayed are consistent. Moreover, each video frame is displayed between its own second control signal (the second control signal of the current frame) and the first control signal of the following frame, that is, the display duration is always the set time. In other words, in this embodiment the refresh moment and display duration of every video frame are the same, which ensures a more uniform and smoother video display.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of decoding video frames provided by an embodiment of the present invention;
FIG. 4 is a schematic flowchart of discarding video frames provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of rendering and display handled in different refresh periods provided by an embodiment of the present invention;
FIG. 6 is a schematic flowchart of rendering video frames provided by an embodiment of the present invention;
FIG. 7 is a schematic flowchart of rendering video frames provided by another embodiment of the present invention;
FIG. 8 is a schematic flowchart of buffering video frames provided by an embodiment of the present invention;
FIG. 9 is a schematic flowchart of displaying video frames provided by an embodiment of the present invention;
FIG. 10 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention;
FIG. 11 is a schematic flowchart of a uniform video display method provided by another embodiment of the present invention;
FIG. 12 is a block diagram of a terminal device provided by another embodiment of the present invention.
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
目前,对于实时视频流,例如监控等场景,发送端(例如摄像头、相机等)需要将采集的视频发送给接收端(例如终端设备),使视频流尽快地在接收端显示。
以安装有安卓系统的设备(后称之为安卓设备)或者安装有IOS系统的设备(后称之为ISO设备)的屏幕刷新周期为60Hz,每个刷新周期约为16.67ms。发送端采集视频的频率为30Hz,那么每一帧图像在安卓设备或者IOS设备的屏幕上显示2个周期,约33.3ms。由于在接收到图像帧后,针对不同图像帧的渲染和显示过程,安卓设备或者IOS设备可以在0-16.67ms内完成,也可以在16.67~33.33ms内完成,导致部分图像帧在所占2个刷新周期中的第一个周期(0-16.67ms)内显示(显示时长为第二个周期的时长+第一个周期的部分时长,超过16.67ms),部分图像帧在所占2个刷新周期中的第二个周期(16.67-33.33ms)内显示(显示时长为第二个周期的部分时长,小于16.67ms),从而这两部分图像帧的显示时长不同,造成视频中移动物体的动作不连贯,影响到用户的观看。
为此,本发明实施例提供了一种视频均匀显示方法,图1是本发明一实施例提供的视频均匀显示方法的应用场景示意图。参见图1,发送端10通过通信连接20与显示端30保持通信状态。该通信连接20可以为有线方式,也可以为无线方式。本实施例中,发送端10(如摄像头、相机或者手持拍摄设备等)采集视频,然后以视频码流的方式通过网络20(局域网LAN或者广域网WAN)发送给显示端30(如智能手机、手持拍摄设备等终端设备)。显示端30可以处理视频码流得到视频帧,在获取到各视频帧的基础上,显示端30开始执行视频均匀显示方法的步骤。针对各视频帧重复上述方案,可以达到均匀显示视频的效果。
For convenience of description, the display end 30 is described below taking a terminal device as an example. FIG. 2 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention. Referring to FIG. 2, a uniform video display method includes steps 201 to 203, where:
201: Obtain a video frame to be displayed.
In this embodiment, after receiving the video code stream, the terminal device can decode the video code stream into video frames according to a preset decoding method and then buffer the video frames at a specified location. The specified location may be a buffer or a memory; of course, it may also take the form of a preset queue or the like.
In this embodiment, a parsing thread for parsing the video code stream and a decoding thread for decoding can be set up in the terminal device in advance, so going from the video code stream to video frames can follow the scheme below.
Referring to FIG. 3, when a video code stream from the sending end is received, each code stream packet may carry only part of a video frame (for example 1/3 or 1/2 of a frame). The processor of the terminal device can call the parsing thread to parse the received video code stream, and when multiple code stream packets can form one complete video frame, buffer those packets as a video frame to be decoded (corresponding to step 301), that is, determine the correspondence between each video frame and multiple code stream packets. In this embodiment, a frame queue can be set up in advance, and the frame queue can buffer multiple video frames; the parsing thread or the processor can then buffer the parsed video frames to be decoded into the frame queue.
Still referring to FIG. 3, the processor can call the preset decoding thread, fetch the video frames to be decoded from the frame queue, and feed them into the decoder one by one, and the decoder decodes the video frames. The decoding thread or the processor can then buffer the decoded video frames into a preset decoding queue (corresponding to step 302).
In this embodiment, the processor of the terminal device can read a video frame from the decoding queue as the video frame to be displayed, that is, the processor obtains the video frame to be displayed. Of course, the processor can also adjust the decoding rate and the time at which decoded video frames are output, so that the decoder's output is used directly as the video frame to be displayed.
In one embodiment, referring to FIG. 4, after a video frame is decoded the processor also obtains the number of video frames already buffered in the decoding queue (corresponding to step 401). The processor then obtains a preset first set number and compares the number of buffered video frames with the first set number (corresponding to step 402). If they are equal, the video frame with the earliest buffering time in the decoding queue is discarded (corresponding to step 403), for example the video frame at the head of the decoding queue, and the newly obtained video frame is then placed at the tail of the decoding queue (corresponding to step 404). If they are not equal, the processor directly places the newly obtained video frame at the tail of the decoding queue (corresponding to step 404). In this way, this embodiment overcomes the display delay caused by too many video frames buffered in the decoding queue. Buffering too many video frames causes display delay because the rendering thread takes a different amount of time to render each video frame, while the time the decoding thread takes to decode a video frame is relatively fixed; that is, frames enter and leave the decoding queue at different rates, so the number of frames in the decoding queue grows and display delay follows.
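As an illustrative, non-limiting sketch of the drop-oldest policy of steps 401 to 404, the snippet below keeps a bounded decoding queue in plain Kotlin; the Frame type, the first set number of 3, and the method names are assumptions made purely for illustration and are not part of the embodiment.

```kotlin
import java.util.ArrayDeque

// Hypothetical decoded-frame holder; the real type depends on the decoder in use.
class Frame(val presentationTimeUs: Long, val data: ByteArray)

// Decoding queue with the drop-oldest policy of steps 401-404.
class DecodeQueue(private val firstSetNumber: Int = 3) {
    private val queue = ArrayDeque<Frame>()

    @Synchronized
    fun offer(frame: Frame) {
        if (queue.size == firstSetNumber) {
            queue.pollFirst()    // discard the frame with the earliest buffering time (queue head)
        }
        queue.addLast(frame)     // place the newly obtained frame at the tail
    }

    @Synchronized
    fun poll(): Frame? = queue.pollFirst()

    @Synchronized
    fun size(): Int = queue.size
}
```

Capping the queue this way trades a few dropped frames for a bounded end-to-end display delay, which is the behaviour the embodiment aims for.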
202: When a first control signal is received, render the video frame to be displayed.
The terminal device also includes a display, and the display shows the video content. Taking an Android device as an example, the display may include a driver chip, for example a gate driver chip; the driver chip turns on the pixels in a set manner (row by row, column by column, interlaced, and so on) so that pixel data is written into each pixel, completing the display of the video frame.
After each display completes, the driver chip can generate a trigger signal, which indicates that the current video frame has finished displaying and the next video frame can start to be displayed. In this embodiment this trigger signal is called the vertical synchronization signal (VSync).
Because the vertical synchronization signal is generated between two video frames and every video frame has the same display duration, one vertical synchronization signal corresponds to one refresh period and can therefore serve as a control signal. In this embodiment, the processor can be connected to the driver chip in advance so that the processor receives the vertical synchronization signal, and the vertical synchronization signal can be used as the first control signal or as the subsequent second control signal.
It should be noted that the first control signal and the second control signal may both be vertical synchronization signals; the only difference between them is the moment at which they are generated, or in other words the moment at which the processor receives them. Referring to FIG. 5, the first control signal is the first vertical synchronization signal (VSync) received by the processor after the video frame to be displayed is obtained; the second control signal is the second vertical synchronization signal (VSync) received by the processor after the video frame to be displayed is obtained. It can be seen that in this embodiment a set time separates the first control signal and the second control signal; in this scenario the set time is one refresh period, about 16.67 ms.
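The relationship between the first and second control signals can be illustrated with the following hedged Kotlin sketch, in which an abstract per-refresh callback counts ticks after a frame is obtained; the class and callback names are hypothetical, and the actual signal source (display driver VSync, a Sleep function, a timer, or a screen refresh callback) is deliberately left abstract.

```kotlin
// Simplified scheduler: the first tick after a frame is obtained triggers rendering,
// the second tick triggers the exchange to the display.
class FrameScheduler<F>(
    private val render: (F) -> F,           // returns the rendered frame
    private val swapToDisplay: (F) -> Unit
) {
    private var awaitingFirstSignal: F? = null   // frame waiting for its first control signal
    private var awaitingSecondSignal: F? = null  // rendered frame waiting for its second control signal

    fun onFrameObtained(frame: F) {
        awaitingFirstSignal = frame
    }

    // Called once per refresh period (about 16.67 ms at 60 Hz).
    fun onVSync() {
        awaitingSecondSignal?.let {      // second control signal: exchange to the display
            swapToDisplay(it)
            awaitingSecondSignal = null
        }
        awaitingFirstSignal?.let {       // first control signal: render the obtained frame
            awaitingSecondSignal = render(it)
            awaitingFirstSignal = null
        }
    }
}
```

With a 30 Hz source and a 60 Hz display, rendering and exchanging then fall on alternating ticks, so every frame is shown for the same two refresh periods.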
In another embodiment, the first control signal and the second control signal may also come from a Sleep function in the terminal device or from the timing synchronization signal of a preset timer; as long as the first control signal and the second control signal meet the above requirements, the scheme of the present application can likewise be implemented.
In yet another embodiment, taking an iOS device as an example, the system of the iOS device provides a screen refresh callback signal, which is similar to the vertical synchronization signal on an Android device; a technician can refer to the relevant literature for a description of the screen refresh callback signal, which is not repeated here. In this embodiment, the processor can switch the rendering thread or the display thread from the background state to the foreground state according to the screen refresh callback signal, achieving the effect of calling the rendering thread or the display thread.
In this embodiment, when the processor receives the first control signal, it can render the obtained video frame to be displayed. The rendering manner can include:
Manner 1: Referring to FIG. 6, a rendering process for rendering video frames is set up in the terminal device in advance. Normally the rendering process can stay in the background state; when the first control signal is received, the processor calls the rendering process (corresponding to step 601), switches it from the background state to the foreground state, and uses it to render the video frame to be displayed (corresponding to step 602). For the rendering method itself, refer to the relevant literature; it is not repeated here.
Manner 2: The video frame to be displayed is rendered using a video frame rendering scheme from the related art. For example, the related art may have a thread with both rendering and display functions, and that thread completes the rendering of the video frame; for the rendering scheme, refer to the relevant literature, which is not repeated here.
In one embodiment, because the rendering time of the rendering process is not fixed while the invocation time of the subsequently appearing display process is fixed, a display queue can be set up in advance. Referring to FIG. 7, the rendering thread reads video frames from the decoding queue and renders them (corresponding to step 701), and the rendering process or the processor buffers the rendered video frames into the display queue (corresponding to step 702). In this way the display thread can read the rendered video frames directly from the display queue, ensuring that the frame exchange starts as soon as the second control signal is received.
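A minimal sketch of the hand-off in steps 701 and 702, assuming standard blocking queues and a caller-supplied render function; the function and parameter names are illustrative only.

```kotlin
import java.util.concurrent.ArrayBlockingQueue

// Handler for the first control signal (steps 701-702): take one decoded frame, render
// it, and buffer the result in the display queue so the exchange can start immediately
// when the second control signal arrives.
fun <F> onFirstControlSignal(
    decodeQueue: ArrayBlockingQueue<F>,
    displayQueue: ArrayBlockingQueue<F>,
    render: (F) -> F
) {
    val frame = decodeQueue.poll() ?: return   // nothing decoded yet; skip this cycle
    displayQueue.offer(render(frame))          // buffer the rendered frame for the display thread
}
```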
In another embodiment, the rendering process or the processor can instead buffer the rendered video frame to be displayed into a background buffer set up in advance in the display; the background buffer works on a principle similar to that of the display queue and is not described again here.
203: When a second control signal is received, exchange the rendered video frame to the display.
In this embodiment, when the processor receives the second control signal, it can exchange the rendered video frame to the display. The exchange manner can include:
Manner 1: A display process for displaying video frames is set up in the terminal device in advance. Normally the display process can stay in the background state; when the second control signal is received, the processor calls the display process, switches it from the background state to the foreground state, and uses it to exchange the rendered video frame to be displayed to the display. For the exchange operation between the display process and the display, refer to the relevant literature; it is not repeated here.
Manner 2: The rendered video frame to be displayed is displayed using a video frame display scheme from the related art. For example, the related art may have a thread with both rendering and display functions, and that thread completes the display of the rendered video frame; for the exchange operation between that thread and the display, refer to the relevant literature, which is not repeated here. The difference is that the moment at which that thread outputs the rendered video frame is controlled by the second control signal, which fixes the moment the video frame is exchanged to the display and therefore the length of time the display shows the video frame.
In one embodiment, referring to FIG. 8, upon receiving the second control signal the processor also checks whether a video frame to be displayed exists in the display queue (corresponding to step 801). If none exists, the processor does nothing after receiving this second control signal and waits for the next second control signal (corresponding to step 803). If one exists, the rendered video frame is read from the display queue and exchanged to the display (corresponding to step 802). It can be seen that abnormal display can be avoided in this embodiment.
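The check in steps 801 to 803 can be sketched as follows, again with illustrative names and a caller-supplied swap function.

```kotlin
import java.util.concurrent.ArrayBlockingQueue

// Handler for the second control signal (steps 801-803): if a rendered frame is
// available it is exchanged to the display, otherwise nothing is done and the
// processor simply waits for the next second control signal.
fun <F> onSecondControlSignal(
    displayQueue: ArrayBlockingQueue<F>,
    swapToDisplay: (F) -> Unit
) {
    val frame = displayQueue.poll() ?: return  // step 803: no frame, wait for the next signal
    swapToDisplay(frame)                       // step 802: exchange the rendered frame to the display
}
```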
For each image frame to be displayed, the processor repeats steps 201 to 203, so the display of every video frame can be completed.
In this embodiment, the parsing thread, the decoding thread, the rendering process, and the display process are different processes, each performing its corresponding function, which keeps the delay from processing the video code stream into video frames to displaying those video frames low.
At this point, in this embodiment every video frame uses its corresponding second control signal as the starting time for data exchange, so the moments at which the video frames start to be displayed are consistent. Moreover, each video frame is displayed between its own second control signal and the first control signal of the following frame, that is, the display duration is always the set time. In other words, in this embodiment the refresh moment and display duration of every video frame are the same, which ensures a more uniform and smoother video display.
FIG. 9 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention. Referring to FIG. 9, a uniform video display method includes steps 901 to 905, where:
901: Obtain a video frame to be displayed.
The specific method and principle of step 901 are the same as those of step 201; for a detailed description refer to FIG. 2 and the related content of step 201, which is not repeated here.
902: When a first control signal is received, render the video frame to be displayed.
The specific method and principle of step 902 are the same as those of step 202; for a detailed description refer to FIG. 2 and the related content of step 202, which is not repeated here.
903: Obtain the number of video frames already buffered in the display queue.
Because the video code stream is affected by transmission conditions, the transmission rate fluctuates, so the display thread may find no rendered video frame to read and normal display becomes impossible. Therefore, in this embodiment the processor can obtain the number of rendered video frames already buffered in the display queue. The acquisition manner can include:
Manner 1: The processor reads a quantity identifier of the display queue, and the quantity identifier determines the number of video frames.
Manner 2: The processor counts the number of video frames already buffered in the display queue.
904: If the number equals a second set number, go to step 905.
In this embodiment, the second set number of video frames to be buffered in the display queue is stored in the terminal device in advance; for example, the second set number may be 2 frames. The second set number can be adjusted according to the transmission speed of the video code stream and the size of the display queue, which is not limited here.
The processor then judges whether the number of video frames already buffered in the display queue equals the second set number. If it is less than the second set number, go to step 901 and continue obtaining video frames to be displayed, so that rendered video frames keep being buffered into the display queue. If it equals the second set number, go to step 905.
In this embodiment, buffering some video frames in the display queue overcomes the fluctuation of the video code stream and thus improves the display quality.
905: When a second control signal is received, exchange the rendered video frame to the display; a set time interval separates the first control signal and the second control signal.
The specific method and principle of step 905 are the same as those of step 203; for a detailed description refer to FIG. 2 and the related content of step 203, which is not repeated here.
At this point, in this embodiment the fluctuation of the video code stream is overcome by buffering several video frames before they are displayed. Moreover, every video frame uses its corresponding second control signal as the starting time for data exchange, so the moments at which the video frames start to be displayed are consistent. Each video frame is displayed between its own second control signal and the first control signal of the following frame, that is, the display duration is always the set time, which ensures a more uniform and smoother video display.
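A hedged sketch of the pre-buffering gate in steps 903 to 905, assuming a second set number of 2 purely for illustration; the function and parameter names are not part of the embodiment.

```kotlin
import java.util.concurrent.ArrayBlockingQueue

// Variant of the second-control-signal handler with the pre-buffering gate of steps
// 903-905: the exchange only happens once the display queue holds the second set
// number of rendered frames, which absorbs fluctuations in the incoming code stream.
fun <F> onSecondControlSignalBuffered(
    displayQueue: ArrayBlockingQueue<F>,
    swapToDisplay: (F) -> Unit,
    secondSetNumber: Int = 2
) {
    if (displayQueue.size < secondSetNumber) return  // step 904: keep buffering rendered frames
    displayQueue.poll()?.let(swapToDisplay)          // step 905: exchange to the display
}
```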
Taking an Android device as an example, the content of the uniform video display method is described below with reference to FIG. 10. FIG. 10 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention. Referring to FIG. 10, the Android device receives a video code stream from a sending end with which it has a communication relationship. The processor of the Android device can call the parsing thread to parse the video code stream, identify from it the multiple code stream packets that form one complete video frame, and buffer the video frame to be decoded into a preset frame queue. The processor can then call the decoding thread to read the video frames to be decoded from the frame queue for decoding, and the decoding can be completed by a preset decoder.
The processor can obtain the number of video frames already buffered in the decoding queue and compare it with the first set number. If the number is less than the first set number, the decoded video frame to be displayed is buffered into the decoding queue. If the number equals the first set number, the video frame with the earliest buffering time in the decoding queue is discarded, and the decoded video frame to be displayed is buffered into the decoding queue.
The processor detects whether the first control signal is received, and if not, it keeps detecting. If the first control signal is received, the processor calls the preset rendering process, reads the video frame to be displayed from the decoding queue for rendering, and buffers the rendered video frame to be displayed into the display queue.
The processor detects whether the second control signal is received, and if not, it keeps detecting. If the second control signal is received, the processor checks whether a rendered video frame to be displayed exists in the display queue; if not, it goes back to detecting whether the second control signal is received. If a video frame is detected in the display queue, the processor obtains the number of video frames already buffered in the display queue; if the number is less than the second set number, it goes back to detecting whether the second control signal is received. If the number equals the second set number, the processor reads the rendered video frame and exchanges it to the display.
Taking an iOS device as an example, the content of the uniform video display method is described below with reference to FIG. 11. FIG. 11 is a schematic flowchart of a uniform video display method provided by an embodiment of the present invention. Referring to FIG. 11, the steps by which the iOS device processes the video code stream into decoded video frames are the same as those by which the Android device obtains decoded video frames, and details are not repeated here.
The processor detects whether a screen refresh callback signal is received; if not, it keeps detecting. If it is detected, the processor determines whether the display flag is true. If the display flag is true, the video frame in the background buffer is exchanged to the display. If the display flag is not true, the processor checks whether the decoding queue is empty. If the decoding queue is empty, it returns to detecting whether a screen refresh callback signal is received. If the decoding queue is not empty, it checks whether the number of video frames buffered in the decoding queue exceeds the maximum buffer number. If the maximum buffer number is exceeded, the video frame at the head of the decoding queue is discarded, and then a video frame is obtained from the head of the decoding queue. If the maximum buffer number is not exceeded, a video frame is obtained from the head of the decoding queue. Finally, the processor renders the video frame to be displayed, buffers it into the background buffer, and sets the display flag to true.
The iOS device is also provided with a foreground buffer. While the processor exchanges the video frame in the background buffer to the display, it can also render the next video frame and buffer it into the foreground buffer. After receiving a screen refresh callback signal, the processor can switch the states of the background buffer and the foreground buffer, that is, the background buffer becomes the foreground buffer and the foreground buffer becomes the background buffer.
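The background/foreground buffer swap described above can be sketched as follows; the class and member names are illustrative only and this is not any platform API, it only shows the role swap and the display flag.

```kotlin
// Minimal double-buffer sketch: a frame is rendered into the background buffer and
// marked ready; on the screen refresh callback it is handed to the display and the
// two buffers swap roles so the next frame can be prepared while this one is shown.
class DoubleBuffer<F>(private val swapToDisplay: (F) -> Unit) {
    private var background: F? = null
    private var foreground: F? = null
    private var displayFlag = false

    fun renderIntoBackground(renderedFrame: F) {
        background = renderedFrame
        displayFlag = true
    }

    // Called from the screen refresh callback.
    fun onScreenRefresh() {
        if (!displayFlag) return             // nothing ready yet; wait for the next callback
        background?.let(swapToDisplay)
        val previousBackground = background  // background becomes foreground and vice versa
        background = foreground
        foreground = previousBackground
        displayFlag = false
    }
}
```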
FIG. 12 is a block diagram of a terminal device provided by an embodiment of the present invention. Referring to FIG. 12, a terminal device includes a processor 1201, a memory 1202, and a communication bus 1203. The memory 1202 stores several computer instructions and buffers the video code stream from the communication bus 1203 and the video frames converted from the video code stream; the processor 1201 is connected to the memory 1202 through the communication bus 1203 and reads the computer instructions from the memory 1202 to implement:
obtaining a video frame to be displayed;
rendering the video frame to be displayed when a first control signal is received; and
exchanging the rendered video frame to the display of the terminal device when a second control signal is received; a set time interval separates the first control signal and the second control signal.
In one embodiment, the first control signal and the second control signal are vertical synchronization signals from the display.
In one embodiment, the set time is the refresh period of the display.
In one embodiment, before obtaining the video frame to be displayed, the processor 1201 is further configured to:
call a preset parsing thread to parse the received video code stream, and when multiple code stream packets can form one complete video frame, buffer those packets as a video frame to be decoded into a frame queue;
call a preset decoding thread to decode the video frame from the multiple code stream packets and buffer the video frame into a decoding queue.
In one embodiment, before buffering the video frame into the decoding queue, the processor 1201 is further configured to:
obtain the number of video frames already buffered in the decoding queue;
if the number equals a first set number, discard the video frame with the earliest buffering time in the decoding queue.
In one embodiment, the processor 1201 being configured to discard the video frame with the earliest buffering time in the decoding queue includes:
discarding the video frame at the head of the decoding queue and placing the newly obtained video frame at the tail of the decoding queue.
In one embodiment, after buffering the video frame into the decoding queue, the processor 1201 is further configured to:
obtain the number of video frames already buffered in the decoding queue;
if the number equals a second set number, perform the step of obtaining a video frame to be displayed.
In one embodiment, when the first control signal is received, the processor 1201 being configured to render the video frame to be displayed includes:
calling a preset rendering thread when the first control signal is received;
rendering the video frame to be displayed by using the rendering process.
In one embodiment, the processor 1201 being configured to render the video frame to be displayed by using the rendering process includes:
reading a video frame from the decoding queue by using the rendering thread for rendering;
buffering the rendered video frame into a display queue by using the rendering thread.
In one embodiment, calling the rendering thread means switching the rendering thread from a background state to a foreground state.
In one embodiment, the terminal device 1200 includes a display queue set up in advance, and the rendered video frame to be displayed is buffered into the display queue.
In one embodiment, after receiving the second control signal, the processor 1201 is further configured to:
detect whether a rendered video frame to be displayed exists in the display queue;
if it exists, perform the step of exchanging the rendered video frame to the display when the second control signal is received; if it does not exist, wait for the next second control signal.
In one embodiment, the video frame to be displayed is buffered in a background buffer set up in advance.
In one embodiment, the processor 1201 being configured to exchange the rendered video frame to the display when the second control signal is received includes:
calling a display thread for displaying when the second control signal is received, reading the rendered video frame from the background buffer through the display thread, and exchanging the video frame to the display.
In one embodiment, calling the display thread means switching the display thread from a background state to a foreground state.
In one embodiment, the first control signal and the second control signal are timing synchronization signals from a Sleep function or a timer.
In one embodiment, the first control signal and the second control signal are screen refresh callback signals from the system.
An embodiment of the present invention also provides a machine-readable storage medium, which can be provided on a terminal device; the machine-readable storage medium stores several computer instructions, a video code stream, and video frames converted from the video code stream; when the computer instructions are executed, the following processing is performed:
obtaining a video frame to be displayed;
rendering the video frame to be displayed when a first control signal is received; and
exchanging the rendered video frame to the display of the terminal device when a second control signal is received; a set time interval separates the first control signal and the second control signal.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. The terms "comprise", "include", or any of their variants are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The detection apparatus and method provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. For a person of ordinary skill in the art, changes will occur in the specific implementation and application scope based on the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (35)

  1. A uniform video display method, applied to a terminal device, comprising:
    obtaining a video frame to be displayed;
    rendering the video frame to be displayed when a first control signal is received; and
    exchanging the rendered video frame to a display of the terminal device when a second control signal is received, wherein a set time interval separates the first control signal and the second control signal.
  2. The uniform video display method according to claim 1, wherein the first control signal and the second control signal are vertical synchronization signals from the display.
  3. The uniform video display method according to claim 1, wherein the set time is a refresh period of the display.
  4. The uniform video display method according to claim 1, wherein before the video frame to be displayed is obtained, the method further comprises:
    calling a preset parsing thread to parse the received video code stream, and when multiple video code streams can form one complete video frame, buffering the multiple video code streams as a video frame to be decoded into a frame queue; and
    calling a preset decoding thread to decode the video frame from the multiple video code streams and buffering the video frame into a decoding queue.
  5. The uniform video display method according to claim 4, wherein before the video frame is buffered into the decoding queue, the method further comprises:
    obtaining the number of video frames already buffered in the decoding queue; and
    if the number equals a first set number, discarding the video frame with the earliest buffering time in the decoding queue.
  6. The uniform video display method according to claim 5, wherein discarding the video frame with the earliest buffering time in the decoding queue comprises:
    discarding the video frame at the head of the decoding queue and placing the newly obtained video frame at the tail of the decoding queue.
  7. The uniform video display method according to claim 4, wherein after the video frame is buffered into the decoding queue, the method further comprises:
    obtaining the number of video frames already buffered in the decoding queue; and
    if the number equals a second set number, performing the step of obtaining a video frame to be displayed.
  8. The uniform video display method according to claim 1, wherein rendering the video frame to be displayed when the first control signal is received comprises:
    calling a preset rendering thread when the first control signal is received; and
    rendering the video frame to be displayed by using the rendering process.
  9. The uniform video display method according to claim 8, wherein rendering the video frame to be displayed by using the rendering process comprises:
    reading a video frame from the decoding queue by using the rendering thread for rendering; and
    buffering the rendered video frame into a display queue by using the rendering thread.
  10. The uniform video display method according to claim 8, wherein calling the rendering thread means switching the rendering thread from a background state to a foreground state.
  11. The uniform video display method according to claim 1, wherein the terminal device comprises a display queue set up in advance, and the rendered video frame to be displayed is buffered into the display queue.
  12. The uniform video display method according to claim 1, wherein after the second control signal is received, the method further comprises:
    detecting whether a rendered video frame to be displayed exists in the display queue; and
    if it exists, performing the step of exchanging the rendered video frame to the display; if it does not exist, waiting for the next second control signal.
  13. The uniform video display method according to claim 12, wherein the video frame to be displayed is buffered in a background buffer set up in advance.
  14. The uniform video display method according to claim 13, wherein exchanging the rendered video frame to the display when the second control signal is received comprises:
    calling a display thread for displaying when the second control signal is received, reading the rendered video frame from the background buffer by using the display thread, and exchanging the video frame to the display.
  15. The uniform video display method according to claim 14, wherein calling the display thread means switching the display thread from a background state to a foreground state.
  16. The uniform video display method according to claim 1, wherein the first control signal and the second control signal are timing synchronization signals from a Sleep function or a timer.
  17. The uniform video display method according to claim 1, wherein the first control signal and the second control signal are screen refresh callback signals from a system.
  18. A terminal device, comprising a communication bus, a memory, and a processor, wherein the memory stores several computer instructions and buffers a video code stream from the communication bus as well as video frames converted from the video code stream, and the processor is connected to the memory through the communication bus and reads the computer instructions from the memory to implement:
    obtaining a video frame to be displayed;
    rendering the video frame to be displayed when a first control signal is received; and
    exchanging the rendered video frame to a display of the terminal device when a second control signal is received, wherein a set time interval separates the first control signal and the second control signal.
  19. The terminal device according to claim 18, wherein the first control signal and the second control signal are vertical synchronization signals from the display.
  20. The terminal device according to claim 18, wherein the set time is a refresh period of the display.
  21. The terminal device according to claim 18, wherein before being configured to obtain the video frame to be displayed, the processor is further configured to:
    call a preset parsing thread to parse the received video code stream, and when multiple video code streams can form one complete video frame, buffer the multiple video code streams as a video frame to be decoded into a frame queue; and
    call a preset decoding thread to decode the video frame from the multiple video code streams and buffer the video frame into a decoding queue.
  22. The terminal device according to claim 21, wherein before being configured to buffer the video frame into the decoding queue, the processor is further configured to:
    obtain the number of video frames already buffered in the decoding queue; and
    if the number equals a first set number, discard the video frame with the earliest buffering time in the decoding queue.
  23. The terminal device according to claim 22, wherein the processor being configured to discard the video frame with the earliest buffering time in the decoding queue comprises:
    discarding the video frame at the head of the decoding queue and placing the newly obtained video frame at the tail of the decoding queue.
  24. The terminal device according to claim 21, wherein after being configured to buffer the video frame into the decoding queue, the processor is further configured to:
    obtain the number of video frames already buffered in the decoding queue; and
    if the number equals a second set number, perform the step of obtaining a video frame to be displayed.
  25. The terminal device according to claim 18, wherein the processor being configured to render the video frame to be displayed when the first control signal is received comprises:
    calling a preset rendering thread when the first control signal is received; and
    rendering the video frame to be displayed by using the rendering process.
  26. The terminal device according to claim 25, wherein the processor being configured to render the video frame to be displayed by using the rendering process comprises:
    reading a video frame from the decoding queue by using the rendering thread for rendering; and
    buffering the rendered video frame into a display queue by using the rendering thread.
  27. The terminal device according to claim 23, wherein calling the rendering thread means switching the rendering thread from a background state to a foreground state.
  28. The terminal device according to claim 18, wherein the terminal device comprises a display queue set up in advance, and the rendered video frame to be displayed is buffered into the display queue.
  29. The terminal device according to claim 18, wherein after receiving the second control signal, the processor is further configured to:
    detect whether a rendered video frame to be displayed exists in the display queue; and
    if it exists, perform the step of exchanging the rendered video frame to the display when the second control signal is received; if it does not exist, wait for the next second control signal.
  30. The terminal device according to claim 29, wherein the video frame to be displayed is buffered in a background buffer set up in advance.
  31. The terminal device according to claim 30, wherein the processor being configured to exchange the rendered video frame to the display when the second control signal is received comprises:
    calling a display thread for displaying when the second control signal is received, reading the rendered video frame from the background buffer through the display thread, and exchanging the video frame to the display.
  32. The terminal device according to claim 31, wherein calling the display thread means switching the display thread from a background state to a foreground state.
  33. The terminal device according to claim 18, wherein the first control signal and the second control signal are timing synchronization signals from a Sleep function or a timer.
  34. The terminal device according to claim 18, wherein the first control signal and the second control signal are screen refresh callback signals from a system.
  35. A machine-readable storage medium, wherein several computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed, the steps of the method according to any one of claims 1 to 17 are implemented.
PCT/CN2018/096708 2018-07-23 2018-07-23 Uniform video display method, terminal device, and machine-readable storage medium WO2020019139A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/096708 WO2020019139A1 (zh) 2018-07-23 2018-07-23 Uniform video display method, terminal device, and machine-readable storage medium
CN201880039235.8A CN110771160A (zh) 2018-07-23 2018-07-23 Uniform video display method, terminal device, and machine-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/096708 WO2020019139A1 (zh) 2018-07-23 2018-07-23 Uniform video display method, terminal device, and machine-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020019139A1 true WO2020019139A1 (zh) 2020-01-30

Family

ID=69181110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096708 WO2020019139A1 (zh) 2018-07-23 2018-07-23 Uniform video display method, terminal device, and machine-readable storage medium

Country Status (2)

Country Link
CN (1) CN110771160A (zh)
WO (1) WO2020019139A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510759B (zh) * 2020-03-17 2023-10-13 视联动力信息技术股份有限公司 Video display method and device, and readable storage medium
CN112422875A (zh) * 2020-10-14 2021-02-26 西安万像电子科技有限公司 Image processing method and device
CN112929741B (zh) * 2021-01-21 2023-02-03 杭州雾联科技有限公司 Video frame rendering method and device, electronic device, and storage medium
CN113254120B (zh) * 2021-04-02 2022-11-01 荣耀终端有限公司 Data processing method and related device
CN113395572B (zh) * 2021-06-15 2023-05-16 北京字跳网络技术有限公司 Video processing method and device, storage medium, and electronic device
CN113870799B (zh) * 2021-09-09 2022-11-18 瑞芯微电子股份有限公司 System display method and storage device for an electronic ink screen device
CN114205662B (zh) * 2021-12-13 2024-02-20 北京蔚领时代科技有限公司 Low-latency video rendering method and device for iOS
CN115550708B (zh) * 2022-01-07 2023-12-19 荣耀终端有限公司 Data processing method and electronic device
CN115361579A (zh) * 2022-07-28 2022-11-18 珠海全志科技股份有限公司 Video display-output method and device, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102185999A (zh) * 2011-03-28 2011-09-14 广东威创视讯科技股份有限公司 Method and device for eliminating video jitter
US20140092113A1 (en) * 2012-10-02 2014-04-03 Nvidia Corporation System, method, and computer program product for providing a dynamic display refresh
CN103747332A (zh) * 2013-12-25 2014-04-23 乐视致新电子科技(天津)有限公司 Video smoothing method and device
CN106296566A (zh) * 2016-08-12 2017-01-04 南京睿悦信息技术有限公司 Dynamic time-frame compensation rendering system and method for mobile virtual reality
CN106843859A (zh) * 2016-12-31 2017-06-13 歌尔科技有限公司 Method and device for drawing a virtual reality scene, and virtual reality device
CN107220019A (zh) * 2017-05-15 2017-09-29 努比亚技术有限公司 Rendering method based on a dynamic VSync signal, mobile terminal, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102171389B1 (ko) * 2014-04-21 2020-10-30 삼성디스플레이 주식회사 Image display system
CN106095366B (zh) * 2016-06-07 2019-01-15 北京小鸟看看科技有限公司 Method, device, and virtual reality equipment for shortening image delay


Also Published As

Publication number Publication date
CN110771160A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2020019139A1 (zh) Uniform video display method, terminal device, and machine-readable storage medium
WO2020019140A1 (zh) Video processing method, terminal device, and machine-readable storage medium
CN107018370B (zh) Display method and system for a video wall
US7528889B2 (en) System, method, and apparatus for displaying streams with dynamically changing formats
CN104918137B (zh) Method for playing video on a spliced-screen system
JP4116006B2 (ja) Screen transfer device, screen transfer system, screen transfer method, and program
US8068110B2 (en) Remote display method and system for a monitor apparatus
WO2017107450A1 (zh) Method and device for switching video display windows
CN109862409B (zh) Video decoding and playback method, device, system, terminal, and storage medium
CN105245914A (zh) Cloud desktop high-definition video transmission protocol and architecture
US8605217B1 (en) Jitter cancellation for audio/video synchronization in a non-real time operating system
TWI628958B (zh) Full-frame buffering to improve video performance in a low-latency video communication system
US9984653B1 (en) Method and device for reducing video latency
CN108769600B (zh) Desktop sharing system based on video-stream frame-rate adjustment and desktop sharing method thereof
CN112637660A (zh) Image stabilization method for playback start of a video application on Android TV
CN110300326B (zh) Method and device for detecting video stutter, electronic device, and storage medium
CN111654740A (zh) Method and device for rendering during video playback, and electronic device
CN110166733B (zh) Preview method and device, output box, server, and splicing system
CN114449344B (zh) Video streaming method and device, electronic device, and storage medium
CN114025107B (zh) Image ghosting shooting method and device, storage medium, and fusion processor
US11410700B2 (en) Video playback buffer adjustment
WO2021042341A1 (zh) Video display method, receiving end, system, and storage medium
EP3070598B1 (en) Image processing system and method
CN111510772B (zh) Method, device, equipment, and storage medium for balancing video frame-rate error
CN116112627B (zh) Method and circuit for adaptive video frame-rate conversion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18927245

Country of ref document: EP

Kind code of ref document: A1