WO2020019140A1 - Video processing method, terminal device, and machine-readable storage medium - Google Patents

Video processing method, terminal device, and machine-readable storage medium

Info

Publication number
WO2020019140A1
Authority
WO
WIPO (PCT)
Prior art keywords
decoding
thread
video
queue
video frame
Prior art date
Application number
PCT/CN2018/096709
Other languages
English (en)
French (fr)
Inventor
陈欣
Original Assignee
深圳市大疆创新科技有限公司
大疆互娱科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司, 大疆互娱科技(北京)有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/096709 priority Critical patent/WO2020019140A1/zh
Priority to CN201880039293.0A priority patent/CN110832875B/zh
Publication of WO2020019140A1 publication Critical patent/WO2020019140A1/zh


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Definitions

  • the present invention relates to the technical field of video processing, and in particular, to a video processing method, a terminal device, and a machine-readable storage medium.
  • handheld shooting devices are increasingly widely used. They can be equipped with professional cameras and can provide precise image-stabilization measures to capture high-quality images.
  • the shooting capability of terminal devices such as smartphones is not as good as that of handheld shooting devices, but they offer high-speed image processing and clear image display, which handheld shooting devices cannot match. The two can therefore be combined: the video captured by the handheld shooting device is transmitted to the terminal device for display. Because the terminal device must decode, render, and otherwise process the received video, and because video transmission itself takes time, there is a large delay between the content displayed on the handheld shooting device and on the terminal device.
  • the invention provides a video processing method, a terminal device, and a machine-readable storage medium.
  • a video processing method applied to a terminal device includes:
  • when a video frame to be decoded is decoded, a decoding thread called for decoding is used to perform the following operations on the video frame to be decoded:
  • through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue is input to a decoder for decoding;
  • the decoded video frame is obtained from the decoder through an output thread in the decoding thread and stored in a decoding queue.
  • a terminal device including a processor, a memory, and a communication bus; the memory stores several computer instructions and buffers the video code stream transmitted over the communication bus as well as the video frames converted from the video code stream; the processor is connected to the memory through the communication bus and is configured to read the computer instructions from the memory to implement:
  • when a video frame to be decoded is decoded, a decoding thread called for decoding is used to perform the following operations on the video frame to be decoded:
  • through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue is input to a decoder for decoding;
  • the decoded video frame is obtained from the decoder through an output thread in the decoding thread and stored in a decoding queue.
  • a machine-readable storage medium stores a number of computer instructions which, when executed, implement:
  • when a video frame to be decoded is decoded, a decoding thread called for decoding is used to perform the following operations on the video frame to be decoded:
  • through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue is input to a decoder for decoding;
  • the decoded video frame is obtained from the decoder through an output thread in the decoding thread and stored in a decoding queue.
  • in this embodiment, a decoding thread for decoding is called to perform the following decoding operations on the video frame to be decoded: through the input thread in the decoding thread, the video frame to be decoded obtained from the frame queue is input to a decoder for decoding; through the output thread in the decoding thread, the decoded video frame is obtained from the decoder and stored in a decoding queue.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of analyzing a video code stream according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of still another video processing method according to an embodiment of the present invention.
  • FIG. 6 is a block diagram of a terminal device according to an embodiment of the present invention.
  • handheld shooting devices are increasingly widely used. They can be equipped with professional cameras and can provide precise image-stabilization measures to capture high-quality images.
  • the shooting capability of terminal devices such as smartphones is not as good as that of handheld shooting devices, but they offer high-speed image processing and clear image display, which handheld shooting devices cannot match. The two can therefore be combined: the video captured by the handheld shooting device is transmitted to the terminal device for display. Because the terminal device must decode, render, and otherwise process the received video, and because video transmission itself takes time, there is a large delay between the content displayed on the handheld shooting device and on the terminal device.
  • FIG. 1 is a schematic diagram of an application scenario of a video processing method according to an embodiment of the present invention.
  • a handheld shooting device 10 maintains a communication state with a terminal device 30 through a communication network 20.
  • the communication network 20 may be wired or wireless.
  • the handheld shooting device 10 captures video and then sends it to the terminal device 30 as a video code stream (for example, in H.264 format) through the communication network 20 (a local area network (LAN), a wide area network (WAN), or a mobile network).
  • the terminal device 30 may execute a video processing method to obtain a video frame, and display the video frame on the display of the terminal device 30.
  • FIG. 2 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • a video processing method includes steps 201 to 203, where:
  • a decoding thread for decoding is called to perform operations corresponding to steps 202 and 203 on a video frame to be decoded.
  • the handheld shooting device encodes the captured video, for example into the H.264 format, to obtain a video code stream.
  • each video code stream may carry part of a video frame (e.g., 1/3 or 1/2 of a frame) and is then sent to the terminal device over the communication network.
  • the terminal device can parse the video code stream as it is received.
  • a circular queue can be preset in the terminal device, where the circular queue can buffer a first number of video frames (for example, 10 frames), and the terminal device buffers the received video code streams into the circular queue.
  • the terminal device can also preset a parsing thread and use it, in real time or at a set period, to detect whether the newly acquired video code stream and the buffered video code streams can form a complete video frame. For example, see FIG. 3:
  • the parsing thread polls the circular queue (corresponding to step 301) and detects the video code streams buffered in the circular queue to determine whether multiple video code streams in the circular queue can form a complete video frame (corresponding to step 302). If so, the parsing thread fetches those video code streams and buffers them (corresponding to step 303).
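  • as an illustration only (the class and method names below, such as ParserThread and isCompleteFrame, are hypothetical and not taken from the patent), the following Java sketch shows one way such a parsing loop could be organized: it polls a bounded buffer of received code-stream chunks and, once they form a complete frame, hands the assembled frame to the frame queue.

```java
import java.util.ArrayDeque;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the parsing thread described above (steps 301-303).
// It polls a buffer of received code-stream chunks and, once the buffered
// chunks form a complete video frame, moves the assembled frame to the frame queue.
class ParserThread extends Thread {
    private final BlockingQueue<byte[]> circularQueue;   // received code-stream chunks
    private final BlockingQueue<byte[]> frameQueue;      // complete frames to be decoded
    private final ArrayDeque<byte[]> pending = new ArrayDeque<>();

    ParserThread(BlockingQueue<byte[]> circularQueue, BlockingQueue<byte[]> frameQueue) {
        this.circularQueue = circularQueue;
        this.frameQueue = frameQueue;
    }

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                pending.add(circularQueue.take());        // step 301: poll the circular queue
                if (isCompleteFrame(pending)) {           // step 302: do the chunks form a frame?
                    frameQueue.put(assemble(pending));    // step 303: hand the frame to decoding
                    pending.clear();
                }
            }
        } catch (InterruptedException ignored) { }
    }

    // Placeholder check: a real implementation would inspect H.264 NAL unit boundaries.
    private boolean isCompleteFrame(ArrayDeque<byte[]> chunks) {
        return chunks.size() >= 2;
    }

    // Concatenate the buffered chunks into a single frame buffer.
    private byte[] assemble(ArrayDeque<byte[]> chunks) {
        int len = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] frame = new byte[len];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, frame, off, c.length);
            off += c.length;
        }
        return frame;
    }
}
```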
  • the terminal device may preset a frame queue, where the frame queue may buffer several video frames.
  • the terminal device may buffer the retrieved video frames into a frame queue, where the video frames in the frame queue are used as the video frames to be decoded.
  • the processor of the terminal device can detect whether there are video frames to be decoded in the frame queue, and call a decoding thread for decoding when there are video frames to be decoded in the frame queue.
  • the processor may directly call a decoding thread for decoding, read the video frame to be decoded from the frame queue through the decoding thread, and perform operations of steps 202 and 203 after obtaining the video frame to be decoded.
  • the decoding thread may include an input thread.
  • the input thread can obtain the video frames to be decoded from the frame queue and input them to the decoder, and the decoder decodes the video frames to be decoded to obtain the decoded video frames.
  • the decoder may be a MediaCodec decoder preset in the terminal device.
  • for the working principle of MediaCodec decoding, reference may be made to the related literature; it is not repeated here.
  • the decoding thread may further include an output thread.
  • the output thread can obtain the decoded video frames output by the decoder for buffering.
  • a decoding queue can be set in the terminal device in advance, so that the output thread can buffer the decoded video frames to the decoding queue.
  • it should be noted that, from the perspective of each individual video frame, the input thread and the output thread work in series. From the perspective of the decoder, however, the input thread and the output thread work in parallel. That is, the input thread cooperates with the decoder: when the decoder needs a video frame to decode, the input thread can directly send the next video frame to be decoded to the decoder. At the same time, the output thread cooperates with the decoder: when the decoder has a decoded video frame to output, the output thread can buffer that decoded video frame into the decoding queue.
  • in other words, the input thread does not need to wait for the output thread to store the decoded video frame into the decoding queue before acquiring the next video frame to be decoded from the frame queue and inputting it to the decoder for decoding; it only needs to consider whether the decoder has finished decoding the video frame.
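  • to make this concrete, here is a minimal, hedged sketch of an input thread and an output thread driving an android.media.MediaCodec decoder concurrently. The queue wiring, timeouts, and timestamp handling are assumptions introduced for this illustration and are not the patent's implementation; only the MediaCodec calls themselves are real Android API. In this sketch the decoding queue holds output-buffer indices, which a later rendering step would release via releaseOutputBuffer.

```java
import android.media.MediaCodec;

import java.nio.ByteBuffer;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of the input thread and output thread described above.
// The input thread only waits on the decoder for a free input buffer; the
// output thread independently drains decoded frames into the decoding queue,
// so neither thread has to wait for the other.
class InputThread extends Thread {
    private final MediaCodec codec;
    private final BlockingQueue<byte[]> frameQueue;   // video frames to be decoded
    private long ptsUs = 0;                           // assumed ~30 fps presentation timestamps

    InputThread(MediaCodec codec, BlockingQueue<byte[]> frameQueue) {
        this.codec = codec;
        this.frameQueue = frameQueue;
    }

    @Override public void run() {
        try {
            while (!isInterrupted()) {
                byte[] frame = frameQueue.take();             // next frame to be decoded
                int index;
                do {                                          // wait until the decoder can accept it
                    index = codec.dequeueInputBuffer(10_000);
                } while (index < 0 && !isInterrupted());
                if (index < 0) break;
                ByteBuffer in = codec.getInputBuffer(index);
                in.clear();
                in.put(frame);
                codec.queueInputBuffer(index, 0, frame.length, ptsUs, 0);
                ptsUs += 33_333;
            }
        } catch (InterruptedException ignored) { }
    }
}

class OutputThread extends Thread {
    private final MediaCodec codec;
    private final BlockingQueue<Integer> decodeQueue; // decoded frames (as output-buffer indices)

    OutputThread(MediaCodec codec, BlockingQueue<Integer> decodeQueue) {
        this.codec = codec;
        this.decodeQueue = decodeQueue;
    }

    @Override public void run() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        try {
            while (!isInterrupted()) {
                int index = codec.dequeueOutputBuffer(info, 10_000); // wait for a decoded frame
                if (index >= 0) {
                    decodeQueue.put(index);                          // store it in the decoding queue
                }
            }
        } catch (InterruptedException ignored) { }
    }
}
```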
  • because the video code stream sent by the handheld shooting device corresponds to several video frames, and the decoder continuously decodes the input video frames to be decoded, the video frame to be decoded being input to the decoder and the decoded video frame being output by the decoder at the same moment are different video frames.
  • Step 202 may be performed before step 203, may be performed after step 203, or may be performed simultaneously with step 203.
  • afterwards, the processor of the terminal device may perform rendering and display operations on the decoded video frames.
  • the rendering and display operations may be implemented by using related technologies or the solutions of the subsequent embodiments, which are not described here.
  • FIG. 4 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • a video processing method includes steps 401 to 405, where:
  • step 401 is the same in method and principle as step 201; for a detailed description, refer to FIG. 2 and the related content of step 201, which is not repeated here.
  • step 402 is the same in method and principle as step 202; for a detailed description, refer to FIG. 2 and the related content of step 202, which is not repeated here.
  • step 403 is the same in method and principle as step 203; for a detailed description, refer to FIG. 2 and the related content of step 203, which is not repeated here.
  • invoke a rendering thread for rendering, read a video frame from the decoding queue for rendering, and place the rendered video frame into a preset display queue.
  • the terminal device may preset a rendering thread for rendering.
  • the processor of the terminal device can call a rendering thread, read the decoded video frame from the decoding queue for rendering through the rendering thread, and cache it after rendering.
  • a display queue may be set in advance, and the display queue may buffer multiple rendered video frames.
  • the terminal device may preset a display thread for display.
  • the processor of the terminal device can call a display thread, read the rendered video frame from the display queue through the display thread, and swap it to the display of the terminal device, and the display displays the video frame.
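  • as an illustration only (Frame, Renderer, and Screen below are placeholder types invented for this sketch; the patent does not specify any particular rendering or display API), a rendering thread and a display thread cooperating through a display queue might look like this:

```java
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the rendering thread and display thread described above.
// Frame, Renderer.render(), and Screen.present() stand in for whatever decoded-frame
// type and drawing/swap calls a real implementation would use.
class RenderThread extends Thread {
    private final BlockingQueue<Frame> decodeQueue;   // decoded frames
    private final BlockingQueue<Frame> displayQueue;  // rendered frames
    private final Renderer renderer;

    RenderThread(BlockingQueue<Frame> decodeQueue, BlockingQueue<Frame> displayQueue, Renderer renderer) {
        this.decodeQueue = decodeQueue;
        this.displayQueue = displayQueue;
        this.renderer = renderer;
    }

    @Override public void run() {
        try {
            while (!isInterrupted()) {
                Frame frame = decodeQueue.take();     // read a decoded frame
                renderer.render(frame);               // render it (time varies per frame)
                displayQueue.put(frame);              // hand it to the display thread
            }
        } catch (InterruptedException ignored) { }
    }
}

class DisplayThread extends Thread {
    private final BlockingQueue<Frame> displayQueue;
    private final Screen screen;

    DisplayThread(BlockingQueue<Frame> displayQueue, Screen screen) {
        this.displayQueue = displayQueue;
        this.screen = screen;
    }

    @Override public void run() {
        try {
            while (!isInterrupted()) {
                screen.present(displayQueue.take());  // swap the rendered frame to the display
            }
        } catch (InterruptedException ignored) { }
    }
}

// Minimal placeholder types so the sketch is self-contained.
class Frame { }
interface Renderer { void render(Frame f); }
interface Screen { void present(Frame f); }
```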
  • it should be noted that the input thread, output thread, parsing thread, rendering thread, and display thread are different threads that work in parallel with one another; they differ in the following two respects:
  • the first difference: the parsing thread, the input and output threads, the rendering thread, and the display thread respectively correspond to parsing the video code stream, decoding video frames, rendering video frames, and displaying video frames. That is, the process of turning the received video code stream into displayed video frames is divided into a parsing task, a decoding task, a rendering task, and a display task; the parsing thread performs the parsing task, the input thread and output thread perform the decoding task, the rendering thread performs the rendering task, and the display thread performs the display task.
  • the second difference: the processor can call the input thread, output thread, parsing thread, rendering thread, and display thread at the same time or at different times, so that tasks are processed in parallel at the same moment: the display thread can display a first video frame, the rendering thread can render a second video frame, the input thread and output thread can decode a third video frame, and the parsing thread can parse a fourth video frame.
  • so far, in this embodiment, the display of each video frame is divided into four stages (parsing, decoding, rendering, and display), and the task corresponding to each stage is performed by a different thread, so that the stages are independent of one another.
  • moreover, by calling different threads to execute the stage tasks of different video frames in parallel, concurrent processing is achieved, and the delay with which the terminal device displays the video transmitted by the handheld shooting device can be reduced.
  • FIG. 5 is a schematic flowchart of a video processing method according to an embodiment of the present invention.
  • a video processing method includes steps 501 to 505, where:
  • step 501 is the same in method and principle as step 201; for a detailed description, refer to FIG. 2 and the related content of step 201, which is not repeated here.
  • step 502 is the same in method and principle as step 202; for a detailed description, refer to FIG. 2 and the related content of step 202, which is not repeated here.
  • in this embodiment, the processor obtains the decoded video frame from the decoder through the output thread in the decoding thread (corresponding to step 5032). After a video frame has been decoded, the processor also obtains the number of video frames buffered in the decoding queue (corresponding to step 5032). The processor then obtains a preset first set number and compares the number of buffered video frames with the first set number (corresponding to step 5033). If they are equal, it discards the video frame with the earliest buffering time in the decoding queue (corresponding to step 5034), for example the video frame at the head of the decoding queue, or several video frames at the head of the queue. The processor then places the newly acquired video frame at the tail of the decoding queue (corresponding to step 5035). If they are not equal, the processor directly places the newly acquired video frame at the tail of the decoding queue (corresponding to step 5035).
  • in this way, by discarding buffered video frames, the problem of delay accumulation caused by too many video frames being buffered in the decoding queue can be overcome.
  • delay accumulates when too many video frames are buffered because the rendering thread takes a different amount of time to render each video frame, while the time the decoding thread takes to decode a video frame is relatively fixed; that is, the rate at which video frames are received into the decoding queue differs from the rate at which they are released from it. As the number of video frames between the frame decoded by the decoding thread and the frame rendered by the rendering thread keeps growing, the delay between the video frames displayed by the handheld shooting device and those displayed on the terminal device accumulates.
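  • one possible shape of such a bounded, drop-oldest decoding queue is sketched below. This is an assumption for illustration only: the patent specifies comparing the buffered count with the first set number and discarding the earliest-buffered frame, but not any particular data structure or class.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the drop-oldest policy described above: when the number
// of buffered decoded frames reaches the first set number, the frame with the
// earliest buffering time (the head of the queue) is discarded before the newly
// decoded frame is appended at the tail.
class DropOldestDecodeQueue<T> {
    private final ArrayDeque<T> frames = new ArrayDeque<>();
    private final int firstSetNumber;          // maximum number of frames to keep buffered

    DropOldestDecodeQueue(int firstSetNumber) {
        this.firstSetNumber = firstSetNumber;
    }

    synchronized void offer(T decodedFrame) {
        if (frames.size() == firstSetNumber) { // queue "full": delay would otherwise accumulate
            frames.pollFirst();                // discard the earliest-buffered frame (the head)
        }
        frames.addLast(decodedFrame);          // place the new frame at the tail
    }

    synchronized T poll() {                    // the rendering thread reads from the head
        return frames.pollFirst();
    }
}
```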
  • step 504 is the same in method and principle as step 404; for a detailed description, refer to FIG. 4 and the related content of step 404, which is not repeated here.
  • step 505 is the same in method and principle as step 405; for a detailed description, refer to FIG. 4 and the related content of step 405, which is not repeated here.
  • so far, in this embodiment, by discarding excess video frames buffered in the decoding queue, a reduced display frame rate is achieved, thereby reducing the delay between the video frames displayed by the handheld shooting device and those displayed by the terminal device.
  • in addition, in this embodiment, the display of each video frame is divided into four stages (parsing, decoding, rendering, and display), and the task corresponding to each stage is performed by a different thread, so that the stages are independent of one another.
  • moreover, by calling different threads to execute the stage tasks of different video frames in parallel, concurrent processing is achieved, and the delay with which the terminal device displays the video transmitted by the handheld shooting device can be reduced.
  • FIG. 6 is a block diagram of a terminal device according to an embodiment of the present invention.
  • a terminal device includes a processor 601, a memory 602, and a communication bus 603.
  • the memory 602 stores several computer instructions and buffers the video code stream received over the communication bus 603 as well as the video frames converted from the video code stream; the processor 601 is connected to the memory 602 through the communication bus 603 and is configured to read the computer instructions from the memory 602 to implement:
  • when a video frame to be decoded is decoded, a decoding thread called for decoding is used to perform the following operations on the video frame to be decoded:
  • through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue is input to a decoder for decoding;
  • the decoded video frame is obtained from the decoder through an output thread in the decoding thread and stored in a decoding queue.
  • the video frames to be decoded input by the input thread and the decoded video frames obtained by the output thread are different video frames.
  • in an embodiment, before calling the decoding thread for decoding, the processor 601 is further configured to:
  • call a parsing thread for parsing to parse the received video code stream, and when multiple video code streams can form a complete video frame, buffer the multiple video code streams into the frame queue as a video frame to be decoded.
  • in an embodiment, the processor 601 being configured to parse the received video code stream includes: polling a circular queue in which the received video code streams are buffered, and detecting the video code streams buffered in the circular queue to determine whether multiple video code streams can form a complete video frame.
  • in an embodiment, the circular queue can buffer a first number of video frames to be decoded.
  • in an embodiment, before the obtained decoded video frame is stored in the decoding queue, the processor 601 is further configured so that:
  • the output thread also determines whether the decoding queue is full;
  • if it is full, the video frame at the head of the decoding queue is discarded and the acquired video frame is placed at the tail of the decoding queue; if it is not full, the acquired video frame is placed at the tail of the decoding queue.
  • in an embodiment, after calling the decoding thread for decoding, the processor 601 is further configured to:
  • a rendering thread for rendering is called, a video frame is read from the decoding queue for rendering, and the rendered video frame is placed in a preset display queue.
  • in an embodiment, after the rendered video frame is placed into the preset display queue, the processor 601 is further configured to:
  • call a display thread for display, swap the video frame read from the display queue to a display, and the display displays the video frame.
  • the decoder is a MediaCodec decoder.
  • the input thread, the output thread, the parsing thread, the rendering thread, and the display thread are different threads that work in parallel with each other.
  • An embodiment of the present invention also provides a machine-readable storage medium that can be configured on a terminal device; the machine-readable storage medium stores a number of computer instructions, a video code stream, and video frames converted from the video code stream; when the computer instructions are executed, the following processing is performed:
  • acquiring a video frame to be displayed; when a first control signal is received, rendering the video frame to be displayed; when a second control signal is received, swapping the rendered video frame to the display of the terminal device, where a set time interval separates the first control signal and the second control signal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video processing method, a terminal device, and a machine-readable storage medium. A video processing method comprises: when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded: inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding; and obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue. It can be seen that, in this embodiment, by providing the input thread and the output thread, the operation of inputting data to the decoder and the operation of obtaining output data can be performed at the same time, reducing the delay caused by serial input and output of data and thus reducing the delay of the previewed video.

Description

Video processing method, terminal device, and machine-readable storage medium
Technical Field
The present invention relates to the technical field of video processing, and in particular to a video processing method, a terminal device, and a machine-readable storage medium.
Background
At present, handheld shooting devices are increasingly widely used. They can be equipped with professional cameras and can provide precise image-stabilization measures, and can therefore capture high-quality images. The shooting capability of terminal devices such as smartphones is not as good as that of handheld shooting devices, but they offer high-speed image processing and clear image display, which handheld shooting devices cannot match. The two can therefore be combined: the video captured by the handheld shooting device is transmitted to the terminal device for display. Because the terminal device must decode, render, and otherwise process the received video, and because video transmission itself takes time, there is a large delay between the content displayed on the handheld shooting device and on the terminal device.
Summary of the Invention
The present invention provides a video processing method, a terminal device, and a machine-readable storage medium.
According to a first aspect of the present invention, a video processing method applied to a terminal device is provided, comprising:
when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
According to a second aspect of the present invention, a terminal device is provided, comprising a processor, a memory, and a communication bus; the memory stores several computer instructions and buffers the video code stream transmitted over the communication bus as well as the video frames converted from the video code stream; the processor is connected to the memory through the communication bus and is configured to read the computer instructions from the memory to implement:
when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
According to a third aspect of the present invention, a machine-readable storage medium is provided, the machine-readable storage medium storing several computer instructions which, when executed, implement:
when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
As can be seen from the above technical solutions, in this embodiment a decoding thread for decoding is called to perform the following decoding operations on the video frame to be decoded: through the input thread in the decoding thread, the video frame to be decoded obtained from the frame queue is input to the decoder for decoding; through the output thread in the decoding thread, the decoded video frame is obtained from the decoder and stored in the decoding queue. It can be seen that, in this embodiment, by providing an input thread and an output thread, the operation of inputting data to the decoder and the operation of obtaining output data can be performed at the same time, reducing the delay caused by serial input and output of data and thus reducing the delay of the previewed video.
Brief Description of the Drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may further obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a video processing method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of parsing a video code stream according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another video processing method according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of still another video processing method according to an embodiment of the present invention;
FIG. 6 is a block diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
At present, handheld shooting devices are increasingly widely used. They can be equipped with professional cameras and can provide precise image-stabilization measures, and can therefore capture high-quality images. The shooting capability of terminal devices such as smartphones is not as good as that of handheld shooting devices, but they offer high-speed image processing and clear image display, which handheld shooting devices cannot match. The two can therefore be combined: the video captured by the handheld shooting device is transmitted to the terminal device for display. Because the terminal device must decode, render, and otherwise process the received video, and because video transmission itself takes time, there is a large delay between the content displayed on the handheld shooting device and on the terminal device.
To this end, an embodiment of the present invention provides a video processing method that can be applied to terminal devices with a display function, such as smartphones, handheld shooting devices, and PCs; the following description takes a terminal device as an example. FIG. 1 is a schematic diagram of an application scenario of the video processing method according to an embodiment of the present invention. Referring to FIG. 1, a handheld shooting device 10 remains in communication with a terminal device 30 through a communication network 20. The communication network 20 may be wired or wireless. In this embodiment, the handheld shooting device 10 captures video and then sends it to the terminal device 30 as a video code stream (for example, in H.264 format) through the communication network 20 (a local area network (LAN), a wide area network (WAN), or a mobile network). The terminal device 30 may execute the video processing method to obtain video frames and display them on the display of the terminal device 30.
FIG. 2 is a schematic flowchart of a video processing method according to an embodiment of the present invention. Referring to FIG. 2, a video processing method includes steps 201 to 203, where:
201: when a video frame to be decoded is decoded, call a decoding thread for decoding to perform the operations corresponding to steps 202 and 203 on the video frame to be decoded.
In practical applications, constrained by the bandwidth of the communication network between the handheld shooting device and the terminal device, the handheld shooting device encodes the captured video, for example into the H.264 format, to obtain video code streams, where each video code stream may carry part of a video frame (for example, 1/3 or 1/2 of a frame). The code streams are then sent to the terminal device over the communication network.
In this embodiment, the terminal device can parse the video code stream as it is received. A circular queue may be preset in the terminal device, where the circular queue can buffer a first number of video frames (for example, 10 frames), and the terminal device buffers the received video code streams into the circular queue. In addition, the terminal device may preset a parsing thread and use it, in real time or at a set period, to detect whether a newly acquired video code stream and the buffered video code streams can form a complete video frame. For example, referring to FIG. 3, the parsing thread polls the circular queue (corresponding to step 301) and detects the video code streams buffered in the circular queue to determine whether multiple video code streams in the circular queue can form a complete video frame (corresponding to step 302). If so, the parsing thread fetches those video code streams and buffers them (corresponding to step 303).
In this embodiment, the terminal device may preset a frame queue, where the frame queue can buffer several video frames. The terminal device may buffer the fetched video frames into the frame queue, where the video frames in the frame queue serve as the video frames to be decoded.
The processor of the terminal device may detect whether there is a video frame to be decoded in the frame queue, and call the decoding thread for decoding when there is. Alternatively, the processor may directly call the decoding thread for decoding, read the video frame to be decoded from the frame queue through the decoding thread, and perform the operations of steps 202 and 203 after obtaining the video frame to be decoded.
202: through the input thread in the decoding thread, input the video frame to be decoded obtained from the frame queue into the decoder for decoding.
In this embodiment, the decoding thread may include an input thread. The input thread can obtain the video frame to be decoded from the frame queue and input it into the decoder, and the decoder decodes the video frame to be decoded to obtain the decoded video frame.
In this embodiment, the decoder may be a MediaCodec decoder preset in the terminal device. For the working principle of MediaCodec decoding, reference may be made to the related literature; it is not repeated here.
203: through the output thread in the decoding thread, obtain the decoded video frame from the decoder and store it in the decoding queue.
In this embodiment, the decoding thread may further include an output thread. The output thread can obtain the decoded video frames output by the decoder and buffer them. In an embodiment, a decoding queue may be preset in the terminal device, so that the output thread can buffer the decoded video frames into the decoding queue.
It should be noted that, from the perspective of each individual video frame, the input thread and the output thread work in series. From the perspective of the decoder, however, the input thread and the output thread work in parallel. That is, the input thread cooperates with the decoder: when the decoder needs a video frame to be decoded, the input thread simply sends the next video frame to be decoded to the decoder. At the same time, the output thread cooperates with the decoder: when the decoder has a decoded video frame to output, the output thread buffers the decoded video frame into the decoding queue. In other words, the input thread does not need to wait for the output thread to store the decoded video frame into the decoding queue before obtaining the next video frame to be decoded from the frame queue and inputting it to the decoder for decoding; it only needs to consider whether the decoder has finished decoding the video frame.
Because the video code stream sent by the handheld shooting device corresponds to several video frames and the decoder continuously decodes the input video frames to be decoded, the video frame to be decoded being input to the decoder and the decoded video frame being output by the decoder at the same moment are different video frames. In other words, this embodiment focuses on the multi-frame scenario in which the input thread and the output thread work in parallel, so the execution order of steps 202 and 203 is not limited: step 202 may be performed before step 203, after step 203, or simultaneously with step 203.
Afterwards, the processor of the terminal device may perform rendering and display operations on the decoded video frames, where the rendering and display operations may be implemented using related technologies or using the solutions of the subsequent embodiments, which are not described here for the moment.
So far, compared with the related-art solution in which a single decoding thread switches back and forth between inputting video frames to the decoder and obtaining the video frames output by the decoder, providing an input thread and an output thread in this embodiment allows the operation of inputting data to the decoder and the operation of obtaining output data to proceed at the same time, reducing the delay caused by serial input and output of data and thus reducing the delay of the previewed video.
FIG. 4 is a schematic flowchart of a video processing method according to an embodiment of the present invention. Referring to FIG. 4, a video processing method includes steps 401 to 405, where:
401: when a video frame to be decoded is decoded, call a decoding thread for decoding to perform the operations corresponding to steps 402 and 403 on the video frame to be decoded.
Step 401 is the same in method and principle as step 201; for a detailed description, refer to FIG. 2 and the related content of step 201, which is not repeated here.
402: through the input thread in the decoding thread, input the video frame to be decoded obtained from the frame queue into the decoder for decoding.
Step 402 is the same in method and principle as step 202; for a detailed description, refer to FIG. 2 and the related content of step 202, which is not repeated here.
403: through the output thread in the decoding thread, obtain the decoded video frame from the decoder and store it in the decoding queue.
Step 403 is the same in method and principle as step 203; for a detailed description, refer to FIG. 2 and the related content of step 203, which is not repeated here.
404: call a rendering thread for rendering, read a video frame from the decoding queue for rendering, and place the rendered video frame into a preset display queue.
In this embodiment, the terminal device may preset a rendering thread for rendering. The processor of the terminal device can call the rendering thread, read the decoded video frame from the decoding queue through the rendering thread for rendering, and buffer the frame after rendering.
In an embodiment, a display queue may be preset, and the display queue can buffer multiple rendered video frames.
405: call a display thread for display, swap the video frame read from the display queue to a display, and the display displays the video frame.
In this embodiment, the terminal device may preset a display thread for display. The processor of the terminal device can call the display thread, read the rendered video frame from the display queue through the display thread, and swap it to the display of the terminal device, which displays the video frame.
It should be noted that, in this embodiment, the input thread, the output thread, the parsing thread, the rendering thread, and the display thread are different threads that work in parallel with one another, and they differ in the following respects:
First difference: the parsing thread, the input and output threads, the rendering thread, and the display thread respectively correspond to the processes of parsing the video code stream, decoding video frames, rendering video frames, and displaying video frames. That is, the process of turning the received video code stream into video frames is divided into a parsing task, a decoding task, a rendering task, and a display task: the parsing thread performs the parsing task, the input thread and the output thread perform the decoding task, the rendering thread performs the rendering task, and the display thread performs the display task.
Second difference: the processor can call the input thread, the output thread, the parsing thread, the rendering thread, and the display thread at the same time or at different times, so that tasks are processed in parallel at the same moment: the display thread can display a first video frame, the rendering thread can render a second video frame, the input thread and the output thread can decode a third video frame, and the parsing thread can parse a fourth video frame.
So far, in this embodiment the display of each video frame is divided into four stages, namely parsing, decoding, rendering, and display, and the task corresponding to each stage is performed by a different thread, so that the stages are independent of one another. Moreover, by calling different threads to execute the stage tasks of different video frames in parallel, concurrent processing is achieved, and the delay with which the terminal device displays the video transmitted by the handheld shooting device can be reduced.
FIG. 5 is a schematic flowchart of a video processing method according to an embodiment of the present invention. Referring to FIG. 5, a video processing method includes steps 501 to 505, where:
501: when a video frame to be decoded is decoded, call a decoding thread for decoding to perform the operations corresponding to steps 502 and 503 on the video frame to be decoded.
Step 501 is the same in method and principle as step 201; for a detailed description, refer to FIG. 2 and the related content of step 201, which is not repeated here.
502: through the input thread in the decoding thread, input the video frame to be decoded obtained from the frame queue into the decoder for decoding.
Step 502 is the same in method and principle as step 202; for a detailed description, refer to FIG. 2 and the related content of step 202, which is not repeated here.
503: through the output thread in the decoding thread, obtain the decoded video frame from the decoder and store it in the decoding queue.
In this embodiment, the processor obtains the decoded video frame from the decoder through the output thread in the decoding thread (corresponding to step 5032). After a video frame has been decoded, the processor also obtains the number of video frames buffered in the decoding queue (corresponding to step 5032). The processor then obtains a preset first set number and compares the number of buffered video frames with the first set number (corresponding to step 5033). If they are equal, the processor discards the video frame with the earliest buffering time in the decoding queue (corresponding to step 5034), for example the video frame at the head of the decoding queue, or several video frames at the head of the queue, and then places the newly obtained video frame at the tail of the decoding queue (corresponding to step 5035). If they are not equal, the processor directly places the newly obtained video frame at the tail of the decoding queue (corresponding to step 5035).
In this way, by discarding buffered video frames, this embodiment can overcome the problem of delay accumulation caused by too many video frames being buffered in the decoding queue. Delay accumulates when too many video frames are buffered because the rendering thread takes a different amount of time to render each video frame, while the time the decoding thread takes to decode a video frame is relatively fixed; that is, the rate at which video frames are received into the decoding queue differs from the rate at which they are released from it. As the number of video frames between the frame decoded by the decoding thread and the frame rendered by the rendering thread keeps growing, the delay between the video frames displayed by the handheld shooting device and those displayed on the terminal device accumulates.
504: call a rendering thread for rendering, read a video frame from the decoding queue for rendering, and place the rendered video frame into a preset display queue.
Step 504 is the same in method and principle as step 404; for a detailed description, refer to FIG. 4 and the related content of step 404, which is not repeated here.
505: call a display thread for display, swap the video frame read from the display queue to the display, and the display displays the video frame.
Step 505 is the same in method and principle as step 405; for a detailed description, refer to FIG. 4 and the related content of step 405, which is not repeated here.
So far, in this embodiment, by discarding excess video frames buffered in the decoding queue, a reduced display frame rate is achieved, thereby reducing the delay between the video frames displayed by the handheld shooting device and those displayed by the terminal device. In addition, in this embodiment the display of each video frame is divided into four stages, namely parsing, decoding, rendering, and display, and the task corresponding to each stage is performed by a different thread, so that the stages are independent of one another. Moreover, by calling different threads to execute the stage tasks of different video frames in parallel, concurrent processing is achieved, and the delay with which the terminal device displays the video transmitted by the handheld shooting device can be reduced.
FIG. 6 is a block diagram of a terminal device according to an embodiment of the present invention. Referring to FIG. 6, a terminal device includes a processor 601, a memory 602, and a communication bus 603. The memory 602 stores several computer instructions and buffers the video code stream received over the communication bus 603 as well as the video frames converted from the video code stream; the processor 601 is connected to the memory 602 through the communication bus 603 and is configured to read the computer instructions from the memory 602 to implement:
when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
In an embodiment, the video frame to be decoded input by the input thread and the decoded video frame obtained by the output thread are different video frames.
In an embodiment, before calling the decoding thread for decoding, the processor 601 is further configured to:
call a parsing thread for parsing to parse the received video code stream, and when multiple video code streams can form a complete video frame, buffer the multiple video code streams into the frame queue as a video frame to be decoded.
In an embodiment, the processor 601 being configured to parse the received video code stream includes:
polling a circular queue in which the received video code streams are buffered;
detecting the video code streams buffered in the circular queue to determine whether multiple video code streams can form a complete video frame.
In an embodiment, the circular queue can buffer a first number of video frames to be decoded.
In an embodiment, before storing the obtained decoded video frame in the decoding queue, the processor 601 is further configured to:
have the output thread further determine whether the decoding queue is full;
if it is full, discard the video frame at the head of the decoding queue and place the obtained video frame at the tail of the decoding queue; if it is not full, place the obtained video frame at the tail of the decoding queue.
In an embodiment, after calling the decoding thread for decoding, the processor 601 is further configured to:
call a rendering thread for rendering, read a video frame from the decoding queue for rendering, and place the rendered video frame into a preset display queue.
In an embodiment, after placing the rendered video frame into the preset display queue, the processor 601 is further configured to:
call a display thread for display, swap the video frame read from the display queue to a display, and the display displays the video frame.
In an embodiment, the decoder is a MediaCodec decoder.
In an embodiment, the input thread, the output thread, the parsing thread, the rendering thread, and the display thread are different threads that work in parallel with one another.
An embodiment of the present invention further provides a machine-readable storage medium, which may be configured on a terminal device; the machine-readable storage medium stores several computer instructions, a video code stream, and video frames converted from the video code stream; when the computer instructions are executed, the following processing is performed:
obtaining a video frame to be displayed;
when a first control signal is received, rendering the video frame to be displayed;
when a second control signal is received, swapping the rendered video frame to the display of the terminal device, where a set time interval separates the first control signal and the second control signal.
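As a hedged illustration of this control-signal pacing (the ScheduledExecutorService-based timer and the renderFrame/swapToDisplay callbacks are assumptions introduced for this sketch; the embodiment only requires that a set time interval separate the two signals), one possible shape is:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a timer issues the first control signal (render the frame)
// and, after the set interval, the second control signal (swap it to the display).
class PacedDisplayController {
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final long intervalMs;                 // set time between the two control signals

    PacedDisplayController(long intervalMs) {
        this.intervalMs = intervalMs;
    }

    void show(Runnable renderFrame, Runnable swapToDisplay) {
        timer.execute(renderFrame);                                        // first control signal
        timer.schedule(swapToDisplay, intervalMs, TimeUnit.MILLISECONDS);  // second control signal
    }
}
```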
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The apparatus and method provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for a person of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (21)

  1. A video processing method, characterized in that it is applied to a terminal device and comprises:
    when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
    inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
    obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
  2. The video processing method according to claim 1, characterized by further comprising:
    the input thread obtaining a video frame to be decoded from the frame queue and inputting it into the decoder for decoding while, at the same time, the output thread stores a video frame already decoded by the decoder into the decoding queue.
  3. The video processing method according to claim 1, characterized in that, before calling the decoding thread for decoding, the method further comprises:
    calling a parsing thread for parsing to parse the received video code stream, and when multiple video code streams can form a complete video frame, buffering the multiple video code streams into the frame queue as a video frame to be decoded.
  4. The video processing method according to claim 3, characterized in that parsing the received video code stream comprises:
    polling a circular queue in which the received video code streams are buffered;
    detecting the video code streams buffered in the circular queue to determine whether multiple video code streams can form a complete video frame.
  5. The video processing method according to claim 3, characterized in that the circular queue can buffer a first number of video frames to be decoded.
  6. The video processing method according to claim 1, characterized in that, before storing the obtained decoded video frame in the decoding queue, the method further comprises:
    the output thread further determining whether the decoding queue is full;
    if it is full, discarding the video frame at the head of the decoding queue and placing the obtained video frame at the tail of the decoding queue; if it is not full, placing the obtained video frame at the tail of the decoding queue.
  7. The video processing method according to claim 1, characterized in that, after calling the decoding thread for decoding, the method further comprises:
    calling a rendering thread for rendering, reading a video frame from the decoding queue for rendering, and placing the rendered video frame into a preset display queue.
  8. The video processing method according to claim 7, characterized in that, after placing the rendered video frame into the preset display queue, the method further comprises:
    calling a display thread for display, and swapping the video frame read from the display queue to a display, the display displaying the video frame.
  9. The video processing method according to claim 1, characterized in that the decoder is a MediaCodec decoder.
  10. The video processing method according to claim 1, characterized in that the input thread, the output thread, a parsing thread, a rendering thread, and a display thread are different threads that work in parallel with one another.
  11. A terminal device, characterized by comprising a processor, a memory, and a communication bus; the memory stores several computer instructions and buffers the video code stream transmitted over the communication bus as well as the video frames converted from the video code stream; the processor is connected to the memory through the communication bus and is configured to read the computer instructions from the memory to implement:
    when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
    inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
    obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
  12. The terminal device according to claim 11, characterized in that the video frame to be decoded input by the input thread and the decoded video frame obtained by the output thread are different video frames.
  13. The terminal device according to claim 11, characterized in that, before calling the decoding thread for decoding, the processor is further configured to:
    call a parsing thread for parsing to parse the received video code stream, and when multiple video code streams can form a complete video frame, buffer the multiple video code streams into the frame queue as a video frame to be decoded.
  14. The terminal device according to claim 13, characterized in that the processor being configured to parse the received video code stream comprises:
    polling a circular queue in which the received video code streams are buffered;
    detecting the video code streams buffered in the circular queue to determine whether multiple video code streams can form a complete video frame.
  15. The terminal device according to claim 13, characterized in that the circular queue can buffer a first number of video frames to be decoded.
  16. The terminal device according to claim 11, characterized in that, before storing the obtained decoded video frame in the decoding queue, the processor is further configured to:
    have the output thread further determine whether the decoding queue is full;
    if it is full, discard the video frame at the head of the decoding queue and place the obtained video frame at the tail of the decoding queue; if it is not full, place the obtained video frame at the tail of the decoding queue.
  17. The terminal device according to claim 11, characterized in that, after calling the decoding thread for decoding, the processor is further configured to:
    call a rendering thread for rendering, read a video frame from the decoding queue for rendering, and place the rendered video frame into a preset display queue.
  18. The terminal device according to claim 17, characterized in that, after placing the rendered video frame into the preset display queue, the processor is further configured to:
    call a display thread for display, and swap the video frame read from the display queue to a display, the display displaying the video frame.
  19. The terminal device according to claim 11, characterized in that the decoder is a MediaCodec decoder.
  20. The terminal device according to claim 11, characterized in that the input thread, the output thread, a parsing thread, a rendering thread, and a display thread are different threads.
  21. A machine-readable storage medium, characterized in that the machine-readable storage medium stores several computer instructions which, when executed, implement:
    when a video frame to be decoded is decoded, calling a decoding thread for decoding to perform the following operations on the video frame to be decoded:
    inputting, through an input thread in the decoding thread, the video frame to be decoded obtained from a frame queue into a decoder for decoding;
    obtaining, through an output thread in the decoding thread, the decoded video frame from the decoder and storing it in a decoding queue.
PCT/CN2018/096709 2018-07-23 2018-07-23 Video processing method, terminal device, and machine-readable storage medium WO2020019140A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/096709 WO2020019140A1 (zh) 2018-07-23 2018-07-23 Video processing method, terminal device, and machine-readable storage medium
CN201880039293.0A CN110832875B (zh) 2018-07-23 2018-07-23 Video processing method, terminal device, and machine-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/096709 WO2020019140A1 (zh) 2018-07-23 2018-07-23 Video processing method, terminal device, and machine-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020019140A1 true WO2020019140A1 (zh) 2020-01-30

Family

ID=69181107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096709 WO2020019140A1 (zh) 2018-07-23 2018-07-23 视频处理方法、终端设备、机器可读存储介质

Country Status (2)

Country Link
CN (1) CN110832875B (zh)
WO (1) WO2020019140A1 (zh)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510759B (zh) * 2020-03-17 2023-10-13 视联动力信息技术股份有限公司 视频显示方法、装置及可读存储介质
CN112468875B (zh) * 2020-11-30 2022-03-29 展讯通信(天津)有限公司 视频解码帧的显示输出控制方法及装置、存储介质、终端
CN112995532B (zh) * 2021-02-03 2023-06-13 上海哔哩哔哩科技有限公司 视频处理方法及装置
CN113395523B (zh) * 2021-06-11 2023-05-30 深圳万兴软件有限公司 基于并行线程的图像解码方法、装置、设备及存储介质
CN113873345B (zh) * 2021-09-27 2023-11-14 中国电子科技集团公司第二十八研究所 一种分布式的超高清视频同步处理方法
CN113923507B (zh) * 2021-12-13 2022-07-22 北京蔚领时代科技有限公司 Android端的低延迟视频渲染方法及装置
CN114205662B (zh) * 2021-12-13 2024-02-20 北京蔚领时代科技有限公司 iOS端的低延迟视频渲染方法及装置
CN114666292A (zh) * 2022-04-01 2022-06-24 广州大学 一种即时通信技术方法、系统及介质
CN115002541A (zh) * 2022-05-26 2022-09-02 深圳市瑞云科技有限公司 一种降低客户端云串流渲染的系统
CN115361579A (zh) * 2022-07-28 2022-11-18 珠海全志科技股份有限公司 视频送显方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215977A1 (en) * 2007-02-06 2013-08-22 Microsoft Corporation Scalable multi-thread video decoding
CN105263021A (zh) * 2015-10-13 2016-01-20 华南理工大学 一种基于uvd的hevc视频解码方法
CN105992005A (zh) * 2015-03-04 2016-10-05 广州市动景计算机科技有限公司 视频解码方法、装置及终端设备
US20180035125A1 (en) * 2016-07-28 2018-02-01 Hypori, Inc. System, method and computer program product for generating remote views in a virtual mobile device platform using efficient macroblock comparison during display encoding, including efficient detection of unchanged macroblocks
CN108093293A (zh) * 2018-01-15 2018-05-29 北京奇艺世纪科技有限公司 一种视频渲染方法及系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030002578A1 (en) * 2000-12-11 2003-01-02 Ikuo Tsukagoshi System and method for timeshifting the encoding/decoding of audio/visual signals in real-time
US20040181611A1 (en) * 2003-03-14 2004-09-16 Viresh Ratnakar Multimedia streaming system for wireless handheld devices
US7765547B2 (en) * 2004-11-24 2010-07-27 Maxim Integrated Products, Inc. Hardware multithreading systems with state registers having thread profiling data
CN101984672B (zh) * 2010-11-03 2012-10-17 深圳芯邦科技股份有限公司 多线程的音视频同步控制方法及装置
CN103369299A (zh) * 2012-04-09 2013-10-23 维图通讯有限公司 一种基于h.264编码技术的视频监控方法
CN103716644A (zh) * 2013-12-05 2014-04-09 南京肯麦思智能技术有限公司 一种h264多粒度并行的处理方法
CN104219555B (zh) * 2014-08-21 2018-03-30 北京奇艺世纪科技有限公司 一种安卓系统终端中的视频显示装置和方法
CN104333762B (zh) * 2014-11-24 2017-10-10 成都瑞博慧窗信息技术有限公司 一种视频解码方法
CN105323637B (zh) * 2015-10-29 2018-08-24 无锡天脉聚源传媒科技有限公司 一种视频处理方法及装置
CN106792124A (zh) * 2016-12-30 2017-05-31 合网络技术(北京)有限公司 多媒体资源解码播放方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215977A1 (en) * 2007-02-06 2013-08-22 Microsoft Corporation Scalable multi-thread video decoding
CN105992005A (zh) * 2015-03-04 2016-10-05 广州市动景计算机科技有限公司 视频解码方法、装置及终端设备
CN105263021A (zh) * 2015-10-13 2016-01-20 华南理工大学 一种基于uvd的hevc视频解码方法
US20180035125A1 (en) * 2016-07-28 2018-02-01 Hypori, Inc. System, method and computer program product for generating remote views in a virtual mobile device platform using efficient macroblock comparison during display encoding, including efficient detection of unchanged macroblocks
CN108093293A (zh) * 2018-01-15 2018-05-29 北京奇艺世纪科技有限公司 一种视频渲染方法及系统

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071224A (zh) * 2020-07-31 2022-02-18 腾讯科技(深圳)有限公司 视频数据处理方法、装置、计算机设备及存储介质
CN114071224B (zh) * 2020-07-31 2023-08-25 腾讯科技(深圳)有限公司 视频数据处理方法、装置、计算机设备及存储介质
CN112181657A (zh) * 2020-09-30 2021-01-05 京东方科技集团股份有限公司 视频处理方法、装置、电子设备及存储介质
CN112181657B (zh) * 2020-09-30 2024-05-07 京东方科技集团股份有限公司 视频处理方法、装置、电子设备及存储介质
CN112261412A (zh) * 2020-11-09 2021-01-22 中科智云科技有限公司 基于pid的视频流轮询解码的拉流控制系统及方法
CN112261412B (zh) * 2020-11-09 2021-04-27 中科智云科技有限公司 基于pid的视频流轮询解码的拉流控制系统及方法
CN113825014A (zh) * 2021-09-10 2021-12-21 网易(杭州)网络有限公司 多媒体内容播放方法、装置、计算机设备和存储介质
CN113825014B (zh) * 2021-09-10 2024-06-11 网易(杭州)网络有限公司 多媒体内容播放方法、装置、计算机设备和存储介质
CN113923456A (zh) * 2021-09-30 2022-01-11 稿定(厦门)科技有限公司 视频处理方法及装置
CN113923456B (zh) * 2021-09-30 2022-12-13 稿定(厦门)科技有限公司 视频处理方法及装置
WO2024001777A1 (zh) * 2022-06-30 2024-01-04 中兴通讯股份有限公司 视频解码方法、云化机顶盒、物理端机顶盒、介质

Also Published As

Publication number Publication date
CN110832875B (zh) 2022-02-22
CN110832875A (zh) 2020-02-21

Similar Documents

Publication Publication Date Title
WO2020019140A1 (zh) 视频处理方法、终端设备、机器可读存储介质
WO2020019139A1 (zh) 视频均匀显示方法、终端设备、机器可读存储介质
CN110430441B (zh) 一种云手机视频采集方法、系统、装置及存储介质
WO2021031850A1 (zh) 图像处理的方法、装置、电子设备及存储介质
JP6026443B2 (ja) ビデオ・ビットストリーム中の描画方向情報
JP2016042712A (ja) ズームされた画像の生成
WO2015144084A1 (en) Video synchronous playback method, apparatus, and system
US20230144483A1 (en) Method for encoding video data, device, and storage medium
US10582258B2 (en) Method and system of rendering late or early audio-video frames
KR20230039723A (ko) 프로젝션 데이터 프로세싱 방법 및 장치
EP2555517A1 (en) Network video server and video control method thereof
CN111343503A (zh) 视频的转码方法、装置、电子设备及存储介质
CN113395523A (zh) 基于并行线程的图像解码方法、装置、设备及存储介质
CN113709518B (zh) 一种基于rtsp协议的视频实时传输模式设计方法
CN113630575B (zh) 多人在线视频会议图像显示的方法、系统和存储介质
US20130162757A1 (en) Image processing apparatus and image processing method
CN110798700B (zh) 视频处理方法、视频处理装置、存储介质与电子设备
CN210670365U (zh) 一种视频预监系统
CN108924465B (zh) 视频会议发言人终端的确定方法、装置、设备和存储介质
WO2016177257A1 (zh) 一种数据分享的方法和装置
JP2012089989A (ja) 映像監視システム
KR101212947B1 (ko) 데이터 전송 장치
WO2022061723A1 (zh) 一种图像处理方法、设备、终端及存储介质
CN112312067A (zh) 预监输入视频信号的方法、装置和设备
WO2022143205A1 (zh) 编解码方法、电子设备、通信系统以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928062

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18928062

Country of ref document: EP

Kind code of ref document: A1