WO2022160744A1 - GPU-based video synthesis system and method - Google Patents

GPU-based video synthesis system and method

Info

Publication number
WO2022160744A1
WO2022160744A1 PCT/CN2021/119418 CN2021119418W WO2022160744A1 WO 2022160744 A1 WO2022160744 A1 WO 2022160744A1 CN 2021119418 W CN2021119418 W CN 2021119418W WO 2022160744 A1 WO2022160744 A1 WO 2022160744A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
module
video
cache module
gpu
Prior art date
Application number
PCT/CN2021/119418
Other languages
English (en)
French (fr)
Inventor
刘志杰
林炳河
Original Assignee
稿定(厦门)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 稿定(厦门)科技有限公司 filed Critical 稿定(厦门)科技有限公司
Publication of WO2022160744A1 publication Critical patent/WO2022160744A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Definitions

  • the present invention relates to the technical field of video coding, and in particular, to a GPU-based video synthesis system, a GPU-based video synthesis method, a computer-readable storage medium, and a computer device.
  • In the related art, when optimizing video encoding, data is typically decoded by a hardware-accelerated decoder, and the decoded data is output to the GPU for rendering; the GPU then transfers the rendered data to a hardware-accelerated encoder for encoding.
  • In this approach, the entire video encoding process is performed serially in one thread, requiring frequent communication between the CPU and the GPU, and the CPU load is high.
  • Therefore, an object of the present invention is to propose a GPU-based video synthesis system, which can reduce the waiting time during decoding and rendering, reduce the CPU load, and effectively improve video synthesis efficiency.
  • A second object of the present invention is to propose a GPU-based video synthesis method.
  • A third object of the present invention is to provide a computer-readable storage medium.
  • A fourth object of the present invention is to provide a computer device.
  • To this end, an embodiment of the first aspect of the present invention provides a GPU-based video synthesis system, including: a rendering module, an encoding module, a first cache module and a second cache module; wherein the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module.
  • The GPU-based video synthesis system according to the embodiments of the present invention includes the rendering module, the encoding module, the first cache module and the second cache module working as described above; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
  • In addition, the GPU-based video synthesis system proposed according to the foregoing embodiments of the present invention may also have the following additional technical features:
  • Optionally, the rendering module is further configured to determine whether encoded data exists in the second cache module and, when the determination result is yes, recycle the encoded data.
  • Optionally, the encoding module is further configured to determine, after the encoded data is stored in the second cache module, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module; if so, store the unencoded video frames in the second cache module, recycle the shared rendering context, and exit the current encoding task after recycling.
  • Optionally, the rendering module is further configured to determine, after the rendered video frames are stored in the first cache module, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if so, determine whether the current encoding task has exited and, when the determination result is yes, determine whether data is stored in the second cache module and, when the determination result is yes, recycle the data currently stored in the second cache module and exit the basic rendering context after recycling is complete.
  • An embodiment of the second aspect of the present invention provides a GPU-based video synthesis method, which includes the following steps: obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized; storing the rendered video frames in the first cache module; creating a shared rendering context according to the basic rendering context through an encoding thread, determining whether a rendered video frame exists in the first cache module and, when the determination result is yes, encoding the rendered video frame; and storing the encoded data in the second cache module.
  • According to the GPU-based video synthesis method of the embodiments of the present invention, first, a video to be synthesized is obtained through a rendering thread, a basic rendering context is created according to the video to be synthesized, and video frames are rendered according to the video to be synthesized; next, the rendered video frames are stored in the first cache module; then, a shared rendering context is created according to the basic rendering context through an encoding thread, it is determined whether a rendered video frame exists in the first cache module and, when the determination result is yes, the rendered video frame is encoded; next, the encoded data is stored in the second cache module; thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • In addition, the GPU-based video synthesis method proposed according to the foregoing embodiments of the present invention may also have the following additional technical features:
  • Optionally, after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module and, when the determination result is yes, recycles the encoded data.
  • Optionally, after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if so, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after recycling.
  • Optionally, after storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if so, it determines whether the current encoding task has exited and, when the determination result is yes, determines whether data is stored in the second cache module and, when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after recycling is complete.
  • An embodiment of the third aspect of the present invention provides a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the above-mentioned GPU-based video synthesis method is implemented.
  • The computer-readable storage medium of the embodiment of the present invention stores a GPU-based video synthesis program, so that when the processor executes the GPU-based video synthesis program, the above-mentioned GPU-based video synthesis method is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • An embodiment of the fourth aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented.
  • In the computer device of the embodiment of the present invention, the GPU-based video synthesis program is stored in the memory, so that when the processor executes the GPU-based video synthesis program, the above-mentioned GPU-based video synthesis method is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a GPU-based video synthesis method according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a rendering process of a rendering thread according to another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an encoding flow of an encoding thread according to yet another embodiment of the present invention.
  • The video synthesis system includes a rendering module, an encoding module, a first cache module and a second cache module; the rendering module is used to obtain the video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module; thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention.
  • As shown in FIG. 1, the GPU-based video synthesis system includes: a rendering module 10, an encoding module 20, a first cache module 30, and a second cache module 40.
  • The rendering module 10 is used for acquiring the video to be synthesized, creating a basic rendering context according to the video to be synthesized, rendering video frames according to the video to be synthesized, and storing the rendered video frames in the first cache module 30;
  • the encoding module 20 is configured to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module 30 and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module 40.
  • That is, the video encoding process is not completed by one linear serial thread; it is completed by multiple modules (a rendering thread and an encoding thread), and, by sharing the rendering context and sharing encoded data through the cache modules, the rendering behaviour and the encoding behaviour are parallelized without waiting for each other, which reduces the communication frequency between the CPU and the GPU, reduces the CPU load, and improves video encoding efficiency.
  • the rendering module 10 is further configured to determine whether encoded data exists in the second cache module 40, and when the determination result is yes, recycle the encoded data.
  • The encoding module 20 is further configured to determine, after storing the encoded data in the second cache module 40, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module 30; if so, store the unencoded video frames in the second cache module 40, recycle the shared rendering context, and exit the current encoding task after recycling.
  • The rendering module 10 is further configured to determine, after storing the rendered video frames in the first cache module 30, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if so, determine whether the current encoding task has exited and, when the determination result is yes, determine whether data is stored in the second cache module 40 and, when the determination result is yes, recycle the data currently stored in the second cache module 40 and exit the basic rendering context after recycling is complete.
  • In other words, after the encoding module 20 stores the encoded data in the second cache module 40, it further determines whether the current encoding task has ended (the current encoding task may end because the encoding task corresponding to the video has been completed, or because the user actively stops the task); thus, if the current encoding task has not ended, the flow returns to the step of determining whether a rendered video frame exists in the first cache module 30, so as to continue the encoding task; if the current encoding task has ended, the encoding module 20 stores the unencoded video frames in the second cache module 40, recycles the shared rendering context, and exits the current encoding task after recycling. After storing the rendered video frames in the first cache module 30, the rendering module 10 further determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized, so as to continue the rendering task; if so, it determines whether the current encoding task has exited; when the determination result is yes, it determines whether data is stored in the second cache module 40 (that is, unencoded video frames may remain there); if so, it recycles the currently stored data and exits the basic rendering context after recycling is complete, thereby preventing resource leaks.
  • To sum up, the GPU-based video synthesis system includes a rendering module, an encoding module, a first cache module, and a second cache module; the rendering module is used to obtain the video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module; thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • an embodiment of the present invention proposes a GPU-based video synthesis method.
  • the GPU-based video synthesis method includes the following steps:
  • S101 Acquire a video to be synthesized through a rendering thread, create a basic rendering context according to the video to be synthesized, and render video frames according to the video to be synthesized.
  • S102 Store the rendered video frame in the first cache module.
  • After storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module and, when the determination result is yes, recycles the encoded data.
  • After storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if so, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after recycling.
  • After storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if so, it determines whether the current encoding task has exited and, when the determination result is yes, determines whether data is stored in the second cache module and, when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after recycling is complete.
  • As a specific embodiment of the present invention, the rendering flow of the rendering thread proposed by the embodiment of the present invention includes the following steps:
  • In step S205, the rendering thread enters a waiting state and returns to step S204.
  • The rendered video frame is stored in the first cache module.
  • In step S209, it is determined whether the current rendering task has ended; if not, the flow returns to step S203; if so, step S210 is executed.
  • In step S211, the rendering thread enters a waiting state and returns to step S210.
  • As a specific embodiment of the present invention, the encoding flow of the encoding thread proposed by the embodiment of the present invention includes the following steps:
  • In step S302, the encoding thread enters a waiting state and returns to step S301.
  • In step S305, the encoding thread enters a waiting state and returns to step S304.
  • In step S308, it is determined whether the current encoding task has ended; if not, the flow returns to step S304; if so, step S309 is executed.
  • According to the GPU-based video synthesis method of the embodiments of the present invention, a video to be synthesized is acquired through a rendering thread, a basic rendering context is created according to the to-be-synthesized video, and video frames are rendered according to the to-be-synthesized video.
  • The embodiments of the present invention further provide a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the above-mentioned GPU-based video synthesis method is implemented.
  • By storing the GPU-based video synthesis program, the processor implements the above-mentioned GPU-based video synthesis method when executing the program, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • The embodiments of the present invention further provide a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented.
  • The GPU-based video synthesis program is stored in the memory, so that when the processor executes the GPU-based video synthesis program, the above-mentioned GPU-based video synthesis method is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
  • embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding an element does not preclude the presence of a plurality of such elements.
  • the invention can be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means can be embodied by one and the same item of hardware.
  • The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
  • first and second are only used for description purposes, and cannot be interpreted as indicating or implying relative importance or the number of indicated technical features. Thus, a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • “plurality” means two or more, unless otherwise expressly and specifically defined.
  • The terms "mounted", "connected", "coupled", "fixed" and the like should be understood in a broad sense; for example, they may denote a fixed connection, a detachable connection or an integral connection; a mechanical connection or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or the internal communication of two elements or the interaction relationship between two elements.
  • A first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediary.
  • The first feature being "above", "over" or "on top of" the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • The first feature being "below", "beneath" or "under" the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention discloses a GPU-based video synthesis system, method, medium and device. The system includes: a rendering module, an encoding module, a first cache module and a second cache module; the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module. The system can reduce the waiting time during decoding and rendering, reduce the CPU load, and effectively improve video synthesis efficiency.

Description

GPU-based video synthesis system and method
Technical Field
The present invention relates to the technical field of video coding, and in particular to a GPU-based video synthesis system, a GPU-based video synthesis method, a computer-readable storage medium and a computer device.
Background Art
In the related art, when optimizing video encoding, data is usually decoded by a hardware-accelerated decoder and the decoded data is output to the GPU for rendering; the GPU then transfers the rendered data to a hardware-accelerated encoder for encoding. In this approach, the entire video encoding process runs serially in a single thread, which requires frequent communication between the CPU and the GPU, and the CPU load is high.
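Purely as an illustration of the serial approach just described, the following minimal Python sketch shows decode, render and encode running one after another in a single thread; the decode_frame, render_frame and encode_frame placeholders are assumptions introduced for the example and do not come from the original disclosure:

```python
# Minimal sketch of the serial, single-threaded pipeline described above.
# decode_frame / render_frame / encode_frame are hypothetical placeholders
# for the hardware decoder, the GPU renderer and the hardware encoder.

def decode_frame(source):
    return source.pop(0) if source else None   # pretend hardware-accelerated decode

def render_frame(frame):
    return f"rendered({frame})"                # pretend GPU rendering

def encode_frame(frame):
    return f"encoded({frame})"                 # pretend hardware-accelerated encode

def synthesize_serial(source):
    """One thread does everything: each stage waits for the previous one,
    and the CPU mediates every hand-off between decoder, GPU and encoder."""
    output = []
    while True:
        raw = decode_frame(source)
        if raw is None:
            break
        rendered = render_frame(raw)           # the thread waits for the GPU here
        output.append(encode_frame(rendered))  # then waits for the encoder
    return output

print(synthesize_serial(["f0", "f1", "f2"]))
```

Because every stage blocks the single thread, the GPU sits idle while the encoder works and vice versa, which is the bottleneck the embodiments below address.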
Summary of the Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the above technology. To this end, an object of the present invention is to propose a GPU-based video synthesis system, which can reduce the waiting time during decoding and rendering, reduce the CPU load, and effectively improve video synthesis efficiency.
A second object of the present invention is to propose a GPU-based video synthesis method.
A third object of the present invention is to provide a computer-readable storage medium.
A fourth object of the present invention is to provide a computer device.
To achieve the above objects, an embodiment of the first aspect of the present invention provides a GPU-based video synthesis system, including: a rendering module, an encoding module, a first cache module and a second cache module; wherein the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module.
The GPU-based video synthesis system according to the embodiments of the present invention includes a rendering module, an encoding module, a first cache module and a second cache module; the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
In addition, the GPU-based video synthesis system proposed in the above embodiments of the present invention may also have the following additional technical features:
Optionally, the rendering module is further used to determine whether encoded data exists in the second cache module and, when the determination result is yes, recycle the encoded data.
Optionally, the encoding module is further used to determine, after storing the encoded data in the second cache module, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module; if so, store the unencoded video frames in the second cache module, recycle the shared rendering context, and exit the current encoding task after recycling.
Optionally, the rendering module is further used to determine, after storing the rendered video frames in the first cache module, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if so, determine whether the current encoding task has exited and, when the determination result is yes, determine whether data is stored in the second cache module and, when the determination result is yes, recycle the data currently stored in the second cache module and exit the basic rendering context after recycling is complete.
To achieve the above objects, an embodiment of the second aspect of the present invention provides a GPU-based video synthesis method, including the following steps: obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized; storing the rendered video frames in the first cache module; creating a shared rendering context according to the basic rendering context through an encoding thread, determining whether a rendered video frame exists in the first cache module and, when the determination result is yes, encoding the rendered video frame; and storing the encoded data in the second cache module.
According to the GPU-based video synthesis method of the embodiments of the present invention, first, a video to be synthesized is obtained through a rendering thread, a basic rendering context is created according to the video to be synthesized, and video frames are rendered according to the video to be synthesized; next, the rendered video frames are stored in the first cache module; then, a shared rendering context is created according to the basic rendering context through an encoding thread, it is determined whether a rendered video frame exists in the first cache module and, when the determination result is yes, the rendered video frame is encoded; next, the encoded data is stored in the second cache module; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
In addition, the GPU-based video synthesis method proposed in the above embodiments of the present invention may also have the following additional technical features:
Optionally, after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module and, when the determination result is yes, recycles the encoded data.
Optionally, after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if so, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after recycling.
Optionally, after storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if so, it determines whether the current encoding task has exited and, when the determination result is yes, determines whether data is stored in the second cache module and, when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after recycling is complete.
To achieve the above objects, an embodiment of the third aspect of the present invention provides a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the GPU-based video synthesis method described above is implemented.
The computer-readable storage medium according to the embodiments of the present invention stores a GPU-based video synthesis program, so that when a processor executes the GPU-based video synthesis program, the GPU-based video synthesis method described above is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
To achieve the above objects, an embodiment of the fourth aspect of the present invention provides a computer device, including a memory, a processor and a computer program stored in the memory and executable on the processor; when the processor executes the program, the GPU-based video synthesis method described above is implemented.
According to the computer device of the embodiments of the present invention, the GPU-based video synthesis program is stored in the memory, so that when the processor executes the GPU-based video synthesis program, the GPU-based video synthesis method described above is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
Brief Description of the Drawings
FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a GPU-based video synthesis method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the rendering flow of a rendering thread according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of the encoding flow of an encoding thread according to yet another embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they should not be construed as limiting the present invention.
In the related art, when optimizing video encoding, the entire video encoding process runs serially in a single thread, requiring frequent communication between the CPU and the GPU, and the CPU load is high. The GPU-based video synthesis system according to the embodiments of the present invention includes a rendering module, an encoding module, a first cache module and a second cache module; the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
For a better understanding of the above technical solution, exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the present invention can be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
For a better understanding, the above technical solution is described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention. As shown in FIG. 1, the GPU-based video synthesis system includes: a rendering module 10, an encoding module 20, a first cache module 30 and a second cache module 40.
The rendering module 10 is used to obtain the video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module 30;
the encoding module 20 is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module 30 and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module 40.
That is to say, the video encoding process is not carried out by a single linear serial thread; it is carried out by multiple modules (a rendering thread and an encoding thread), and by sharing the rendering context and sharing encoded data through the cache modules, the rendering behaviour and the encoding behaviour are parallelized and do not need to wait for each other; this reduces the communication frequency between the CPU and the GPU, reduces the CPU load, and improves video encoding efficiency.
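As an illustrative sketch only, the parallel structure described above can be expressed with two threads meeting at two shared queues. Python's threading and queue modules stand in for the GPU-side modules here, and all names in the sketch (frame_cache, encoded_cache, the rendered/encoded placeholders) are assumptions introduced for the example rather than identifiers from the original disclosure:

```python
import threading
import queue

frame_cache = queue.Queue(maxsize=8)   # first cache module 30: rendered video frames
encoded_cache = queue.Queue()          # second cache module 40: encoded data

def rendering_thread(frames):
    # The basic rendering context would be created here (placeholder).
    for f in frames:
        frame_cache.put(f"rendered({f})")   # store the rendered frame in cache 1
        while not encoded_cache.empty():    # recycle any encoded data found in cache 2
            encoded_cache.get()
    frame_cache.put(None)                   # signal that the rendering task has ended

def encoding_thread(out):
    # The shared rendering context would be created from the basic one here (placeholder).
    while (frame := frame_cache.get()) is not None:
        packet = f"encoded({frame})"
        encoded_cache.put(packet)           # store the encoded data in cache 2
        out.append(packet)                  # `out` only records the results for display

out = []
t_render = threading.Thread(target=rendering_thread, args=(["f0", "f1", "f2"],))
t_encode = threading.Thread(target=encoding_thread, args=(out,))
t_encode.start()
t_render.start()
t_render.join()
t_encode.join()
print(out)
```

Because the two threads only synchronize through the queues, frame n+1 can be rendered while frame n is still being encoded, which is where the reduction in waiting time comes from.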
In some embodiments, the rendering module 10 is further used to determine whether encoded data exists in the second cache module 40 and, when the determination result is yes, recycle the encoded data.
In some embodiments, the encoding module 20 is further used to determine, after storing the encoded data in the second cache module 40, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module 30; if so, store the unencoded video frames in the second cache module 40, recycle the shared rendering context, and exit the current encoding task after recycling.
In some embodiments, the rendering module 10 is further used to determine, after storing the rendered video frames in the first cache module 30, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if so, determine whether the current encoding task has exited and, when the determination result is yes, determine whether data is stored in the second cache module 40 and, when the determination result is yes, recycle the data currently stored in the second cache module 40 and exit the basic rendering context after recycling is complete.
In other words, after the encoding module 20 stores the encoded data in the second cache module 40, it further determines whether the current encoding task has ended (the current encoding task may end because the encoding task corresponding to the video has been completed, or because the user actively stops the task). If the current encoding task has not ended, the flow returns to the step of determining whether a rendered video frame exists in the first cache module 30, so as to continue the encoding task; if the current encoding task has ended, the encoding module 20 stores the unencoded video frames in the second cache module 40, recycles the shared rendering context, and exits the current encoding task after recycling. After storing the rendered video frames in the first cache module 30, the rendering module 10 further determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized, so as to continue the rendering task; if so, it determines whether the current encoding task has exited; when the determination result is yes, it then determines whether data is stored in the second cache module 40 (that is, unencoded video frames may remain in the second cache module 40); if so, it recycles the currently stored data and exits the basic rendering context after recycling is complete, thereby preventing resource leaks.
To sum up, the GPU-based video synthesis system according to the embodiments of the present invention includes a rendering module, an encoding module, a first cache module and a second cache module; the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
To implement the above embodiments, an embodiment of the present invention provides a GPU-based video synthesis method. As shown in FIG. 2, the GPU-based video synthesis method includes the following steps:
S101, obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized.
S102, storing the rendered video frames in the first cache module.
S103, creating a shared rendering context according to the basic rendering context through an encoding thread, determining whether a rendered video frame exists in the first cache module and, when the determination result is yes, encoding the rendered video frame.
S104, storing the encoded data in the second cache module.
In some embodiments, after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module and, when the determination result is yes, recycles the encoded data.
In some embodiments, after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if so, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after recycling.
In some embodiments, after storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if so, it determines whether the current encoding task has exited and, when the determination result is yes, determines whether data is stored in the second cache module and, when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after recycling is complete.
As a specific embodiment of the present invention, as shown in FIG. 3, the rendering flow of the rendering thread proposed by this embodiment includes the following steps (an illustrative code sketch follows step S214 below):
S201, obtaining the video to be synthesized.
S202, creating a basic rendering context according to the video to be synthesized.
S203, rendering a video frame according to the video to be synthesized.
S204, determining whether the cache queue in the first cache module is full; if so, step S205 is executed; if not, step S206 is executed.
S205, the rendering thread enters a waiting state and returns to step S204.
S206, storing the rendered video frame in the first cache module.
S207, determining whether encoded data exists in the second cache module; if so, step S208 is executed; if not, step S209 is executed.
S208, recycling the encoded data stored in the second cache module.
S209, determining whether the current rendering task has ended; if not, returning to step S203; if so, executing step S210.
S210, determining whether the current encoding task has exited; if not, executing step S211; if so, executing step S212.
S211, the rendering thread enters a waiting state and returns to step S210.
S212, determining whether data is stored in the second cache module; if so, executing step S213; if not, executing step S214.
S213, recycling the data currently stored in the second cache module, and then executing step S214.
S214, exiting the basic rendering context.
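As an illustrative sketch (the queue, event and helper names below are assumptions introduced for the example), steps S201 to S214 map to code roughly as follows; the encoding side is reduced to a stub so the example runs on its own:

```python
import threading, queue, time

frame_cache = queue.Queue(maxsize=4)   # first cache module
encoded_cache = queue.Queue()          # second cache module
encoder_exited = threading.Event()     # set once the encoding task has exited

def rendering_thread(video):
    # S201: obtain the video to be synthesized (here: a list of raw frames)
    # S202: create the basic rendering context (placeholder)
    for raw in video:
        rendered = f"rendered({raw})"       # S203: render one video frame
        while frame_cache.full():           # S204: is the cache queue in cache 1 full?
            time.sleep(0.001)               # S205: wait, then return to S204
        frame_cache.put(rendered)           # S206: store the rendered frame in cache 1
        while not encoded_cache.empty():    # S207: is there encoded data in cache 2?
            encoded_cache.get()             # S208: recycle it
    # S209: the rendering task has ended (otherwise loop back to S203)
    frame_cache.put(None)
    while not encoder_exited.is_set():      # S210: has the encoding task exited?
        time.sleep(0.001)                   # S211: wait, then return to S210
    while not encoded_cache.empty():        # S212: is data still stored in cache 2?
        encoded_cache.get()                 # S213: recycle it
    # S214: exit the basic rendering context (placeholder)

def encoder_stub():
    # stands in for the encoding thread of FIG. 4
    while (frame := frame_cache.get()) is not None:
        encoded_cache.put(f"encoded({frame})")
    encoder_exited.set()

t = threading.Thread(target=encoder_stub)
t.start()
rendering_thread(["f0", "f1", "f2"])
t.join()
print("rendering flow finished")
```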
As a specific embodiment of the present invention, as shown in FIG. 4, the encoding flow of the encoding thread proposed by this embodiment includes the following steps (an illustrative code sketch follows step S311 below):
S301, determining whether the basic rendering context has been created; if not, executing step S302; if so, executing step S303.
S302, the encoding thread enters a waiting state and returns to step S301.
S303, creating a shared rendering context according to the basic rendering context.
S304, determining whether a rendered video frame exists in the first cache module; if not, executing step S305; if so, executing step S306.
S305, the encoding thread enters a waiting state and returns to step S304.
S306, encoding the rendered video frame.
S307, storing the encoded data in the second cache module.
S308, determining whether the current encoding task has ended; if not, returning to step S304; if so, executing step S309.
S309, storing the unencoded video frames in the second cache module.
S310, recycling the shared rendering context.
S311, exiting the current encoding task.
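Likewise, purely as an illustrative sketch (all names below are assumptions introduced for the example), steps S301 to S311 map to code roughly as follows; the rendering side is reduced to a stub so the example runs on its own:

```python
import threading, queue, time

frame_cache = queue.Queue()             # first cache module (rendered frames)
encoded_cache = queue.Queue()           # second cache module (encoded data)
base_context_ready = threading.Event()  # the basic rendering context has been created
task_ended = threading.Event()          # the current encoding task has ended

def encoding_thread():
    while not base_context_ready.is_set():     # S301: has the basic context been created?
        time.sleep(0.001)                       # S302: wait, then return to S301
    shared_context = object()                   # S303: create the shared rendering context (placeholder)
    while True:
        if frame_cache.empty():                 # S304: is there a rendered frame in cache 1?
            if task_ended.is_set():
                break                           # nothing left to encode once the task has ended
            time.sleep(0.001)                   # S305: wait, then return to S304
            continue
        frame = frame_cache.get()
        encoded_cache.put(f"encoded({frame})")  # S306/S307: encode and store into cache 2
        if task_ended.is_set():                 # S308: has the current encoding task ended?
            break                               # yes: go to S309; no: return to S304
    while not frame_cache.empty():              # S309: move any unencoded frames into cache 2
        encoded_cache.put(frame_cache.get())
    del shared_context                          # S310: recycle the shared rendering context
    # S311: exit the current encoding task

def renderer_stub():
    # stands in for the rendering thread of FIG. 3
    base_context_ready.set()
    for i in range(3):
        frame_cache.put(f"frame-{i}")
    task_ended.set()   # e.g. the whole video has been rendered, or the user stops the task

t = threading.Thread(target=encoding_thread)
t.start()
renderer_stub()
t.join()
print("items now in cache 2:", encoded_cache.qsize())
```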
It should be noted that the above description of the GPU-based video synthesis system of FIG. 1 also applies to this GPU-based video synthesis method, and is not repeated here.
To sum up, according to the GPU-based video synthesis method of the embodiments of the present invention, first, a video to be synthesized is obtained through a rendering thread, a basic rendering context is created according to the video to be synthesized, and video frames are rendered according to the video to be synthesized; next, the rendered video frames are stored in the first cache module; then, a shared rendering context is created according to the basic rendering context through an encoding thread, it is determined whether a rendered video frame exists in the first cache module and, when the determination result is yes, the rendered video frame is encoded; next, the encoded data is stored in the second cache module; this reduces the waiting time during decoding and rendering, reduces the CPU load, and effectively improves video synthesis efficiency.
To implement the above embodiments, an embodiment of the present invention provides a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the GPU-based video synthesis method described above is implemented.
The computer-readable storage medium according to the embodiments of the present invention stores a GPU-based video synthesis program, so that when a processor executes the GPU-based video synthesis program, the GPU-based video synthesis method described above is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
To implement the above embodiments, an embodiment of the present invention provides a computer device, including a memory, a processor and a computer program stored in the memory and executable on the processor; when the processor executes the program, the GPU-based video synthesis method described above is implemented.
According to the computer device of the embodiments of the present invention, the GPU-based video synthesis program is stored in the memory, so that when the processor executes the GPU-based video synthesis program, the GPU-based video synthesis method described above is implemented, thereby reducing the waiting time during decoding and rendering, reducing the CPU load, and effectively improving video synthesis efficiency.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of components or steps not listed in a claim. The word "a" or "an" preceding a component does not exclude the presence of a plurality of such components. The present invention can be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not denote any order; these words may be interpreted as names.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.
In the description of the present invention, it should be understood that the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "plurality" means two or more, unless otherwise explicitly and specifically defined.
In the present invention, unless otherwise explicitly specified and defined, the terms "mounted", "connected", "coupled", "fixed" and the like should be understood in a broad sense; for example, they may denote a fixed connection, a detachable connection or an integral connection; a mechanical connection or an electrical connection; a direct connection, an indirect connection through an intermediate medium, internal communication between two elements, or an interaction relationship between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the present invention, unless otherwise explicitly specified and defined, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediate medium. Moreover, a first feature being "over", "above" or "on top of" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature; a first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limiting the present invention; those of ordinary skill in the art can change, modify, replace and vary the above embodiments within the scope of the present invention.

Claims (10)

  1. A GPU-based video synthesis system, characterized by comprising: a rendering module, an encoding module, a first cache module and a second cache module;
    wherein the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module;
    the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module.
  2. The GPU-based video synthesis system according to claim 1, characterized in that the rendering module is further used to determine whether encoded data exists in the second cache module and, when the determination result is yes, recycle the encoded data.
  3. The GPU-based video synthesis system according to claim 1, characterized in that the encoding module is further used to determine, after storing the encoded data in the second cache module, whether the current encoding task has ended;
    if not, return to the step of determining whether a rendered video frame exists in the first cache module;
    if so, store the unencoded video frames in the second cache module, recycle the shared rendering context, and exit the current encoding task after recycling.
  4. The GPU-based video synthesis system according to claim 3, characterized in that the rendering module is further used to determine, after storing the rendered video frames in the first cache module, whether the current rendering task has ended;
    if not, return to the step of rendering video frames according to the video to be synthesized;
    if so, determine whether the current encoding task has exited and, when the determination result is yes, determine whether data is stored in the second cache module and, when the determination result is yes, recycle the data currently stored in the second cache module and exit the basic rendering context after recycling is complete.
  5. A GPU-based video synthesis method, characterized by comprising the following steps:
    obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized;
    storing the rendered video frames in the first cache module;
    creating a shared rendering context according to the basic rendering context through an encoding thread, determining whether a rendered video frame exists in the first cache module and, when the determination result is yes, encoding the rendered video frame;
    storing the encoded data in the second cache module.
  6. The GPU-based video synthesis method according to claim 5, characterized in that, after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module and, when the determination result is yes, recycles the encoded data.
  7. The GPU-based video synthesis method according to claim 5, characterized in that, after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended;
    if not, it returns to the step of determining whether a rendered video frame exists in the first cache module;
    if so, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after recycling.
  8. The GPU-based video synthesis method according to claim 7, characterized in that, after storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended;
    if not, it returns to the step of rendering video frames according to the video to be synthesized;
    if so, it determines whether the current encoding task has exited and, when the determination result is yes, determines whether data is stored in the second cache module and, when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after recycling is complete.
  9. A computer-readable storage medium, characterized in that a GPU-based video synthesis program is stored thereon, and when the GPU-based video synthesis program is executed by a processor, the GPU-based video synthesis method according to any one of claims 4 to 8 is implemented.
  10. A computer device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the program, the GPU-based video synthesis method according to any one of claims 4 to 8 is implemented.
PCT/CN2021/119418 2021-01-29 2021-09-18 GPU-based video synthesis system and method WO2022160744A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110129959.5 2021-01-29
CN202110129959.5A CN112954233B (zh) 2021-01-29 2021-01-29 GPU-based video synthesis system and method

Publications (1)

Publication Number Publication Date
WO2022160744A1 (zh)

Family

ID=76240155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119418 WO2022160744A1 (zh) 2021-01-29 2021-09-18 GPU-based video synthesis system and method

Country Status (2)

Country Link
CN (1) CN112954233B (zh)
WO (1) WO2022160744A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993887A (zh) * 2023-09-27 2023-11-03 湖南马栏山视频先进技术研究院有限公司 Method and system for responding to video rendering anomalies

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954233B (zh) * 2021-01-29 2022-11-18 稿定(厦门)科技有限公司 GPU-based video synthesis system and method
CN115375530A (zh) * 2022-07-13 2022-11-22 北京松应科技有限公司 Multi-GPU collaborative rendering method, system, apparatus and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091608A (zh) * 2014-06-13 2014-10-08 北京奇艺世纪科技有限公司 Video editing method and apparatus based on an iOS device
CN106462393A (zh) * 2014-05-30 2017-02-22 苹果公司 System and method for unified application programming interface and model
CN107277616A (zh) * 2017-07-21 2017-10-20 广州爱拍网络科技有限公司 Video special-effect rendering method, apparatus and terminal
GB2550150A (en) * 2016-05-10 2017-11-15 Advanced Risc Mach Ltd Data processing systems
CN107993183A (zh) * 2017-11-24 2018-05-04 暴风集团股份有限公司 Image processing apparatus, method, terminal and server
CN111901635A (zh) * 2020-06-17 2020-11-06 北京视博云信息技术有限公司 Video processing method, apparatus, storage medium and device
CN112218117A (zh) * 2020-09-29 2021-01-12 北京字跳网络技术有限公司 Video processing method and device
CN112954233A (zh) * 2021-01-29 2021-06-11 稿定(厦门)科技有限公司 GPU-based video synthesis system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8755515B1 (en) * 2008-09-29 2014-06-17 Wai Wu Parallel signal processing system and method
GB2502620B (en) * 2012-06-01 2020-04-22 Advanced Risc Mach Ltd A parallel parsing video decoder and method
CN109996104A (zh) * 2019-04-22 2019-07-09 北京奇艺世纪科技有限公司 Video playback method, apparatus and electronic device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462393A (zh) * 2014-05-30 2017-02-22 苹果公司 System and method for unified application programming interface and model
CN104091608A (zh) * 2014-06-13 2014-10-08 北京奇艺世纪科技有限公司 Video editing method and apparatus based on an iOS device
GB2550150A (en) * 2016-05-10 2017-11-15 Advanced Risc Mach Ltd Data processing systems
CN107277616A (zh) * 2017-07-21 2017-10-20 广州爱拍网络科技有限公司 Video special-effect rendering method, apparatus and terminal
CN107993183A (zh) * 2017-11-24 2018-05-04 暴风集团股份有限公司 Image processing apparatus, method, terminal and server
CN111901635A (zh) * 2020-06-17 2020-11-06 北京视博云信息技术有限公司 Video processing method, apparatus, storage medium and device
CN112218117A (zh) * 2020-09-29 2021-01-12 北京字跳网络技术有限公司 Video processing method and device
CN112954233A (zh) * 2021-01-29 2021-06-11 稿定(厦门)科技有限公司 GPU-based video synthesis system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993887A (zh) * 2023-09-27 2023-11-03 湖南马栏山视频先进技术研究院有限公司 Method and system for responding to video rendering anomalies
CN116993887B (zh) * 2023-09-27 2023-12-22 湖南马栏山视频先进技术研究院有限公司 Method and system for responding to video rendering anomalies

Also Published As

Publication number Publication date
CN112954233A (zh) 2021-06-11
CN112954233B (zh) 2022-11-18

Similar Documents

Publication Publication Date Title
WO2022160744A1 (zh) GPU-based video synthesis system and method
JP2023100934A5 (ja) ビデオ復号化方法およびコンピュータプログラム
US7929599B2 (en) Accelerated video encoding
US8660191B2 (en) Software video decoder display buffer underflow prediction and recovery
US9148669B2 (en) High performance AVC encoder on a multi-core platform
TWI517677B (zh) 用於視頻編碼管線之系統,方法,及電腦程式產品
CN112929755B (zh) Video file playback method and apparatus during progress-bar dragging
US7515761B2 (en) Encoding device and method
CN102113327B (zh) Image encoding apparatus, method and integrated circuit
WO2016210177A1 (en) Parallel intra-prediction
JP2006517069A (ja) Method and system for motion vector prediction
CN105262957A (zh) Video image processing method and apparatus
CN111445562B (zh) Text animation generation method and apparatus
US20060171454A1 (en) Method of video coding for handheld apparatus
US20220400253A1 (en) Lossless image compression using block based prediction and optimized context adaptive entropy coding
US20120183234A1 (en) Methods for parallelizing fixed-length bitstream codecs
US9307259B2 (en) Image decoding methods and image decoding devices
CN105635731A (zh) Intra-prediction reference sample preprocessing method for high efficiency video coding
CN114731412A (zh) Data processing method and data processing device
CN112822494A (zh) Double-buffer encoding system and control method thereof
US20130077881A1 (en) Image processing device, image processing system and method for having computer perform image processing
Fajardo et al. Reducing the I/O Bottleneck by a Compression Strategy.
US8373711B2 (en) Image processing apparatus, image processing method, and computer-readable storage medium
JP2003198858A (ja) Encoding apparatus and decoding apparatus
JP2010041115A (ja) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922336

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922336

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.01.2024)