WO2022160744A1 - GPU-based video synthesis system and method - Google Patents
GPU-based video synthesis system and method
- Publication number
- WO2022160744A1 (PCT/CN2021/119418)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rendering
- module
- video
- cache module
- gpu
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Definitions
- the present invention relates to the technical field of video coding, and in particular, to a GPU-based video synthesis system, a GPU-based video synthesis method, a computer-readable storage medium, and a computer device.
- in the prior art, data is decoded by a hardware-accelerated decoder, the decoded data is output to the GPU for rendering, and the GPU then transmits the rendered data to a hardware-accelerated encoder for encoding.
- Encoded in this way, the entire video encoding process is performed serially in a single thread, requiring frequent communication between the CPU and the GPU and placing a high load on the CPU.
- an object of the present invention is to propose a GPU-based video synthesis system, which can reduce the waiting time in the decoding and rendering process, reduce the CPU load, and effectively improve the video synthesis efficiency.
- the second object of the present invention is to propose a GPU-based video synthesis method.
- a third object of the present invention is to provide a computer-readable storage medium.
- the fourth object of the present invention is to provide a computer device.
- the embodiment of the first aspect of the present invention provides a GPU-based video synthesis system, including: a rendering module, an encoding module, a first cache module, and a second cache module. The rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module. The encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module.
- a GPU-based video synthesis system includes a rendering module, an encoding module, a first cache module, and a second cache module. The rendering module acquires a video to be synthesized, creates a basic rendering context according to the video to be synthesized, renders video frames according to the video to be synthesized, and stores the rendered video frames in the first cache module. The encoding module creates a shared rendering context according to the basic rendering context, determines whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encodes the rendered video frame and stores the encoded data in the second cache module. This reduces the waiting time in the decoding and rendering process, reduces the CPU load, and effectively improves the efficiency of video synthesis.
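The system described above is, in effect, a producer-consumer pipeline: the rendering module fills the first cache module while the encoding module drains it and fills the second. A minimal sketch of that data flow in Python, with placeholder `render_frame`/`encode_frame` functions standing in for the real GPU rendering and hardware-accelerated encoding (all names here are hypothetical illustrations, not from the patent):

```python
import queue
import threading

def render_frame(i):
    # Placeholder for rendering frame i in the basic rendering context.
    return f"frame-{i}"

def encode_frame(frame):
    # Placeholder for encoding a rendered frame in the shared rendering context.
    return f"encoded({frame})"

NUM_FRAMES = 5
first_cache = queue.Queue()   # "first cache module": rendered video frames
second_cache = queue.Queue()  # "second cache module": encoded data
DONE = object()               # sentinel: the rendering task has ended

def rendering_thread():
    # Render frames and store them in the first cache module.
    for i in range(NUM_FRAMES):
        first_cache.put(render_frame(i))
    first_cache.put(DONE)

def encoding_thread():
    # Check the first cache module for rendered frames; block while empty.
    while True:
        frame = first_cache.get()  # the "waiting state" when the cache is empty
        if frame is DONE:
            break
        second_cache.put(encode_frame(frame))

r = threading.Thread(target=rendering_thread)
e = threading.Thread(target=encoding_thread)
r.start(); e.start()
r.join(); e.join()

encoded = [second_cache.get() for _ in range(second_cache.qsize())]
print(len(encoded), encoded[0])  # 5 encoded(frame-0)
```

Because `queue.Queue.get` blocks, the encoding thread never busy-waits on an empty first cache module, which mirrors the waiting-state steps in the flowcharts.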
- the GPU-based video synthesis system proposed according to the foregoing embodiments of the present invention may also have the following additional technical features:
- the rendering module is further configured to determine whether encoded data exists in the second cache module, and when the determination result is yes, recycle the encoded data.
- the encoding module is further configured to determine, after the encoded data is stored in the second cache module, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module; if yes, store the unencoded video frames in the second cache module, recycle the shared rendering context, and exit the current encoding task after the recycling.
- the rendering module is further configured to determine, after the rendered video frames are stored in the first cache module, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if yes, determine whether the current encoding task has exited, and when the determination result is yes, determine whether data is stored in the second cache module, and when the determination result is yes, recycle the data currently stored in the second cache module and exit the basic rendering context after the recycling is complete.
- a second aspect of the present invention provides a GPU-based video synthesis method, which includes the following steps: obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized; storing the rendered video frames in the first cache module; creating, through an encoding thread, a shared rendering context according to the basic rendering context, determining whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encoding the rendered video frame; and storing the encoded data in the second cache module.
- in the method, a video to be synthesized is obtained through a rendering thread, a basic rendering context is created according to the video to be synthesized, and video frames are rendered according to the video to be synthesized; then the rendered video frames are stored in the first cache module; next, an encoding thread creates a shared rendering context according to the basic rendering context and determines whether a rendered video frame exists in the first cache module, encoding the rendered video frame when the determination result is yes; finally, the encoded data is stored in the second cache module. This reduces the waiting time in the decoding and rendering process, reduces the CPU load, and effectively improves the video synthesis efficiency.
- the GPU-based video synthesis method proposed according to the foregoing embodiments of the present invention may also have the following additional technical features:
- the rendering thread further determines whether encoded data exists in the second cache module, and when the determination result is yes, recycles the encoded data.
- after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if yes, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after the recycling.
- the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if yes, it determines whether the current encoding task has exited, and when the determination result is yes, determines whether data is stored in the second cache module, and when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after the recycling is complete.
- a third aspect of the present invention provides a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the above-mentioned GPU-based video synthesis method is implemented.
- by storing a GPU-based video synthesis program, the computer-readable storage medium of the embodiment of the present invention enables the processor, when executing the program, to implement the above-mentioned GPU-based video synthesis method, thereby reducing the waiting time in the decoding and rendering process, reducing the CPU load, and effectively improving the efficiency of video synthesis.
- a fourth aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented.
- the GPU-based video synthesis program is stored in the memory, so that when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented, thereby reducing the waiting time in the decoding and rendering process, reducing the CPU load, and effectively improving the efficiency of video synthesis.
- FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention
- FIG. 2 is a schematic flowchart of a GPU-based video synthesis method according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of a rendering process of a rendering thread according to another embodiment of the present invention.
- FIG. 4 is a schematic diagram of an encoding flow of an encoding thread according to yet another embodiment of the present invention.
- the video synthesis system includes a rendering module, an encoding module, a first cache module, and a second cache module. The rendering module is used to obtain the video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module. The encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module. This reduces the waiting time in the decoding and rendering process, reduces the CPU load, and effectively improves the video synthesis efficiency.
- FIG. 1 is a schematic block diagram of a GPU-based video synthesis system according to an embodiment of the present invention.
- the GPU-based video synthesis system includes: a rendering module 10, an encoding module 20, a first cache module 30, and a second cache module 40.
- the rendering module 10 is used for acquiring the video to be synthesized, creating a basic rendering context according to the video to be synthesized, and rendering the video frame according to the video to be synthesized, and storing the rendered video frame in the first buffer module 30;
- the encoding module 20 is configured to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module 30, and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module 40.
- the video encoding process is not completed by a single serial thread; it is completed by multiple modules (a rendering thread and an encoding thread). Through context sharing and a shared cache module for encoded data, the rendering behavior and the encoding behavior are parallelized without waiting for each other, which reduces the communication frequency between the CPU and the GPU, reduces the CPU load, and improves the video encoding efficiency.
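A back-of-the-envelope timing model shows why this parallelization pays off. Assuming (hypothetically; these numbers are not from the patent) a per-frame render time r and encode time e, a serial single-thread pipeline costs N*(r+e) for N frames, while two overlapped threads cost roughly min(r, e) to fill the pipeline plus N*max(r, e) in the steady state:

```python
def serial_time(n, r, e):
    # Single thread: each frame is rendered, then encoded, one after another.
    return n * (r + e)

def pipelined_time(n, r, e):
    # Two threads with a shared cache: after the first stage result is ready,
    # rendering and encoding overlap, so the slower stage dominates.
    return min(r, e) + n * max(r, e)

# Hypothetical workload: 100 frames, 10 ms to render, 8 ms to encode.
n, r, e = 100, 10, 8
print(serial_time(n, r, e))     # 1800
print(pipelined_time(n, r, e))  # 1008
```

Under this model the pipelined variant approaches the cost of the slower stage alone, which is the waiting-time reduction the embodiment claims.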
- the rendering module 10 is further configured to determine whether encoded data exists in the second cache module 40, and when the determination result is yes, recycle the encoded data.
- the encoding module 20 is further configured to determine, after storing the encoded data in the second cache module 40, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module 30; if yes, store the unencoded video frames in the second cache module 40, recycle the shared rendering context, and exit the current encoding task after the recycling.
- the rendering module 10 is further configured to determine, after storing the rendered video frames in the first cache module 30, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if yes, determine whether the current encoding task has exited, and when the determination result is yes, determine whether data is stored in the second cache module 40, and when the determination result is yes, recycle the data currently stored in the second cache module 40 and exit the basic rendering context after the recycling is complete.
- after storing the encoded data in the second cache module 40, the encoding module 20 further determines whether the current encoding task has ended (the current encoding task may end because the encoding task corresponding to the video has been completed, or because the user actively stops the task). If the current encoding task has not ended, it returns to the step of determining whether a rendered video frame exists in the first cache module 30, so as to continue the encoding task; if the current encoding task has ended, the encoding module 20 stores the unencoded video frames in the second cache module 40, recycles the shared rendering context, and exits the current encoding task after the recycling. After storing the rendered video frames in the first cache module 30, the rendering module 10 further determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized, so as to continue the rendering task; if yes, it determines whether the current encoding task has exited.
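The teardown order described above (the encoder flushes leftover frames into the second cache module and recycles the shared context; the renderer then waits for the encoder to exit, recycles any remaining data, and exits the basic context) can be sketched as follows. This is a hedged illustration, not the patent's implementation: context recycling is modeled as flag updates where a real system would release GPU resources.

```python
import queue
import threading

first_cache = queue.Queue()    # rendered frames awaiting encoding
second_cache = queue.Queue()   # encoded data (and, at shutdown, leftovers)
encoder_exited = threading.Event()
state = {"shared_ctx_recycled": False, "basic_ctx_exited": False}

def encoder_shutdown():
    # Encoding task has ended: move any still-unencoded frames into the
    # second cache module, recycle the shared context, then exit.
    while not first_cache.empty():
        second_cache.put(first_cache.get())
    state["shared_ctx_recycled"] = True
    encoder_exited.set()

def renderer_shutdown():
    # Rendering task has ended: wait until the encoder has exited, recycle
    # the data left in the second cache module, then exit the basic context.
    encoder_exited.wait()
    while not second_cache.empty():
        second_cache.get()  # "recycle" leftover data
    state["basic_ctx_exited"] = True

first_cache.put("unencoded-frame")
t = threading.Thread(target=renderer_shutdown)
t.start()
encoder_shutdown()
t.join()
print(state["basic_ctx_exited"], second_cache.empty())  # True True
```

Ordering the shutdown this way ensures the basic context (which the shared context was created from) outlives the shared context, matching the module interactions in the embodiment.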
- the GPU-based video synthesis system includes a rendering module, an encoding module, a first cache module, and a second cache module. The rendering module is used to obtain the video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module. The encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module. This reduces the waiting time in the decoding and rendering process, reduces the CPU load, and effectively improves the efficiency of video synthesis.
- an embodiment of the present invention proposes a GPU-based video synthesis method.
- the GPU-based video synthesis method includes the following steps:
- S101 Acquire a video to be synthesized through a rendering thread, create a basic rendering context according to the video to be synthesized, and render video frames according to the video to be synthesized.
- S102 Store the rendered video frame in the first cache module.
- after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module, and when the determination result is yes, recycles the encoded data.
- after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if yes, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after the recycling.
- the rendering thread determines whether the current rendering task has ended after storing the rendered video frames in the first cache module; if not, it returns to the step of rendering video frames according to the video to be synthesized; if yes, it determines whether the current encoding task has exited, and when the determination result is yes, determines whether data is stored in the second cache module, and when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after the recycling is complete.
- the rendering process of the rendering thread proposed by the embodiment of the present invention includes the following steps:
- step S205 the rendering thread enters a waiting state, and returns to step S204.
- the rendered video frame is stored in the first cache module.
- step S209 determine whether the current rendering task has ended; if not, return to step S203; if yes, execute step S210.
- step S211 the rendering thread enters a waiting state, and returns to step S210.
- the encoding process of the encoding thread proposed by the embodiment of the present invention includes the following steps:
- step S302 the encoding thread enters a waiting state, and returns to step S301.
- step S305 the encoding thread enters a waiting state, and returns to step S304.
- step S308 determine whether the current encoding task has ended; if not, return to step S304; if yes, execute step S309.
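The "enter a waiting state, then return to the previous step" pattern of steps S205/S211 and S302/S305 corresponds to a bounded blocking wait that re-checks until data arrives or the task stops. A hedged sketch (the stop flag and timeout are assumptions for illustration, not from the patent):

```python
import queue
import threading

def wait_for_frame(cache, stop, timeout=0.01):
    # S304/S305 pattern: check the cache for a rendered frame; if none is
    # available, enter a waiting state, then return to the checking step,
    # until a frame arrives or the task is stopped.
    while not stop.is_set():
        try:
            return cache.get(timeout=timeout)  # bounded waiting state
        except queue.Empty:
            continue                           # return to the checking step
    return None

cache, stop = queue.Queue(), threading.Event()
cache.put("frame-0")
print(wait_for_frame(cache, stop))  # frame-0
stop.set()
print(wait_for_frame(cache, stop))  # None (task stopped, cache empty)
```

Using a timeout rather than an unbounded block lets the thread notice task termination, which is what lets steps S209/S308 break out of the loops.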
- a video to be synthesized is acquired through a rendering thread, a basic rendering context is created according to the video to be synthesized, and video frames are rendered according to the video to be synthesized.
- the embodiments of the present invention provide a computer-readable storage medium on which a GPU-based video synthesis program is stored; when the GPU-based video synthesis program is executed by a processor, the above-mentioned GPU-based video synthesis method is implemented.
- by storing the GPU-based video synthesis program, the processor implements the above-mentioned GPU-based video synthesis method when executing the program, thereby reducing the waiting time in the decoding and rendering process, reducing the CPU load, and effectively improving the efficiency of video synthesis.
- the embodiments of the present invention provide a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented.
- the GPU-based video synthesis program is stored in the memory, so that when the processor executes the program, the above-mentioned GPU-based video synthesis method is implemented, thereby reducing the waiting time in the decoding and rendering process, reducing the CPU load, and effectively improving the efficiency of video synthesis.
- embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- any reference signs placed between parentheses shall not be construed as limiting the claim.
- the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
- the word “a” or “an” preceding an element does not preclude the presence of a plurality of such elements.
- the invention can be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means can be embodied by one and the same item of hardware.
- the use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
- first and second are only used for description purposes, and cannot be interpreted as indicating or implying relative importance or the number of indicated technical features. Thus, a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
- “plurality” means two or more, unless otherwise expressly and specifically defined.
- unless otherwise expressly specified and limited, the terms “installed”, “connected”, “coupled”, “fixed”, and the like should be understood in a broad sense: a connection may be fixed, detachable, or integrated; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium; and it may be an internal connection between two elements or an interaction relationship between two elements.
- a first feature being “on” or “under” a second feature may mean that the first and second features are in direct contact, or that the first and second features are in indirect contact through an intermediary.
- the first feature being “above”, “over”, or “on top of” the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
- the first feature being “below”, “beneath”, or “under” the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (10)
- A GPU-based video synthesis system, characterized by including: a rendering module, an encoding module, a first cache module, and a second cache module; wherein the rendering module is used to obtain a video to be synthesized, create a basic rendering context according to the video to be synthesized, render video frames according to the video to be synthesized, and store the rendered video frames in the first cache module; and the encoding module is used to create a shared rendering context according to the basic rendering context, determine whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encode the rendered video frame and store the encoded data in the second cache module.
- The GPU-based video synthesis system according to claim 1, characterized in that the rendering module is further used to determine whether encoded data exists in the second cache module, and when the determination result is yes, recycle the encoded data.
- The GPU-based video synthesis system according to claim 1, characterized in that the encoding module is further used to determine, after storing the encoded data in the second cache module, whether the current encoding task has ended; if not, return to the step of determining whether a rendered video frame exists in the first cache module; if yes, store the unencoded video frames in the second cache module, recycle the shared rendering context, and exit the current encoding task after the recycling.
- The GPU-based video synthesis system according to claim 3, characterized in that the rendering module is further used to determine, after storing the rendered video frames in the first cache module, whether the current rendering task has ended; if not, return to the step of rendering video frames according to the video to be synthesized; if yes, determine whether the current encoding task has exited, and when the determination result is yes, determine whether data is stored in the second cache module, and when the determination result is yes, recycle the data currently stored in the second cache module and exit the basic rendering context after the recycling is complete.
- A GPU-based video synthesis method, characterized by including the following steps: obtaining a video to be synthesized through a rendering thread, creating a basic rendering context according to the video to be synthesized, and rendering video frames according to the video to be synthesized; storing the rendered video frames in the first cache module; creating, through an encoding thread, a shared rendering context according to the basic rendering context, determining whether a rendered video frame exists in the first cache module, and, when the determination result is yes, encoding the rendered video frame; and storing the encoded data in the second cache module.
- The GPU-based video synthesis method according to claim 5, characterized in that after storing the rendered video frames in the first cache module, the rendering thread further determines whether encoded data exists in the second cache module, and when the determination result is yes, recycles the encoded data.
- The GPU-based video synthesis method according to claim 5, characterized in that after storing the encoded data in the second cache module, the encoding thread further determines whether the current encoding task has ended; if not, it returns to the step of determining whether a rendered video frame exists in the first cache module; if yes, it stores the unencoded video frames in the second cache module, recycles the shared rendering context, and exits the current encoding task after the recycling.
- The GPU-based video synthesis method according to claim 7, characterized in that after storing the rendered video frames in the first cache module, the rendering thread determines whether the current rendering task has ended; if not, it returns to the step of rendering video frames according to the video to be synthesized; if yes, it determines whether the current encoding task has exited, and when the determination result is yes, determines whether data is stored in the second cache module, and when the determination result is yes, recycles the data currently stored in the second cache module and exits the basic rendering context after the recycling is complete.
- A computer-readable storage medium, characterized in that a GPU-based video synthesis program is stored thereon, and when the GPU-based video synthesis program is executed by a processor, the GPU-based video synthesis method according to any one of claims 4-8 is implemented.
- A computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the processor executes the program, the GPU-based video synthesis method according to any one of claims 4-8 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110129959.5 | 2021-01-29 | ||
CN202110129959.5A CN112954233B (zh) | 2021-01-29 | 2021-01-29 | 基于gpu的视频合成系统及方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022160744A1 true WO2022160744A1 (zh) | 2022-08-04 |
Family
ID=76240155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/119418 WO2022160744A1 (zh) | 2021-01-29 | 2021-09-18 | 基于gpu的视频合成系统及方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112954233B (zh) |
WO (1) | WO2022160744A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116993887A (zh) * | 2023-09-27 | 2023-11-03 | 湖南马栏山视频先进技术研究院有限公司 | 一种视频渲染异常的响应方法及系统 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112954233B (zh) * | 2021-01-29 | 2022-11-18 | 稿定(厦门)科技有限公司 | 基于gpu的视频合成系统及方法 |
CN115375530A (zh) * | 2022-07-13 | 2022-11-22 | 北京松应科技有限公司 | 一种多gpu协同渲染方法、系统、装置及存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104091608A (zh) * | 2014-06-13 | 2014-10-08 | 北京奇艺世纪科技有限公司 | 一种基于ios设备的视频编辑方法及装置 |
CN106462393A (zh) * | 2014-05-30 | 2017-02-22 | 苹果公司 | 用于统一应用编程接口和模型的系统和方法 |
CN107277616A (zh) * | 2017-07-21 | 2017-10-20 | 广州爱拍网络科技有限公司 | 视频特效渲染方法、装置及终端 |
GB2550150A (en) * | 2016-05-10 | 2017-11-15 | Advanced Risc Mach Ltd | Data processing systems |
CN107993183A (zh) * | 2017-11-24 | 2018-05-04 | 暴风集团股份有限公司 | 图像处理装置、方法、终端和服务器 |
CN111901635A (zh) * | 2020-06-17 | 2020-11-06 | 北京视博云信息技术有限公司 | 一种视频处理方法、装置、存储介质及设备 |
CN112218117A (zh) * | 2020-09-29 | 2021-01-12 | 北京字跳网络技术有限公司 | 视频处理方法及设备 |
CN112954233A (zh) * | 2021-01-29 | 2021-06-11 | 稿定(厦门)科技有限公司 | 基于gpu的视频合成系统及方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8755515B1 (en) * | 2008-09-29 | 2014-06-17 | Wai Wu | Parallel signal processing system and method |
GB2502620B (en) * | 2012-06-01 | 2020-04-22 | Advanced Risc Mach Ltd | A parallel parsing video decoder and method |
CN109996104A (zh) * | 2019-04-22 | 2019-07-09 | 北京奇艺世纪科技有限公司 | 一种视频播放方法、装置及电子设备 |
- 2021-01-29: CN CN202110129959.5A patent CN112954233B (active)
- 2021-09-18: WO PCT/CN2021/119418 patent WO2022160744A1 (application filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116993887A (zh) * | 2023-09-27 | 2023-11-03 | 湖南马栏山视频先进技术研究院有限公司 | 一种视频渲染异常的响应方法及系统 |
CN116993887B (zh) * | 2023-09-27 | 2023-12-22 | 湖南马栏山视频先进技术研究院有限公司 | 一种视频渲染异常的响应方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN112954233A (zh) | 2021-06-11 |
CN112954233B (zh) | 2022-11-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21922336; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21922336; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.01.2024) |