CN110381267B - Method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering - Google Patents
- Publication number
- CN110381267B CN110381267B CN201910774596.3A CN201910774596A CN110381267B CN 110381267 B CN110381267 B CN 110381267B CN 201910774596 A CN201910774596 A CN 201910774596A CN 110381267 B CN110381267 B CN 110381267B
- Authority
- CN
- China
- Prior art keywords
- machine
- rendering
- machines
- editing
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 21
- 230000011218 segmentation Effects 0.000 title claims abstract description 12
- 238000009877 rendering Methods 0.000 claims abstract description 53
- 230000001360 synchronised effect Effects 0.000 claims abstract description 10
- 230000005540 biological transmission Effects 0.000 claims description 8
- 230000000694 effects Effects 0.000 claims description 7
- 230000004927 fusion Effects 0.000 claims description 3
- 238000005192 partition Methods 0.000 claims 1
- 238000010586 diagram Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 238000010008 shearing Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering, relating to the technical field of video editing and comprising the following steps. S1: form an InfiniBand network cluster from a plurality of machines, configuring an InfiniBand network card for each machine. S2: give every machine the same timeline information, so that the timeline data structures of all machines stay synchronized. S3: decode and edit the input video file to obtain decoded video data, divide the decoded video data spatially into a plurality of regions, render the decoded video data of each region in parallel, and finally obtain a rendering result for each region. S4: stitch the rendering results of all regions into a complete picture to finish editing. By forming a cluster from multiple machines and splitting the heavy computation of multilayer large-format editing across different machines through intra-frame segmentation, free and smooth multilayer large-format real-time editing is realized.
Description
Technical Field
The invention relates to the technical field of video editing, in particular to a method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering.
Background
Large-format video carries a huge amount of data. Taking 8K as an example, a nonlinear editing system that edits even a single layer of 8K 4:2:2 10-bit 50P video must process more than 50 Gbit of data per second, and that processing includes computationally heavy steps such as decoding and rendering. The hardware capability of a current single-machine nonlinear editing system is far from sufficient for editing 8K video, which is embodied in the following aspects:
1. CPU decoding capability is insufficient: when an existing high-end dual-socket Intel Xeon Gold platform decodes 8K video, only one layer can be handled in real time; two layers cannot, and real-time decoding of multiple layers is out of the question;
2. the GPU rendering capability is insufficient, and the 8K format rendering is difficult to achieve real time under the condition of complex special effects;
3. GPU uplink and downlink data speed is insufficient: current GPUs sit in PCIe 3.0 ×16 slots and, limited by PCIe bandwidth, can barely sustain one uplink path and one downlink path of 8K data simultaneously; uplink and downlink of multiple layers of 8K data cannot be supported;
At present, nonlinear editing products on the market are all single-machine systems, and the hardware capability of a single-machine nonlinear editing system cannot meet users' requirements for editing 8K video. Developing an editing method capable of unlimited breadth, unlimited layer count and unlimited frame rate is therefore the main task at present.
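The "more than 50 Gbit per second" figure above can be checked with back-of-the-envelope arithmetic. The sketch below assumes each 10-bit sample is stored in a 16-bit container word, a common practice in baseband pipelines; the patent does not state the storage format, so this is an assumption:

```python
# Approximate uncompressed data rate of one layer of 8K 4:2:2 10-bit 50P video.
WIDTH, HEIGHT, FPS = 7680, 4320, 50
SAMPLES_PER_PIXEL = 2   # 4:2:2 -> on average 1 luma + 1 chroma sample per pixel
BITS_PER_SAMPLE = 16    # assumption: each 10-bit sample padded to a 16-bit container

bits_per_frame = WIDTH * HEIGHT * SAMPLES_PER_PIXEL * BITS_PER_SAMPLE
gbits_per_second = bits_per_frame * FPS / 1e9
print(f"{gbits_per_second:.1f} Gbit/s")  # ~53.1 Gbit/s, i.e. "more than 50 Gbit" per second
```

Even packed tightly at 10 bits per sample the stream is about 33 Gbit/s, so either way a single 8K layer already saturates conventional Ethernet links.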
Disclosure of Invention
The invention aims to provide a method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering, so as to solve the problem that nonlinear editing products on the market are single-machine systems whose hardware capability cannot meet users' requirements for editing 8K video.
The invention specifically adopts the following technical scheme for realizing the purpose:
the method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering comprises the following steps:
s1: forming an InfiniBand network cluster by using a plurality of machines, and configuring an InfiniBand network card for each machine;
s2: setting each machine to have the same timeline information, so that the timeline data structures of all the machines are synchronous;
s3: decoding and editing an input video file to obtain decoded video data, then dividing the decoded video data into a plurality of regions according to space, rendering the decoded video data of each region in parallel, and finally obtaining a rendering result of each region;
s4: and (5) stitching the rendering results of the regions, splicing the rendering results into a complete picture, and finishing editing.
Further, the plurality of machines forming the InfiniBand network cluster in S1 includes: the system comprises a front-end machine, a plurality of decoding machines and a plurality of rendering machines.
Further, in S1, a 100Gbps InfiniBand switch is used to implement data transmission of multiple machines.
Further, in S2, the timeline information of each machine is abstracted into a timeline data structure by using a nonlinear editing system, and the timeline data structures of all the machines are synchronized by performing serialization and deserialization on the timeline data structures.
Further, in S3, the video file is input, the front-end machine is used to receive a UI operation of the user, the UI operation is converted into a data structure and an operation command of the nonlinear editing system, the back-end decoding machine and the rendering machine are driven to operate, and the front-end machine performs spatial division of the decoded video data.
Further, in S3, each decoding machine is driven by the front-end machine to receive the operation command, decode and edit the input video file to obtain decoded video data, and send the decoded video data to the corresponding rendering machine according to the space division of the front-end machine, where the number of video files decoded by each decoding machine is determined according to the video format and the format complexity of the video file.
Further, in S3, each rendering machine receives the decoded video data in the corresponding region according to a preset corresponding relationship, and the decoded video data is uplinked to the GPU for fusion and superimposition of video frames and rendering of various special effects, so as to obtain a rendering result, and then the rendering result is downlinked to the memory and is sent to the front-end machine.
Further, in S4, the front-end processor receives the rendering results of each region, stitches the rendering results according to the size and position of the space division, and splices the rendering results into a complete picture for display.
The invention has the following beneficial effects:
1. The invention adopts the InfiniBand network commonly used in supercomputing architectures, whose core technology is RDMA (Remote Direct Memory Access) and which greatly shortens latency and raises bandwidth. With a 100Gbps InfiniBand switch and one or two InfiniBand network cards in each machine, the in/out bandwidth of each machine can reach 100Gbps or 200Gbps, satisfying the low-latency transmission of 8K large-format baseband data.
2. Each machine of the present invention has the same timeline information, and the timeline data structures of all machines are synchronized so that all data changes on the timeline are synchronized, such as cutting/moving/trimming/deleting material, adding/deleting/modifying special effects, adding/deleting subtitles, and adding/deleting tracks. All machines in the cluster therefore hold the same timeline information and can cooperate correctly.
3. The invention forms a cluster from a plurality of machines, splits the heavy computation of multilayer large-format editing across different machines through intra-frame segmentation, and then uses a very high-speed network technology to transfer the data, so that a user can perform free and smooth multilayer 8K large-format real-time editing whose operating feel is not much different from 4K editing. The architecture is very flexible and easy to expand, and can further realize free editing with unlimited breadth, unlimited layer count and unlimited frame rate.
Drawings
FIG. 1 is a schematic process flow diagram of an embodiment of the present invention.
FIG. 2 is a schematic diagram of a cluster architecture according to an embodiment of the present invention.
Detailed Description
For a better understanding of the present invention by those skilled in the art, the present invention will be described in further detail below with reference to the accompanying drawings and the following examples.
Example 1
As shown in fig. 1, this embodiment provides a method for implementing large-format multi-layer real-time editing based on intra-frame segmentation clustering, which includes the following steps:
S1: traditional network technologies such as Ethernet, whether 10GE, 25GE or 40GE, cannot meet the transmission requirements of 8K data, so this embodiment employs an InfiniBand network, whose core technology is RDMA (Remote Direct Memory Access) and which greatly shortens latency and raises bandwidth. In this embodiment, multiple machines form an InfiniBand network cluster, a 100Gbps InfiniBand switch carries the data transmission between machines, and each machine is configured with one or two InfiniBand cards, so the access bandwidth of each machine can reach 100Gbps or 200Gbps, satisfying the low-latency transmission of large-format 8K baseband data. The machines forming the InfiniBand network cluster include: a front-end machine, a plurality of decoding machines and a plurality of rendering machines;
S2: each machine is set to have the same timeline information, so that the timeline data structures of all machines are synchronized and all data changes on the timeline are synchronized, such as cutting/moving/trimming/deleting material, adding/deleting/modifying special effects, adding/deleting subtitles, and adding/deleting tracks; all machines in the cluster thus have the same timeline information and can cooperate correctly;
Specifically, the timeline information of each machine is abstracted into a timeline data structure by the nonlinear editing system. When the front-end machine operates, any change of the timeline information is reflected as a change of the data in the timeline data structure. The front-end machine serializes the timeline data structure into binary data and, through the direct-write approach of RDMA, sends the binary data to memory registered by the decoding machine or the rendering machine; because the amount of timeline data is small compared with 4K or 8K video data, the latency is very low, less than 1ms. The back-end machine then receives the binary data and deserializes it into a timeline data structure consistent with the content on the front-end machine, thereby synchronizing the timeline data structures of all machines. When a decoding machine or a rendering machine operates, the timeline data structures of the other machines are synchronized in the same way;
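The serialize/deserialize round trip described above can be sketched in miniature. The patent does not specify the wire format or the timeline fields, so the `Timeline`/`Clip` structures and the JSON encoding below are illustrative assumptions; in the real system the binary payload would be written directly into registered remote memory via RDMA rather than sent over a socket:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Clip:
    source: str      # hypothetical fields standing in for the patent's
    start: float     # unspecified timeline data structure
    duration: float

@dataclass
class Timeline:
    width: int
    height: int
    fps: float
    clips: list = field(default_factory=list)

def serialize(tl: Timeline) -> bytes:
    # Front-end machine: timeline data structure -> binary payload for RDMA write.
    return json.dumps(asdict(tl)).encode("utf-8")

def deserialize(payload: bytes) -> Timeline:
    # Back-end machine: binary payload -> timeline consistent with the front end.
    d = json.loads(payload.decode("utf-8"))
    d["clips"] = [Clip(**c) for c in d["clips"]]
    return Timeline(**d)

tl = Timeline(7680, 4320, 50.0, [Clip("a.mxf", 0.0, 3.0)])
assert deserialize(serialize(tl)) == tl  # every machine reconstructs the same timeline
```

Because the payload is tiny relative to 8K frame data, keeping every machine's copy consistent on each edit is cheap, which is what makes the sub-millisecond synchronization claim plausible.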
S3: decode and edit the input video file to obtain decoded video data, then divide the decoded video data spatially into a plurality of regions and render the decoded video data of each region in parallel, finally obtaining a rendering result for each region. For the 8K format, for example, a typical division is into 4 regions corresponding to the upper-left, upper-right, lower-left and lower-right quarters; the decoded video data of the 4 regions are rendered in parallel, and a rendering result is finally obtained for each region. Specifically:
a video file is input; the front-end machine receives the user's UI operations, converts them into data structures and operation commands of the nonlinear editing system, notifies the back-end decoding machines and rendering machines to synchronize, and drives them to work; the front-end machine performs the spatial division of the decoded video data according to factors such as the breadth of the timeline, the complexity of the timeline and the capability of the machines;
each decoding machine, driven by the front-end machine, receives a decoding command, decodes and edits the input video file to obtain decoded video data, performs the processing required by the timeline information after decoding, such as scaling or cropping where necessary, and finally transmits the processed decoded video data to the corresponding rendering machine through the InfiniBand network according to the spatial division made by the front-end machine; the number of video files decoded by each decoding machine is determined according to the video breadth and format complexity of the video files;
each rendering machine receives the decoded video data of its corresponding region according to a preset correspondence and uplinks it to the GPU for fusion and superposition of the multilayer video pictures and for graphics rendering of the subtitles, special effects and other elements appearing in that region; the rendering result is then downlinked to memory and sent to the front-end machine through the InfiniBand network;
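The "fusion and superposition of multilayer video pictures" each rendering machine performs can be illustrated with a toy alpha-over compositor. This is a minimal CPU sketch on nested lists of grayscale values, purely to show the blend arithmetic; the actual system does this per region on the GPU, and `blend_over` is a name invented here, not an API from the patent:

```python
def blend_over(top, bottom, alpha):
    """Composite layer `top` over layer `bottom` with constant opacity `alpha` (0..1)."""
    return [
        [alpha * t + (1.0 - alpha) * b for t, b in zip(trow, brow)]
        for trow, brow in zip(top, bottom)
    ]

# One region of a two-layer timeline: an upper video layer at 50% opacity
# composited over the background layer of the same region.
base   = [[0.0, 0.0], [0.0, 0.0]]
layer1 = [[1.0, 1.0], [1.0, 1.0]]
result = blend_over(layer1, base, alpha=0.5)
assert result == [[0.5, 0.5], [0.5, 0.5]]
```

Because each machine only blends the layers inside its own region, adding layers increases per-machine work linearly while the region size, and hence the per-machine pixel count, stays fixed.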
s4: stitching the rendering results of the regions, splicing the rendering results into a complete picture, and finishing editing, wherein the stitching method specifically comprises the following steps:
and the front-end machine receives the rendering results of each area, stitches the rendering results according to the size and the position during space division, splices the rendering results into a complete picture, finally outputs the complete picture to a monitor through the board card, and simultaneously displays the non-coded MV picture on the display.
In this embodiment, a plurality of machines form a cluster; the front-end machine performs intra-frame segmentation of the decoded video data, the heavy computation of multilayer large-format editing is split across different machines, and a very high-speed network technology carries the data, so that a user can perform free and smooth multilayer large-format real-time editing whose operating feel is not much different from 4K editing. As shown in fig. 2, the architecture of this embodiment is very flexible and easy to expand, and can further realize free editing with unlimited breadth, layer count and frame rate.
The above description is only a preferred embodiment of the present invention, and not intended to limit the present invention, the scope of the present invention is defined by the appended claims, and all structural changes that can be made by using the contents of the description and the drawings of the present invention are intended to be embraced therein.
Claims (7)
1. The method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering is characterized by comprising the following steps of:
s1: forming an InfiniBand network cluster by using a plurality of machines, and configuring an InfiniBand network card for each machine;
s2: setting each machine to have the same timeline information, so that the timeline data structures of all the machines are synchronous;
s3: decoding and editing an input video file to obtain decoded video data, then dividing the decoded video data into a plurality of regions according to space, rendering the decoded video data of each region in parallel, and finally obtaining a rendering result of each region;
s4: stitching the rendering results of the regions, splicing the rendering results into a complete picture, and finishing editing;
in S2, the timeline information of each machine is abstracted into a timeline data structure by using the nonlinear editing system, the front-end machine sends binary data to a memory registered in the decoding machine or the rendering machine by serializing and deserializing the timeline data structure, and the back-end machine receives the binary data and deserializes the binary data to form a timeline data structure consistent with the content of the front-end machine, thereby synchronizing the timeline data structures of all machines.
2. The method for realizing large-format multi-layer real-time editing based on intra-frame segmentation clustering as claimed in claim 1, wherein the plurality of machines forming the InfiniBand network cluster in S1 comprise: the system comprises a front-end machine, a plurality of decoding machines and a plurality of rendering machines.
3. The method for realizing large-format multi-layer real-time editing based on intra-frame segmentation clustering as claimed in claim 1, wherein a 100Gbps InfiniBand switch is utilized in S1 to realize data transmission of multiple machines.
4. The method according to claim 2, wherein the video file is input in S3, the front-end machine receives a UI operation from a user, converts the UI operation into a data structure and an operation command of a nonlinear editing system, drives a back-end decoding machine and a rendering machine to work, and performs spatial division of decoded video data by the front-end machine.
5. The method according to claim 4, wherein in step S3, each decoding machine is driven by a front-end machine, receives an operation command, decodes and edits the input video file to obtain decoded video data, and sends the decoded video data to a corresponding rendering machine according to the spatial partition of the front-end machine, and the number of video files decoded by each decoding machine is determined according to the video format and the format complexity of the video file.
6. The method according to claim 2, wherein in S3, each rendering machine receives decoded video data in a corresponding region according to a preset correspondence, and sends the decoded video data to the GPU for fusion and superimposition of video frames and rendering of various special effects to obtain a rendering result, and then sends the rendering result to the memory and to the front-end machine.
7. The method according to claim 2, wherein in S4, the front-end processor receives rendering results of each region, stitches the rendering results according to the size and position of the space division, and splices the rendering results into a complete picture for display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910774596.3A CN110381267B (en) | 2019-08-21 | 2019-08-21 | Method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910774596.3A CN110381267B (en) | 2019-08-21 | 2019-08-21 | Method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110381267A CN110381267A (en) | 2019-10-25 |
CN110381267B true CN110381267B (en) | 2021-08-20 |
Family
ID=68260207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910774596.3A Active CN110381267B (en) | 2019-08-21 | 2019-08-21 | Method for realizing large-format multilayer real-time editing based on intra-frame segmentation clustering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110381267B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111614975B (en) * | 2020-05-08 | 2022-07-12 | 深圳拙河科技有限公司 | Hundred million-level pixel video playing method, device, medium and equipment |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750297A (en) * | 2011-11-11 | 2012-10-24 | 新奥特(北京)视频技术有限公司 | Rendering and compositing method and system of cluster packaging |
CN103310475B (en) * | 2012-03-16 | 2017-09-12 | 腾讯科技(深圳)有限公司 | animation playing method and device |
CN102752594B (en) * | 2012-06-21 | 2015-01-28 | 浙江大学 | Cluster rendering method based on image decoding and decoding and parallel transmission |
CN103699364B (en) * | 2013-12-24 | 2016-09-21 | 四川川大智胜软件股份有限公司 | A kind of three-dimensional graphics renderer method based on parallel drawing technology |
CN103927780B (en) * | 2014-05-05 | 2018-05-22 | 广东威创视讯科技股份有限公司 | The method and three-dimensional display system that a kind of more video cards render |
CN107660281B (en) * | 2015-05-19 | 2021-06-08 | 华为技术有限公司 | System and method for synchronizing distributed computing runtime |
WO2017058951A1 (en) * | 2015-09-30 | 2017-04-06 | Sony Interactive Entertainment America Llc | Systems and methods for providing time-shifted intelligently synchronized game video |
US20170134714A1 (en) * | 2015-11-11 | 2017-05-11 | Microsoft Technology Licensing, Llc | Device and method for creating videoclips from omnidirectional video |
JP6662063B2 (en) * | 2016-01-27 | 2020-03-11 | ヤマハ株式会社 | Recording data processing method |
CN108093151B (en) * | 2016-11-22 | 2020-03-06 | 京瓷办公信息系统株式会社 | Image forming apparatus and non-transitory computer-readable recording medium |
US10115223B2 (en) * | 2017-04-01 | 2018-10-30 | Intel Corporation | Graphics apparatus including a parallelized macro-pipeline |
KR101987356B1 (en) * | 2017-07-20 | 2019-06-10 | 이에스이 주식회사 | An image processing apparatus and method for image parallel rendering processing |
US10360832B2 (en) * | 2017-08-14 | 2019-07-23 | Microsoft Technology Licensing, Llc | Post-rendering image transformation using parallel image transformation pipelines |
-
2019
- 2019-08-21 CN CN201910774596.3A patent/CN110381267B/en active Active
Non-Patent Citations (1)
Title |
---|
"Syncing Shared Multimedia through Audiovisual Bimodal Segmentation";C. A. Dimoulas and A. L. Symeonidis;《IEEE MultiMedia》;20150503;第22卷(第3期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN110381267A (en) | 2019-10-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |