CN112637520B - Dynamic video editing method and system - Google Patents


Info

Publication number
CN112637520B
CN112637520B (application CN202011539365.3A)
Authority
CN
China
Prior art keywords
video
node
data
directed graph
graph data
Prior art date
Legal status
Active
Application number
CN202011539365.3A
Other languages
Chinese (zh)
Other versions
CN112637520A (en)
Inventor
王家伟 (Wang Jiawei)
Current Assignee
Xinhua Fusion Media Technology Development Beijing Co ltd
Xinhua Zhiyun Technology Co ltd
Original Assignee
Xinhua Zhiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinhua Zhiyun Technology Co ltd filed Critical Xinhua Zhiyun Technology Co ltd
Priority to CN202011539365.3A
Publication of CN112637520A
Application granted
Publication of CN112637520B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867 Retrieval characterised by using metadata using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Library & Information Science (AREA)
  • Development Economics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to the technical field of video synthesis, and in particular discloses a dynamic video editing method and system. The method comprises: inputting element nodes and logic nodes, connecting them, and generating directed graph data; acquiring video elements and, in combination with the directed graph data, generating video element directed graph data that expresses the presentation logic of the video elements; calculating node durations from the video element directed graph data and the video elements to generate video element weighted directed graph data carrying duration data; calculating, from the weighted directed graph data, the start time and presentation duration of each element node and the total video duration to generate video timeline data; and synthesizing the final video from the timeline data. The invention solves the prior-art problem that, when a video containing variable-duration elements (such as data visualization elements) is edited on a timeline, the time positions of the other video elements must be adjusted every time the duration of a variable-duration element changes.

Description

Dynamic video editing method and system
Technical Field
The invention relates to video synthesis technology, and in particular to video synthesis for elements whose durations vary dynamically.
Background
With the advent of the 5G era, video has increasingly become an important medium for transmitting information, and people tend to acquire information through video because it is easier, more intuitive and more convenient to watch. A data visualization video combines data with visualization technology and presents the data visually and with animation, for example as vivid charts; the visualized content is usually dynamic, and its duration is usually determined by the data.
Currently, common video editors such as Adobe After Effects generally use a timeline-based editing mode in which the time information of every video element is relative to the whole timeline. Since the data may change the duration of the visualization elements, the time of each video element must be adjusted whenever the data changes, and each data visualization video must be produced through a separate video project file.
In a data visualization video, usually only the data of the visualization elements needs to be replaced, but a change in the data may change the duration of those elements; when editing on a timeline, the other video elements in the video must then be adjusted and re-edited manually, over and over.
Disclosure of Invention
The invention provides a dynamic video editing method, which solves the prior-art problem that, when a video containing variable-duration elements (such as data visualization elements) is edited on a timeline, the time positions of the other video elements must be adjusted every time the duration of a variable-duration element changes.
A method of dynamic video editing comprising the steps of:
inputting element nodes and logic nodes, and connecting the element nodes and the logic nodes to generate directed graph data;
acquiring video elements, and generating video element directed graph data by combining the directed graph data;
generating video element weighted directed graph data containing node display duration according to the video element directed graph data and the video elements;
according to the video element weighted directed graph data, calculating time position information and total video duration of each element node to generate video time axis data;
and synthesizing the final video according to the video time axis data.
Optionally, the directed graph data includes node data and connecting line data; the node data includes element nodes and logic nodes; all element nodes and logic nodes are stored in a node array and all connecting lines are stored in a node connecting line array, generating video node directed graph data.
The directed graph data is analyzed, and the video node directed graph data is traversed from the start node in the node array via the connecting line data, generating, according to the logic of the logic nodes, video element directed graph data that expresses the presentation logic of the video elements.
Optionally, the video elements are assembled according to the video element directed graph data, the durations of all video elements are obtained, the actual duration of each element node after assembly is calculated, and video element weighted directed graph data containing the node presentation durations is generated.
Optionally, the method for calculating the time position information and the total video duration of each element node includes:
traversing the video element weighted directed graph data, obtaining the path with the longest duration among the paths before the current element node, and recording that duration as the play start time of the current element node; obtaining the duration of the current element node and recording it as the play duration;
and when the termination node is reached, recording the duration of the longest-duration path before it as the total video duration.
Optionally, during the traversal, the node data in the paths before the current node are traversed first, in connection order, before the current node is visited.
Optionally, the logic nodes include a start node, a termination node, a condition judgment node and loop nodes;
the start node has one or more outgoing connecting lines and no incoming connecting lines;
the termination node has one or more incoming connecting lines, and the last node of the video is the termination node;
the condition judgment node is used for judging whether the element nodes after it are presented;
and the loop nodes comprise a loop start node and a loop end node, used to cycle through the element node data between them.
Optionally, the element node data includes time information comprising a delay time relative to the previous node and the video element presentation duration of the current node, where the element presentation duration may be dynamic; the presentation duration of logic node data is 0.
The invention also provides a dynamic video editing system, which solves the problem that production cannot be automated: once the video presentation logic is determined, a new video can be generated simply by replacing the required data.
The system comprises a node editor and a video editor. The node editor is used to edit element nodes and logic nodes and generate directed graph data; the video editor is used to call the directed graph data and the video elements, generate the video element directed graph data and then the video element weighted directed graph data, calculate the presentation durations to generate timeline data, and generate the video from the timeline data.
Optionally, a user edits and stores a plurality of directed graph data sets with the node editor;
the user then selects the video elements and the corresponding directed graph data in the video editor, and the video editor automatically generates and outputs the video.
The present invention also discloses a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described dynamic video editing method.
The invention also discloses a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the dynamic video editing method.
The invention has the beneficial effects that:
compared with the traditional method of editing video on a timeline, which cannot encode the presentation logic of video elements, especially logic driven by data (for example, one group of video elements is shown when a value is rising and another group when it is falling), so the video elements must be re-edited manually against the data every time the data changes. For example, stock market quotation data A changes every day, where A covers thousands of stocks; existing timeline editing cannot express the video presentation logic, cannot automatically and dynamically shift the time position of each video element as durations change, and a large amount of manpower would be needed to produce data visualization videos for all the quotation data.
According to the technical scheme disclosed by the invention, in a data visualization video whose presentation logic is determined, only the data of the visualization elements needs to be replaced, and the data visualization video is produced automatically.
Specifically, when video element durations are dynamic rather than fixed, the video element directed graph data is generated by editing and stored as a video template. When the content of the dynamic video elements changes, the video template is parsed to determine the play start time, play duration and play end time of every video element, a real video timeline is generated, and the page is then composited into the video.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a general flowchart of a dynamic video editing method;
FIG. 2 is an exemplary graph of directed graph data;
fig. 3 is a traversal sequence diagram of video node links in embodiment 1;
FIG. 4 is the video element weighted directed graph generated by adding durations to the video element directed graph data;
fig. 5 is a traversal sequence diagram for generating video timeline data.
Note: the arc arrows in fig. 3 and 5 indicate the traversal order.
Detailed Description
The present invention will be described in further detail with reference to examples, which are illustrative of the present invention and are not to be construed as being limited thereto.
Example 1:
a method of dynamic video editing comprising the steps of:
step 1, inputting element nodes and logic nodes, connecting the element nodes and the logic nodes, and generating directed graph data;
step 2, acquiring video elements, and generating video element directed graph data with a video element showing logical relationship by combining the directed graph data;
step 3, calculating the node duration according to the video element directed graph data and the video elements, and generating video element weighted directed graph data containing node display duration;
step 4, calculating time position information and total video duration of each element node according to the video element weighted directed graph data to generate video time axis data;
and 5, synthesizing the final video according to the video time axis data.
The element nodes comprise the presentation elements in the video, such as text, charts, pictures and other elements the video needs to present.
The time information contained in the element node data includes: the delay time relative to the previous node and the element presentation duration.
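As an illustration of the node data just described, the records below sketch one possible in-memory layout. This is a hypothetical Python sketch; the field names (`delay`, `duration`, `kind`) are assumptions for illustration and do not appear in the patent:

```python
from dataclasses import dataclass

@dataclass
class ElementNode:
    """A presentation element (text, chart, picture, ...)."""
    node_id: str
    delay: float = 0.0     # delay relative to the previous node, in seconds
    duration: float = 0.0  # presentation duration; may be computed from data

@dataclass
class LogicNode:
    """Start/termination/condition/loop node; occupies no presentation time."""
    node_id: str
    kind: str  # e.g. "start", "end", "condition", "loop_start", "loop_end"

    @property
    def duration(self) -> float:
        return 0.0  # logic node presentation duration is always 0

title = ElementNode("headline", delay=0.0, duration=3.0)
start = LogicNode("s0", kind="start")
```

The fixed zero duration on `LogicNode` mirrors the statement that logic node data has a presentation duration of 0.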
The invention also solves the problem of how to automatically adjust the time position of each video element in a video and automatically generate a final video time axis when facing the video elements with dynamically changing duration.
Specifically, a connecting line between two nodes indicates that the node the line enters begins to be presented after the node the line leaves finishes presenting. In addition, an element node is defined to allow only one incoming and one outgoing connection, and when an element node's duration is 0, its presentation continues until the first logic node along its connections is reached.
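The single-in, single-out constraint on element nodes can be checked mechanically. The helper below is a hypothetical sketch (the patent does not prescribe an implementation); connecting lines are modeled as `(from_id, to_id)` pairs:

```python
from collections import Counter

def validate_element_links(element_ids, links):
    """Return the element nodes that violate the one-in/one-out rule.
    links is a list of (from_id, to_id) pairs."""
    out_deg = Counter(src for src, _ in links)
    in_deg = Counter(dst for _, dst in links)
    return [n for n in element_ids if out_deg[n] > 1 or in_deg[n] > 1]

links = [("start", "headline"), ("headline", "chart"), ("chart", "end")]
bad = validate_element_links(["headline", "chart"], links)  # no violations
```

An editor could run such a check before saving directed graph data as a template, so that invalid graphs are rejected early.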
The logic nodes express the logic of how the element nodes are presented and occupy no presentation time. The following logic nodes are disclosed in this example:
A. Start node: the node at which the video begins. A video has exactly one start node, which has one or more outgoing connecting lines and no incoming connecting lines;
B. Termination node: indicates that the video elements connected into it have finished presenting. The termination node has one or more incoming connecting lines, and the last node of the video is the termination node;
C. Condition judgment node: judges whether the element nodes after it are presented; the node has one or more connecting lines in and out;
D. Loop nodes: comprise a loop start node and a loop end node, used to cyclically present the element nodes between them.
The loop start node carries array data, the loop count equals the array length, and on each pass the element content inside the loop is presented according to the data of the current iteration; the loop start node and the loop end node each have one or more connecting lines in and out.
The above are the logic nodes disclosed in this embodiment; the logic nodes are not limited to these and may be extended as needed.
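The loop-node semantics (loop count equals the length of the array carried by the loop start node, with each pass bound to the current item) can be sketched as a simple unrolling step. The node ids and the `#index` naming below are illustrative assumptions:

```python
def unroll_loop(body, data):
    """Expand the nodes between a loop start and loop end node: the loop
    count equals len(data), and each pass is bound to the current item.
    body is a list of element-node ids inside the loop."""
    expanded = []
    for i, item in enumerate(data):
        for node in body:
            expanded.append((f"{node}#{i}", item))  # one instance per pass
    return expanded

out = unroll_loop(["bar_chart"], ["day1", "day2"])
```

Each expanded instance presents the same element with a different data item, which matches the "different renderings for different data" behavior described later for templates.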
The connecting line data is a directional connection between two nodes; each connecting line records the node it leaves and the node it enters, corresponding to an edge in the directed graph data structure. The start time of each node relative to the video timeline depends on the longest of the paths connected before it.
Step 1 further describes that the directed graph data includes node data and link data, and the node data includes element nodes and logic nodes, which correspond to vertices in the directed graph data structure. Storing all element nodes and logic nodes into a node array, storing all connecting lines into a node connecting line array, and generating video node directed graph data; the video node directed graph data is stored as a video template. Fig. 2 is an exemplary diagram of the video node directed graph data.
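A node array plus a connecting-line array stored as a reusable template might look like the following JSON layout. The field names are assumptions for illustration, not the patent's schema:

```python
import json

# One possible template layout: a node array plus a connecting-line array.
template = {
    "nodes": [
        {"id": "s0",       "type": "logic",   "kind": "start"},
        {"id": "headline", "type": "element", "duration": 3.0},
        {"id": "e0",       "type": "logic",   "kind": "end"},
    ],
    "links": [
        {"from": "s0", "to": "headline"},
        {"from": "headline", "to": "e0"},
    ],
}

saved = json.dumps(template)   # persisted and reused as a video template
restored = json.loads(saved)
```

Because the template stores only structure and logic, the same saved graph can be replayed against different element data, as the later embodiments describe.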
Step 2, further describing, analyzing the directed graph data, traversing the video node directed graph data from a starting node in the node array through connecting line data, and generating associated video element directed graph data according to the logic of the logic node;
During the traversal, the node data in the paths before the current node are traversed first, in connection order, before the current node is visited.
The traversal process is illustrated in detail by fig. 3:
Starting from the start node, the 'headline' node is traversed first, and traversal then proceeds down the connections toward the 'loop start' node. Because the 'date text' node connected before the 'loop start' node has not yet been traversed, the 'date text' node is traversed first, and then the 'loop start' node. The loop in this example runs twice, so after all nodes inside the loop have been traversed, traversal returns to the loop start node and the nodes in the loop are traversed again until the loop count reaches 2. After the two passes, traversal reaches the 'loop end' node, which connects to two condition judgment nodes. In this example 'condition 1' evaluates to false and 'condition 2' to true, so the content after the 'condition 1' node is ignored and only the path after the 'condition 2' node is traversed. Before the termination node, two paths have not yet been traversed, so the nodes in those two paths are traversed first.
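The condition-pruning part of this traversal (ignoring the content behind a condition node that evaluates to false) can be sketched as a reachability walk. Loop unrolling and the predecessor-first ordering are omitted for brevity, and all names are illustrative:

```python
def prune_false_branches(succ, conditions, start="s0"):
    """Keep only nodes reachable from the start node without descending
    past a condition node that evaluates to False.
    succ: node -> list of successor nodes; conditions: condition node -> bool."""
    keep, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in keep:
            continue
        keep.add(n)
        if conditions.get(n, True):   # a False condition blocks its subtree
            stack.extend(succ.get(n, []))
    return keep

succ = {"s0": ["cond1", "cond2"], "cond1": ["a"], "cond2": ["b"],
        "a": ["e0"], "b": ["e0"]}
kept = prune_false_branches(succ, {"cond1": False, "cond2": True})
```

In the example graph, node `a` sits behind the false 'condition 1' and is dropped, while node `b` behind the true 'condition 2' survives, matching the traversal described above.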
Step 3, further described: after traversal according to the steps shown in fig. 3, the video element directed graph data that actually needs to be presented is generated. Each dynamic video element is then assembled with its dynamic content, such as data or voice, and the actual duration of each node is calculated; all logic nodes have a duration of 0 s and occupy no time. The durations are added to the video element directed graph data to generate the final video element weighted directed graph data, as shown in fig. 4.
Step 4, further described: according to the video element weighted directed graph data, the time position information and the total video duration are calculated to generate the video timeline data. The time position information is the time position relative to the whole timeline, including each element node's play start time and play duration.
The method specifically comprises: the node data in the paths before the current node are traversed first, in connection order; during the traversal, the path with the longest duration among the paths before the current element node is obtained and its duration is recorded as the play start time of the current element node;
during the traversal, the duration of the current element node is obtained and recorded as the play duration;
and when the termination node is reached, the duration of the longest-duration path before it is recorded as the total video duration.
The traversal process is shown in fig. 5. In the figure, the number before the slash above each node is the longest duration of the paths before that node, and the number after the slash is the node's actual duration. The total video duration calculated in the figure is 23 s, and every element node's play start time relative to the timeline (i.e., the longest duration of the paths before it) and play duration have been calculated.
The longest path duration before the current element node is the maximum, over all nodes connected immediately before it, of each such predecessor's own longest path duration plus that predecessor's duration.
Specifically, for example: the start node before the 'headline' has a presentation duration of 0, so the time before the slash is 0 s, and the presentation duration of the element node 'headline', as the current node, is 3 s;
the 'loop start' node is a logic node and occupies no time, so the time before its slash is the longest duration of the paths before it, 3 s, and the time after the slash is 0 s; the remaining nodes are calculated by analogy.
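The longest-path rule worked through above can be sketched with a topological traversal: each node's start time is the maximum, over its predecessors, of the predecessor's start time plus its duration, and the total duration is the largest finish time. This is a hypothetical Python sketch (the patent specifies the rule, not an implementation), with node names chosen to echo the example:

```python
from graphlib import TopologicalSorter

def compute_timeline(durations, preds):
    """durations: node -> seconds (0 for logic nodes);
    preds: node -> list of nodes connected immediately before it.
    Returns (play start time per node, total video duration)."""
    start = {}
    # static_order() yields predecessors before successors
    for n in TopologicalSorter(preds).static_order():
        start[n] = max((start[p] + durations[p] for p in preds.get(n, [])),
                       default=0.0)
    total = max(start[n] + durations[n] for n in durations)
    return start, total

# Echoing the example: a start node (0 s) and "headline" (3 s) precede
# a zero-duration "loop start" logic node; "date_text" is a 2 s sibling.
durations = {"start": 0.0, "headline": 3.0, "date_text": 2.0, "loop_start": 0.0}
preds = {"headline": ["start"], "date_text": ["start"],
         "loop_start": ["headline", "date_text"]}
starts, total = compute_timeline(durations, preds)
```

Here `loop_start` begins at 3 s because the headline path (0 + 3) is longer than the date-text path (0 + 2), exactly the "longest path before the node" rule.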
It should be noted that the "node" described in embodiment 1 is a generic term covering all the different types of nodes; it does not refer specifically to any element node or logic node.
Example 2:
a dynamic video editing system comprises a node editor and a video editor, wherein the node editor is used for editing element nodes and logic nodes to generate directed graph data;
and the video editor is used for calling the directed graph data and the video elements, generating the video element directed graph data and then the video element weighted directed graph data, calculating the presentation durations to generate timeline data, and generating a video according to the timeline data.
Based on the above system, this embodiment further discloses a method of using the dynamic video editing system, which specifically comprises:
1. Template design stage: add the required element nodes and logic nodes in the editing interface of the node editor, build the required video element presentation logic with node connecting lines, and generate and store the video node connecting-line directed graph data as a video template. The content data of the video elements in the template (such as voice and dynamic charts) is allowed to change, and different incoming data produce different renderings, achieving dynamic video rendering.
2. Acquire the data, voice and other content required by the video elements (such as dynamic charts), and send them, together with the generated video node connecting-line directed graph data (i.e., the selected template), to the video editor.
3. The video editor parses the video node connecting-line directed graph data: it first traverses the data and, according to the logic of the logic nodes, generates the video element directed graph data that actually needs to be presented.
4. Assemble the video element directed graph data with the video elements, determine the real durations of all dynamic video elements, calculate the time positions of all element nodes and the total video duration in combination with the video directed graph data, and generate complete video timeline data; finally the video editor generates the final video from the video timeline data.
The solution disclosed in this embodiment is based on Web technology and involves a frame-by-frame rendering technique (controlling frame-by-frame rendering of page elements, including animations, via JavaScript) and a page-to-video compositing technique (taking frame-by-frame screenshots of a page that supports frame-by-frame rendering and compositing them into a video).
This embodiment also discloses a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method disclosed in embodiment 1 when executing the program.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed.
The units may or may not be physically separate, and components displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present invention may be essentially or partially contributed to by the prior art, or all or part of the technical solution may be embodied in the form of a software product, where the software product is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions within the technical scope of the present invention are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for dynamic video editing, comprising the steps of:
inputting element nodes and logic nodes, and connecting the element nodes and the logic nodes to generate directed graph data;
acquiring video elements, and generating video element directed graph data by combining the directed graph data;
generating video element weighted directed graph data containing node display duration according to the video element directed graph data and the video elements;
according to the video element weighted directed graph data, calculating time position information and total video duration of each element node to generate video time axis data;
synthesizing a final video according to the video time axis data;
traversing the video element weighted directed graph data, obtaining the path with the longest duration among the paths before the current element node, and recording that duration as the play start time of the current element node; obtaining the duration of the current element node and recording it as the play duration;
and when traversing to the termination node, recording the duration of the longest-duration path before it as the total video duration.
2. The method for dynamic video editing according to claim 1,
the directed graph data comprises node data and connecting line data, the node data comprises element nodes and logic nodes, all the element nodes and the logic nodes are stored in a node array, all the connecting lines are stored in a node connecting line array, and the video node directed graph data is generated;
and analyzing the directed graph data, traversing the video node directed graph data from a starting node in the node array through the connecting line data, and generating the video element directed graph data with the video element showing logical relation according to the logic of the logical node.
3. A dynamic video editing method according to claim 1 or 2,
and assembling the video elements according to the video element directed graph data, acquiring the time length of all the video elements, calculating the actual time length of each element node after the video elements are assembled, and generating the video element weighted directed graph data containing the node display time length.
4. A method for dynamic video editing as claimed in claim 2, wherein during the traversal, the node data in the paths before the current node are traversed first, in connection order, before the current node is visited.
5. The dynamic video editing method according to claim 1, wherein the logical nodes include a start node, an end node, a condition judgment node and a loop node,
the start node has one or more outgoing connecting lines and no incoming connecting lines;
the termination node has one or more incoming connecting lines, and the last node of the video is the termination node;
the condition judgment node is used for judging whether the element nodes behind the node are displayed or not;
and the circulation nodes comprise a circulation starting node and a circulation ending node and are used for circulating the element node data in the circulation starting node and the circulation ending node.
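The logic-node semantics of claim 5 can be illustrated by flattening a node chain into an ordered element list: a condition node decides whether the element it guards is kept, and a loop start/end pair repeats the body between them. The sketch below is restricted to linear chains with a single successor per node — a simplifying assumption, as are all names:

```python
def expand(succ, kinds, start, context):
    """Flatten a node chain into an ordered element list.

    succ: node id -> next node id; kinds: node id -> "start", "end",
    "element", "condition", "loop_start" or "loop_end";
    context["conditions"]: condition node id -> bool;
    context["loops"]: loop start node id -> repeat count.
    """
    order, cur = [], start
    while cur is not None and kinds[cur] != "end":
        kind = kinds[cur]
        if kind == "element":
            order.append(cur)
            cur = succ.get(cur)
        elif kind == "condition":
            nxt = succ.get(cur)
            if not context["conditions"].get(cur, True):
                nxt = succ.get(nxt)          # skip the guarded element
            cur = nxt
        elif kind == "loop_start":
            # collect nodes up to the matching loop_end, then repeat them
            body, walker = [], succ[cur]
            while kinds[walker] != "loop_end":
                body.append(walker)
                walker = succ[walker]
            order.extend(body * context["loops"].get(cur, 1))
            cur = succ.get(walker)
        else:                                # start node: pass through
            cur = succ.get(cur)
    return order
```

Logic nodes never appear in the output themselves, which matches claim 6's statement that their presentation duration is 0.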
6. A dynamic video editing method according to claim 1 or 2,
the element node data comprises time information, the time information comprising a delay relative to the previous node and the video element presentation duration of the current node, where the presentation duration may be dynamic; the presentation duration of logic node data is 0.
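The time information of claim 6 — a delay relative to the previous node plus a possibly dynamic presentation duration — might be modeled as below. The `None`-marks-dynamic convention and all names are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimeInfo:
    delay: float = 0.0                 # delay relative to the previous node (s)
    duration: Optional[float] = None   # None = dynamic duration, resolved
                                       # from the assembled media

def effective_duration(info: TimeInfo, media_duration: float) -> float:
    """Resolve a node's presentation duration at assembly time."""
    return media_duration if info.duration is None else info.duration
```

A dynamic node thus inherits its duration from the media placed on it, while a fixed node keeps the duration set in the editor.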
7. A dynamic video editing system is characterized by comprising a node editor and a video editor,
the node editor is used for editing element nodes and logic nodes to generate directed graph data;
and the video editor is used for calling the directed graph data and the video elements, generating the video element directed graph data and then the video element weighted directed graph data, calculating the presentation durations to generate timeline data, and generating a video according to the timeline data.
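The division of labor in claim 7 can be sketched as a three-stage pipeline inside the video editor. The three callables below are stand-ins for the patent's assembly, duration-calculation and synthesis steps, not a real API:

```python
class VideoEditor:
    """Pipeline sketch: directed graph + elements -> timeline -> video."""

    def __init__(self, build_weighted_graph, compute_timeline, encode):
        self.build = build_weighted_graph    # claim 3: assemble + weight
        self.time = compute_timeline         # claim 1: longest-path timing
        self.encode = encode                 # final video synthesis

    def run(self, graph_data, elements):
        weighted = self.build(graph_data, elements)   # weighted digraph data
        timeline = self.time(weighted)                # timeline data
        return self.encode(timeline, elements)        # output video
```

The node editor's only job is to produce `graph_data`; the video editor runs the rest, which is why claim 8 lets a user pick a stored graph and elements and get a video with no further interaction.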
8. The dynamic video editing system of claim 7,
a user edits and stores a plurality of directed graph data items with the node editor;
and the user selects video elements and the corresponding directed graph data in the video editor, and the video editor automatically generates and outputs the video.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method of any one of claims 1 to 6.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1-6 when executing the program.
CN202011539365.3A 2020-12-23 2020-12-23 Dynamic video editing method and system Active CN112637520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011539365.3A CN112637520B (en) 2020-12-23 2020-12-23 Dynamic video editing method and system


Publications (2)

Publication Number Publication Date
CN112637520A CN112637520A (en) 2021-04-09
CN112637520B true CN112637520B (en) 2022-06-21

Family

ID=75321681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011539365.3A Active CN112637520B (en) 2020-12-23 2020-12-23 Dynamic video editing method and system

Country Status (1)

Country Link
CN (1) CN112637520B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704539A (en) * 2021-09-09 2021-11-26 北京跳悦智能科技有限公司 Video sequence storage and search method and system, and computer equipment
CN116347005B (en) * 2023-04-10 2023-10-13 徐州三叉戟信息科技有限公司 Coal mine safety education method and system based on Internet interactive animation

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002300528A (en) * 2001-03-30 2002-10-11 Toshiba Corp Method and device for editing video stream
EP1684528A2 (en) * 2005-01-21 2006-07-26 STMicroelectronics, Inc. Spatio-temporal graph-segmentation encoding for multiple video streams
CN101963899A (en) * 2009-07-24 2011-02-02 华中师范大学 Logic cartoon platform system
CN105260170A (en) * 2015-07-08 2016-01-20 中国科学院计算技术研究所 Method and system for deducing sudden event situation based on case
CN106373170A (en) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 Video making method and video making device
CN106446820A (en) * 2016-09-19 2017-02-22 清华大学 Background feature point identification method and device in dynamic video editing
CN107491311A (en) * 2017-08-16 2017-12-19 广州视源电子科技股份有限公司 Generate method, system and the computer equipment of pagefile
CN110532427A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 A kind of visualization video generation method and system based on Form data
CN110582018A (en) * 2019-09-16 2019-12-17 腾讯科技(深圳)有限公司 Video file processing method, related device and equipment
CN111899322A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, animation rendering SDK, device and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9645727B2 (en) * 2014-03-11 2017-05-09 Sas Institute Inc. Drag and drop of graph elements




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230120

Address after: Room 430, cultural center, 460 Wenyi West Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Patentee after: XINHUA ZHIYUN TECHNOLOGY Co.,Ltd.

Patentee after: Xinhua fusion media technology development (Beijing) Co.,Ltd.

Address before: Room 430, cultural center, 460 Wenyi West Road, Xihu District, Hangzhou City, Zhejiang Province, 310012

Patentee before: XINHUA ZHIYUN TECHNOLOGY Co.,Ltd.
