CN112132931B - Processing method, device and system for templated video synthesis - Google Patents

Publication number: CN112132931B
Authority: CN (China)
Prior art keywords: slicing, task, material data, sample, data
Legal status: Active (assumed; not a legal conclusion)
Application number: CN202011048860.4A
Other languages: Chinese (zh)
Other versions: CN112132931A (en)
Inventors: 孙浩伟, 刘冕, 陈舟锋
Current Assignee
Xinhua Zhiyun Technology Co ltd
Original Assignee
Xinhua Zhiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinhua Zhiyun Technology Co ltd filed Critical Xinhua Zhiyun Technology Co ltd
Priority: CN202011048860.4A
Publication of CN112132931A
Application granted
Publication of CN112132931B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text

Abstract

The invention discloses a processing method, device and system for templated video synthesis. The processing method comprises: acquiring a video synthesis task, wherein the video synthesis task comprises a time axis and material data mapped to the time axis, the material data being divided into variable material data and template material data; slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks; and, when a slicing task contains no variable material data, matching the slicing task against sample slicing information in a preset database and, when the matching succeeds, extracting the slicing result corresponding to the matched sample slicing information from the database. For slicing tasks that contain no variable material data, the method, device and system first match the slicing task against sample slicing information and extract the slicing result from the database according to the matching result, thereby avoiding repeated execution of identical slicing tasks and improving video synthesis efficiency.

Description

Processing method, device and system for templated video synthesis
Technical Field
The present invention relates to the field of video encoding and decoding, and in particular to a processing method, apparatus and system for templated video synthesis.
Background
At present, a user can have a corresponding video generated simply by inserting the relevant materials into a preset video template, which lowers the threshold of video production and at the same time improves its efficiency.
To accelerate video synthesis, a distributed cluster is generally adopted and the video synthesis tasks are executed in parallel. The operation flow is as follows: the front end sends the video synthesis task to a back-end server; the back-end server slices the whole synthesis task according to a preset rule, for example at a preset slicing interval, and distributes the resulting subtasks to the service nodes; each service node executes its assigned subtasks and generates the corresponding video slices; finally, the video slices generated by all subtasks are merged into the final video.
Disclosure of Invention
To address the above defects in the prior art, the present invention provides a processing method, apparatus and system for templated video synthesis that avoid repeated encoding and decoding.
To solve the above technical problems, the invention adopts the following technical scheme:
a processing method for templated video synthesis comprises the following steps:
acquiring a video synthesis task, wherein the video synthesis task comprises a time axis and material data mapped with the time axis, and the material data is divided into variable material data and template material data;
slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks;
when a slicing task contains no variable material data, matching the slicing task against sample slicing information in a preset database; when the matching succeeds, extracting the slicing result corresponding to the matched sample slicing information from the database; otherwise, executing the slicing task to generate the corresponding slicing result, generating corresponding sample slicing information based on the slicing task, and adding the sample slicing information and the slicing result to the database.
As one possible implementation:
the material data comprises a material, a material type, an identifier, first time data and task configuration data;
the first time data is used for indicating the position of the material in the corresponding original material.
As one possible implementation:
when the slicing task contains no variable material data, extracting the identifier of each material data item in the slicing task to obtain an identifier set;
searching the database based on the identifier set to obtain a search result;
when sample slicing information with matching identifiers is retrieved, taking that sample slicing information as first sample slicing information and, based on the first time data, comparing each template material data item in the slicing task with the first sample slicing information to obtain a comparison result;
when first sample slicing information with consistent first time data is found, taking it as second sample slicing information and, based on the task configuration data, matching each template material data item in the slicing task against the second sample slicing information to obtain a matching result; and when the matching succeeds, extracting the slicing result corresponding to the second sample slicing information.
As one possible implementation:
when the matching fails, extracting the identifier, the first time data and the task configuration data of each material data item in the slicing task;
aggregating the identifiers to generate an index label;
and generating the corresponding sample slicing information based on the index label and the identifier, first time data and task configuration data of each material data item.
As one possible implementation:
the material data also comprises second time data, the second time data indicating the position of the material on the time axis;
searching the time axis for the regions in which variable material data appear, based on the material type and the second time data, and generating the corresponding variable regions;
and extracting the start and stop time points of each variable region, taking them as slicing points, and slicing the video synthesis task to obtain a plurality of slicing tasks.
As one possible implementation:
when the slicing task contains variable material data, executing the slicing task to generate a corresponding slicing result;
and summarizing all the slicing results to generate a composite video.
The invention also provides a processing device for templated video synthesis, which comprises:
an acquisition module, used for acquiring a video synthesis task, the video synthesis task comprising a time axis and material data mapped to the time axis, the material data being divided into variable material data and template material data;
the slicing module is used for slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks;
and an execution module, used for, when a slicing task contains no variable material data, matching the slicing task against each piece of sample slicing information in a preset database; when the matching succeeds, extracting the slicing result corresponding to the matched sample slicing information from the database; otherwise, executing the slicing task to generate the corresponding slicing result, generating corresponding sample slicing information based on the slicing task, and adding the sample slicing information and the slicing result to the database.
The invention also provides a processing system for templated video synthesis, which comprises a database, a client, an allocation node and a service node;
the client is used for generating a video synthesis task comprising a time axis, variable material data and template material data and sending the video synthesis task to the distribution node;
the distribution node is used for slicing the video synthesis task based on the time axis and each piece of variable material data to obtain a plurality of slicing tasks, distributing the slicing tasks that contain variable material data to the service nodes, and matching the slicing tasks that contain no variable material data against the sample slicing information in the preset database; when the matching succeeds, the slicing result corresponding to the matched sample slicing information is extracted from the database, and otherwise the slicing task is distributed to a service node;
the service node is used for executing the distributed slicing task, generating a corresponding slicing result, generating corresponding sample slicing information based on the slicing task when the slicing task does not contain variable material data, and adding the sample slicing information and the slicing result into the database.
As one possible implementation:
the client is used for marking the material uploaded by the user as a variable material, marking the material called from a preset template material library as a template material, and collecting operation data of the user to generate corresponding variable material data or template material data.
The invention also proposes a computer device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements any of the methods described above when executing the program.
Owing to the above technical scheme, the invention achieves the following remarkable technical effects:
the invention segments the video synthesis task based on the distribution condition of the variable material data on the time axis, firstly judges whether the segmentation task corresponding to the pure template material data is a repeated segmentation task or not, and directly extracts the corresponding segmentation result from the preset database when the segmentation task belongs to the repeated segmentation task so as to avoid repeatedly executing the same segmentation task, thereby effectively improving the video synthesis efficiency.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for processing templated video synthesis according to the present invention;
FIG. 2 is a flow chart of the matching of sample fragment information in step S300 of the present invention;
FIG. 3 is a schematic illustration of a video composition task at step S200;
fig. 4 is a schematic block diagram of a processing device for templated video synthesis according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following examples, which are illustrative of the present invention and are not intended to limit the present invention thereto.
Embodiment 1, a processing method for templated video synthesis, includes the following steps:
S100, acquiring a video synthesis task, wherein the video synthesis task comprises a time axis and material data mapped to the time axis, and the material data is divided into variable material data and template material data;
the variable material data is derived from original materials uploaded by a user, the template material data is derived from a preset template material library, and the user can reserve the template materials in the corresponding video template or select other template materials from the preset template material library.
S200, slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks;
and S300, when the slicing task does not contain variable material data, matching the slicing task with sample slicing information in a preset database, and when the matching is successful, extracting slicing results corresponding to the matched sample slicing information from the database, otherwise, executing the slicing task to generate corresponding slicing results, generating corresponding sample slicing information based on the slicing task, and adding the sample slicing information and the slicing results into the database.
S400, when the slicing task contains variable material data, executing the slicing task to generate a corresponding slicing result;
S500, aggregating all the slicing results to generate the synthesized video.
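As a rough illustration, steps S100 to S500 can be sketched as the following loop. All function names and data shapes here are assumptions for illustration only; they do not appear in the patent, and a real implementation would render actual video slices rather than strings.

```python
def slice_by_variable_regions(task):
    # S200 stand-in: assume the task already carries its slice tasks,
    # each slice task being the list of material items active in it.
    return task["slices"]

def contains_variable(slice_task):
    return any(m["type"] == "variable" for m in slice_task)

def sample_key(slice_task):
    # Identifier set of a template-only slice task, used for cache lookup.
    return frozenset(m["id"] for m in slice_task)

def process(task, db):
    results = []
    for st in slice_by_variable_regions(task):
        if not contains_variable(st):            # S300: template-only slice
            key = sample_key(st)
            if key in db:
                results.append(db[key])          # cache hit: reuse result
                continue
            result = "render({})".format(",".join(sorted(key)))
            db[key] = result                     # cache miss: store result
        else:
            result = "render_variable"           # S400: always execute
        results.append(result)
    return results                               # S500 would merge these
```

Note how only template-only slices touch the cache; slices containing user-uploaded variable materials are always rendered.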
A templated video contains a large number of identical template segments. During video synthesis, the slicing tasks corresponding to these template segments are executed repeatedly, wasting computation on the service nodes and greatly reducing the video synthesis efficiency of the whole cluster.
In this embodiment, the video synthesis task is sliced based on the distribution of the variable material data on the time axis (see fig. 1). For a slicing task consisting purely of template material data, it is first determined whether the task is a repeated slicing task; if it is, the corresponding slicing result is extracted directly from the preset database, avoiding repeated execution of the same slicing task and effectively improving video synthesis efficiency.
The material data comprises a material, a material type, an identifier, first time data and task configuration data;
the materials comprise video, audio and images;
the material types include template materials and variable materials, the identification of the template materials has uniqueness, the identification of the variable materials has uniqueness in the corresponding video synthesis task, and in the embodiment, the identification adopts URL (Uniform Resource Locator ).
The first time data comprises a first starting time stamp and a first ending time stamp, and the first time data is used for indicating the position of the material in the corresponding original material;
the task configuration data includes layout, transparency, and/or rendering hierarchy information.
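One possible in-memory shape for a material data item carrying the fields just listed; all field and class names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class MaterialData:
    material: str            # path or URL of the underlying material
    material_type: str       # "template" or "variable"
    identifier: str          # URL; globally unique for template materials
    first_start: float       # first time data: start inside the original
    first_end: float         #   material, in seconds
    config: dict = field(default_factory=dict)  # layout, transparency,
                                                # rendering-layer info
```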
Further, referring to fig. 2, in step S300, when the slicing task contains no variable material data, it is matched against each piece of sample slicing information in the preset database; when the matching succeeds, the slicing result corresponding to the matched sample slicing information is extracted from the database. The specific steps are as follows:
S311, extracting the identifier of each material data item in the slicing task to obtain an identifier set;
Since the slicing task contains no variable material data, the extracted identifiers are the identifiers of the template material data items, which are globally unique.
S312, searching the database based on the identifier set to obtain a search result;
That is, the database is searched for whether a slicing result synthesized from the same materials already exists.
S313, when sample slicing information with matching identifiers is retrieved, it is taken as first sample slicing information, and each template material data item in the slicing task is compared with the first sample slicing information based on the first time data, yielding a comparison result. Specifically:
the first time data corresponding to each identifier is extracted in turn from the first sample slicing information and the slicing task; when the first time data for every identifier is identical, the first sample slicing information is taken as second sample slicing information for further matching;
S314, when first sample slicing information with consistent first time data is found, it is taken as second sample slicing information, and each template material data item in the slicing task is matched against the second sample slicing information based on the task configuration data, yielding a matching result; when the matching succeeds, the slicing result corresponding to the second sample slicing information is extracted.
Specifically:
the task configuration data corresponding to each identifier in the slicing task is extracted in turn and matched in full against the second sample slicing information; when the matching succeeds, the slicing result corresponding to the second sample slicing information is extracted.
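A condensed sketch of this three-stage match: identifier set (S311/S312), first time data (S313), then full task-configuration comparison (S314). The dictionary shapes are assumptions; the patent does not prescribe a storage format.

```python
def match_slice(slice_task, database):
    # S311: identifier set of the template-only slice task.
    ids = frozenset(m["id"] for m in slice_task)
    # S312: retrieve candidates whose identifier set matches exactly.
    candidates = [s for s in database if frozenset(s["materials"]) == ids]
    for cand in candidates:
        mats = cand["materials"]
        # S313: every identifier's first time data must be identical.
        if all(mats[m["id"]]["time"] == m["time"] for m in slice_task):
            # S314: full match on the task configuration data.
            if all(mats[m["id"]]["config"] == m["config"] for m in slice_task):
                return cand["result"]
    return None  # miss at any stage: the slice task must be executed
```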
When no hit is obtained in the search step, the comparison step or the matching step, the match is judged to have failed; the slicing task is then executed and the corresponding slicing result is generated.
Because a user may upload variable materials when producing a video, and may also cut the provided template materials or adjust their configuration information, directly returning a slicing result based only on the search result of step S312 could make the finally synthesized video inconsistent with the video the user actually produced.
By also matching on the first time data and the task configuration data, this embodiment avoids falsely judging a slicing task as repeated, improving synthesis efficiency while guaranteeing the video synthesis result.
Further, in step S300, the specific steps of generating corresponding sample slicing information based on the slicing task and adding the sample slicing information and the slicing result to the database are as follows:
S321, extracting the identifier, first time data and task configuration data of each material data item in the slicing task;
S322, aggregating the identifiers to generate an index label;
S323, generating the corresponding sample slicing information based on the index label and the identifier, first time data and task configuration data of each material data item.
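Steps S321 to S323 might look as follows. Joining the sorted identifiers into one string is only one plausible way to "aggregate" them into an index label; the patent does not fix the encoding, so treat it as an assumption.

```python
def build_sample_info(slice_task, result):
    # S321: collect identifier, first time data and config of each item.
    ids = sorted(m["id"] for m in slice_task)
    # S322: aggregate the identifiers into an index label (assumed encoding).
    index_label = "|".join(ids)
    # S323: assemble the sample slicing information plus its slicing result.
    return {
        "index": index_label,
        "materials": {m["id"]: {"time": m["time"], "config": m["config"]}
                      for m in slice_task},
        "result": result,
    }
```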
When the database contains no slicing result for an identical slicing task, this embodiment executes the slicing task and updates the database with the slicing task and its slicing result, so that the result can be fetched directly when the same slicing task is issued again.
To prevent a mass of sample slicing information and slicing results from slowing down matching, those skilled in the art may set their own database cleaning rules, for example clearing sample slicing information that has been stored for 4 hours without being called. One implementation is as follows:
record the time at which the sample slicing information is written to the database, and count the number of times its corresponding slicing result is called;
when the difference between the current time and the write time exceeds a preset expiry threshold, the sample slicing information is judged expired; its call count is then compared with a preset count threshold. When the call count is below the threshold, the sample slicing information and its slicing result are cleared; otherwise the write time is updated to the current time and the call count is reset.
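The expiry rule above can be sketched as a periodic sweep over the cache. The concrete thresholds and field names below are illustrative; the patent only fixes the logic (expired and rarely called: delete; expired but frequently called: refresh and reset).

```python
EXPIRY_SECONDS = 4 * 3600   # example expiry threshold (4 hours)
CALL_THRESHOLD = 2          # example minimum call count to survive expiry

def sweep(entries, now):
    kept = []
    for e in entries:
        if now - e["written_at"] <= EXPIRY_SECONDS:
            kept.append(e)                  # not yet expired
        elif e["calls"] >= CALL_THRESHOLD:
            e["written_at"] = now           # expired but hot: refresh
            e["calls"] = 0                  # reset the call count
            kept.append(e)
        # else: expired and cold, dropped together with its slice result
    return kept
```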
Further, the material data also comprises second time data, the second time data indicating the position of the material on the time axis;
In step S200, the specific steps of slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks are as follows:
S210, searching the time axis for the regions in which variable material data appear, based on the material type and the second time data, and generating the corresponding variable regions;
S220, extracting the start and stop time points of each variable region, taking them as slicing points, and slicing the video synthesis task to obtain a plurality of slicing tasks.
In the actual slicing process, those skilled in the art may further split a slicing task as needed. For example, when a slicing task contains variable material data, it may be further split based on its length: when its duration exceeds a preset length threshold, the task is split into equal parts until every resulting slicing task is no longer than the threshold.
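Steps S210/S220 together with the optional further split reduce to interval arithmetic. Representing a slice as a (start, end) pair in seconds is an assumption for illustration; the patent does not prescribe a representation.

```python
def slice_timeline(total_len, variable_regions, max_len=None):
    # S220: use the start/stop points of every variable region as cut points.
    points = {0.0, float(total_len)}
    for start, end in variable_regions:
        points.update((float(start), float(end)))
    cuts = sorted(points)
    slices = list(zip(cuts, cuts[1:]))
    if max_len is None:
        return slices
    # Optional refinement: split any over-long slice into equal parts.
    refined = []
    for start, end in slices:
        parts = 1
        while (end - start) / parts > max_len:
            parts += 1
        step = (end - start) / parts
        refined.extend((start + i * step, start + (i + 1) * step)
                       for i in range(parts))
    return refined
```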
Referring to fig. 3, template material A, template material B, variable material a and variable material b are video materials, while template material D and template material E are audio materials. The materials are arranged in time order, and slicing is performed at the start and stop time points of variable material a and variable material b, yielding the corresponding slicing tasks.
In fig. 3, slicing task 1, slicing task 3 and slicing task 5 contain no variable materials. Taking slicing task 1 as an example, the specific steps for obtaining its slicing result are as follows:
extract the identifiers of template material A and template material D and retrieve the corresponding sample slicing information from the database, obtaining several pieces of first sample slicing information;
judge whether the first time data and task configuration information of template material A and template material D in each piece of first sample slicing information are consistent with those in slicing task 1. The database may, for instance, already hold a slicing result in which template material A and template material D start playing simultaneously but under different settings; the matching steps on first time data and task configuration information rule such results out, effectively avoiding a wrong selection.
Embodiment 2, a processing apparatus for templated video synthesis, as shown in fig. 4, includes:
the acquisition module 100, configured to acquire a video synthesis task, the video synthesis task comprising a time axis and material data mapped to the time axis, the material data being divided into variable material data and template material data;
the slicing module 200, configured to slice the video synthesis task based on the time axis and each piece of variable material data to obtain a plurality of slicing tasks;
and the execution module 300, configured to, when a slicing task contains no variable material data, match the slicing task against the sample slicing information in a preset database; when the matching succeeds, extract the slicing result corresponding to the matched sample slicing information from the database; otherwise, execute the slicing task to generate the corresponding slicing result, generate corresponding sample slicing information based on the slicing task, and add the sample slicing information and the slicing result to the database.
This embodiment is the apparatus embodiment corresponding to embodiment 1. Since it is substantially similar to embodiment 1, the description is relatively brief; for the relevant points, refer to the description of the method embodiment.
Embodiment 3, a processing system for templated video synthesis, comprising a database, a client, an allocation node, and a service node;
the client is used for generating a video synthesis task comprising a time axis, variable material data and template material data and sending the video synthesis task to the distribution node;
the distribution node is used for slicing the video synthesis task based on the time axis and each piece of variable material data to obtain a plurality of slicing tasks, distributing the slicing tasks that contain variable material data to the service nodes, and matching the slicing tasks that contain no variable material data against the sample slicing information in the preset database; when the matching succeeds, the slicing result corresponding to the matched sample slicing information is extracted from the database, and otherwise the slicing task is distributed to a service node;
the service node is used for executing the distributed slicing task, generating a corresponding slicing result, generating corresponding sample slicing information based on the slicing task when the slicing task does not contain variable material data, and adding the sample slicing information and the slicing result into the database.
Further, the client is configured to mark the material uploaded by the user as a variable material, mark the material called from a preset template material library as a template material, and collect operation data of the user to generate corresponding variable material data or template material data.
Embodiment 4, a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to embodiment 1 when executing the program.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are identical or similar the embodiments may be referred to one another.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, the specific embodiments described in the present specification may differ in terms of parts, shapes of components, names, and the like. All equivalent or simple changes of the structure, characteristics and principle according to the inventive concept are included in the protection scope of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions in a similar manner without departing from the scope of the invention as defined in the accompanying claims.

Claims (10)

1. A processing method for templated video synthesis, characterized by comprising the following steps:
acquiring a video synthesis task, wherein the video synthesis task comprises a time axis and material data mapped to the time axis, and the material data is divided into variable material data and template material data;
slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks;
when a slicing task does not contain variable material data, performing the following steps:
matching the slicing task against sample slicing information in a preset database;
when the matching succeeds, extracting from the database the slicing result corresponding to the matched sample slicing information;
and when the matching fails, executing the slicing task to generate a corresponding slicing result, generating corresponding sample slicing information based on the slicing task, and adding the sample slicing information and the slicing result to the database.
2. The processing method for templated video synthesis according to claim 1, wherein:
the material data comprises materials, material types, identifications, first time data and task configuration data;
the first time data is used for indicating the position of the material in the corresponding original material.
3. The processing method for templated video synthesis according to claim 2, wherein:
when the slicing task does not contain variable material data, extracting the identification of each material data in the slicing task to obtain an identification set;
searching the database based on the identification set to obtain a search result;
when sample slicing information with matching identifications is retrieved, taking the sample slicing information as first sample slicing information, and comparing, based on the first time data, each template material data in the slicing task with the first sample slicing information to obtain a comparison result;
when first sample slicing information with consistent first time data is obtained through the comparison, taking the first sample slicing information as second sample slicing information, and matching, based on the task configuration data, each template material data in the slicing task against the second sample slicing information to obtain a matching result; and when the matching succeeds, extracting the slicing result corresponding to the second sample slicing information.
4. The processing method for templated video synthesis according to claim 3, wherein:
when the matching fails, extracting the identification, the first time data and the task configuration data of each material data in the slicing task;
aggregating the identifications to generate an index tag;
and generating corresponding sample slicing information based on the index tag, the identification of each material data, the first time data and the task configuration data.
5. The processing method for templated video synthesis according to any one of claims 2 to 4, wherein:
the material data further comprises second time data, wherein the second time data is used for indicating the position of the material in the time axis;
searching, based on the material type and the second time data, for regions of the time axis in which variable material data appears, and generating corresponding variable regions;
and extracting the start and stop time points of each variable region, taking the start and stop time points as slicing points, and slicing the video synthesis task to obtain the plurality of slicing tasks.
6. The processing method for templated video synthesis according to any one of claims 1 to 4, wherein:
when a slicing task contains variable material data, executing the slicing task to generate a corresponding slicing result;
and combining all the slicing results to generate a synthesized video.
7. A processing apparatus for templated video synthesis, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a video synthesis task, the video synthesis task comprises a time axis and material data mapped with the time axis, and the material data is divided into variable material data and template material data;
the slicing module is used for slicing the video synthesis task based on the time axis and the variable material data to obtain a plurality of slicing tasks;
and the execution module is used for matching the slicing task with each piece of sample slicing information in a preset database when the slicing task does not contain variable material data, extracting slicing results corresponding to the matched sample slicing information from the database when the matching is successful, executing the slicing task when the matching is unsuccessful, generating corresponding slicing results, generating corresponding sample slicing information based on the slicing task, and adding the sample slicing information and the slicing results into the database.
8. A processing system for templated video synthesis, characterized by comprising a database, a client, a distribution node and a service node;
the client is used for generating a video synthesis task comprising a time axis, variable material data and template material data and sending the video synthesis task to the distribution node;
the distribution node is used for slicing the video synthesis task based on the time axis and each piece of variable material data to obtain a plurality of slicing tasks, distributing the slicing tasks containing variable material data to the service node, matching the slicing tasks not containing variable material data against sample slicing information in a preset database, extracting from the database the slicing results corresponding to the matched sample slicing information when the matching succeeds, and distributing the slicing tasks to the service node when the matching fails;
the service node is used for executing the distributed slicing task, generating a corresponding slicing result, generating corresponding sample slicing information based on the slicing task when the slicing task does not contain variable material data, and adding the sample slicing information and the slicing result into the database.
9. The processing system for templated video synthesis according to claim 8, wherein: the client is used for marking the material uploaded by the user as a variable material, marking the material called from a preset template material library as a template material, and collecting operation data of the user to generate corresponding variable material data or template material data.
10. A computer device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 6.
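The slicing-and-caching flow recited in claims 1, 3 and 5 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the `Material` and `SliceTask` structures, the string-based index tag, and all function names are hypothetical, and the preset database is stood in for by a plain dictionary.

```python
# Hypothetical sketch of claims 1, 3 and 5: split the timeline at the
# boundaries of variable-material regions, then reuse cached results
# for slices that contain only template material.
from dataclasses import dataclass


@dataclass(frozen=True)
class Material:
    ident: str        # identification of the original material
    variable: bool    # True for variable (e.g. user-uploaded) material
    start: float      # second time data: position on the time axis
    end: float
    config: str = ""  # task configuration data (e.g. effect settings)


@dataclass
class SliceTask:
    start: float
    end: float
    materials: tuple  # materials overlapping this slice


def slice_by_variable_regions(duration, materials):
    """Claim 5: take the start/stop points of every variable region
    as slicing points and cut the task into slicing tasks."""
    points = {0.0, duration}
    for m in materials:
        if m.variable:
            points.update((m.start, m.end))
    cuts = sorted(p for p in points if 0.0 <= p <= duration)
    return [
        SliceTask(a, b, tuple(m for m in materials
                              if m.start < b and m.end > a))
        for a, b in zip(cuts, cuts[1:])
    ]


def index_tag(task):
    """Claim 4: aggregate the identifications of a slice's materials
    into an index tag used as the lookup key."""
    return "|".join(sorted(m.ident for m in task.materials))


def render(task):
    # Stand-in for actually rendering the slice.
    return f"rendered[{task.start:.1f}-{task.end:.1f}]"


def process(duration, materials, cache):
    """Claim 1: slices with variable material are always rendered;
    template-only slices are matched against cached sample slicing
    information (identifications, time data, configuration) first."""
    results = []
    for task in slice_by_variable_regions(duration, materials):
        if any(m.variable for m in task.materials):
            results.append(render(task))  # rendered fresh every time
            continue
        key = (index_tag(task),
               tuple((m.start, m.end, m.config) for m in task.materials))
        if key not in cache:              # matching fails: render, store
            cache[key] = render(task)
        results.append(cache[key])        # matching succeeds: reuse
    return results
```

Under this sketch, two synthesis runs that share the same template-only slices reuse the cached slicing results, so only the slices touched by variable material are re-rendered.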
CN202011048860.4A 2020-09-29 2020-09-29 Processing method, device and system for templated video synthesis Active CN112132931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011048860.4A CN112132931B (en) 2020-09-29 2020-09-29 Processing method, device and system for templated video synthesis


Publications (2)

Publication Number Publication Date
CN112132931A CN112132931A (en) 2020-12-25
CN112132931B true CN112132931B (en) 2023-12-19

Family

ID=73844628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011048860.4A Active CN112132931B (en) 2020-09-29 2020-09-29 Processing method, device and system for templated video synthesis

Country Status (1)

Country Link
CN (1) CN112132931B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738558A (en) * 2021-01-19 2021-04-30 深圳市前海手绘科技文化有限公司 Distributed video synthesis method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101189658A (en) * 2005-02-08 2008-05-28 Landmark Digital Services LLC Automatic identification of repeated material in audio signals
CN104735468A (en) * 2015-04-03 2015-06-24 Beijing Weiyang Technology Co., Ltd. Method and system for synthesizing images into new video based on semantic analysis
CN108989885A (en) * 2017-06-05 2018-12-11 Tencent Technology (Shenzhen) Co., Ltd. Video file transcoding system, segmentation method, transcoding method and device
CN109391826A (en) * 2018-08-07 2019-02-26 Shanghai Qiyi Culture Communication Co., Ltd. Online video generation system and generation method thereof
CN109819179A (en) * 2019-03-21 2019-05-28 Tencent Technology (Shenzhen) Co., Ltd. Video clipping method and device
CN110266971A (en) * 2019-05-31 2019-09-20 Shanghai Mengyu Network Technology Co., Ltd. Short video creation method and system
CN110290425A (en) * 2019-07-29 2019-09-27 Tencent Technology (Shenzhen) Co., Ltd. Video processing method, device and storage medium
CN110532426A (en) * 2019-08-27 2019-12-03 Xinhua Zhiyun Technology Co., Ltd. Method and system for generating video by extracting multimedia material based on template
CN110856038A (en) * 2019-11-25 2020-02-28 Xinhua Zhiyun Technology Co., Ltd. Video generation method and system, and storage medium
CN111182367A (en) * 2019-12-30 2020-05-19 Suning Cloud Computing Co., Ltd. Video generation method and device and computer system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI592021B (en) * 2015-02-04 2017-07-11 Tencent Technology (Shenzhen) Co., Ltd. Method, device, and terminal for generating video


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex; Kuan Han, Haiguang Wen; Neuroimage; Vol. 198; 125-136 *
Design and development of an intelligent video database under a Web browser; Liu Lifei; Zhao Long; Applied Science and Technology; 41(06); 1-6 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant