CN107770626A - Video material processing method, video synthesis method, apparatus, and storage medium - Google Patents
Video material processing method, video synthesis method, apparatus, and storage medium Download PDF Info
- Publication number
- CN107770626A (application CN201711076478.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- material set
- effect parameter
- user interface
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
Abstract
This application discloses a video material processing method, a video synthesis method, an apparatus, and a storage medium. The video material processing method includes: obtaining a material set for a video to be synthesized and determining the attributes of the material set, where the material set includes multiple material elements, each material element includes at least one type of media content among pictures, text, audio, and video, and the attributes include the playing order and playing duration of each material element in the material set; determining an effect parameter corresponding to the material set, where the effect parameter corresponds to a video effect mode; and transmitting the material set and the effect parameter to a video synthesis server, so that the video synthesis server synthesizes the multiple material elements in the material set into a video in the corresponding video effect mode according to the effect parameter and the attributes of the material set.
Description
Technical field
This application relates to the field of video synthesis, and in particular to a video material processing method, a video synthesis method, an apparatus, and a storage medium.
Background technology
With the development of multimedia technology, video production has become widespread in everyday life. Video production recombines and encodes materials such as pictures, video, and audio to generate a video. At present, video production usually requires installing video production software on a personal computing device. Such software offers feature-rich video editing functions, but it is complex to operate.
Summary of the invention
Therefore, this application proposes a new video synthesis scheme to address the problem of reducing the operational complexity of video synthesis.
According to one aspect of the application, a video material processing method is proposed, including: obtaining a material set for a video to be synthesized and determining the attributes of the material set, where the material set includes multiple material elements, each material element includes at least one type of media content among pictures, text, audio, and video, and the attributes include the playing order and playing duration of each material element in the material set; determining an effect parameter corresponding to the material set, where the effect parameter corresponds to a video effect mode; and transmitting the material set and the effect parameter to a video synthesis server, so that the video synthesis server synthesizes the multiple material elements in the material set into a video in the corresponding video effect mode according to the effect parameter and the attributes of the material set.
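As a concrete illustration of the data a client would assemble and transmit under this scheme, the following Python sketch models a material set, its attributes, and the effect parameter. All names here (`MaterialElement`, `SynthesisRequest`, the field names) are hypothetical — the application does not prescribe a schema, only that each element carries media content of one or more types plus a playing duration, that the element order is the playing order, and that one effect parameter accompanies the set.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class MaterialElement:
    # Each element holds at least one of the four media types.
    picture: Optional[str] = None   # path or URL of a picture
    text: Optional[str] = None      # associated caption text
    audio: Optional[str] = None     # associated audio clip
    video: Optional[str] = None     # a video clip
    play_duration: float = 0.0      # playing duration in seconds

@dataclass
class MaterialSet:
    elements: List[MaterialElement] = field(default_factory=list)

    def playing_order(self) -> List[int]:
        # The order of `elements` is the playing-order attribute.
        return list(range(len(self.elements)))

@dataclass
class SynthesisRequest:
    # What the client transmits to the video synthesis server.
    material_set: MaterialSet
    effect_parameter: str           # identifies one video effect mode
```

A client would serialize a `SynthesisRequest` and send it to the server, which reads the playing order and durations from the set's attributes before rendering.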
In some embodiments, obtaining the material set for the video to be synthesized includes: providing a user interface for obtaining material elements, where the user interface includes at least one control, each corresponding to one of at least one media type, and the at least one media type includes at least one of text, picture, audio, and video; and, in response to an operation on any control in the user interface, obtaining media content corresponding to the media type of that control and using it as a media content item of a material element in the material set.
In some embodiments, obtaining media content in response to an operation on any control in the user interface includes: in response to an operation on a picture control in the user interface, obtaining a picture and using it as the picture content of a material element of the material set.
In some embodiments, obtaining media content in response to an operation on any control in the user interface also includes: in response to an operation on a text input control associated with the picture control, obtaining text information entered in association with the picture and using it as the text content of that material element.
In some embodiments, obtaining media content in response to an operation on any control in the user interface also includes: in response to an operation on an audio control associated with the picture control, obtaining audio information entered in association with the picture content and using it as the audio content of that material element.
In some embodiments, obtaining media content in response to an operation on any control in the user interface includes: in response to an operation on a video control in the user interface, obtaining a video clip and using it as the video content of a material element of the material set.
In some embodiments, obtaining the material set for the video to be synthesized includes: obtaining a piece of video; according to a predetermined video clipping algorithm, extracting at least one video clip from the video and generating a description of each clip; providing a user interface that displays the description of each clip, so that the user can select clips based on their descriptions; and, in response to a selection operation on at least one video clip, using each selected clip as the video content of a material element in the material set.
In some embodiments, extracting at least one video clip and generating its description according to the predetermined video clipping algorithm includes: determining at least one key image frame of the video; for each key image frame, extracting from the video a clip that contains the key frame, the clip including the corresponding audio segment; and performing speech-to-text recognition on the audio segment to obtain the corresponding text, from which the description of the clip is generated.
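The clipping flow of this embodiment can be sketched as follows. Key-frame detection and speech recognition are external services here and are injected as callables; the clip window length and the centering of clips around key frames are assumptions, since the application does not fix how the clip boundaries are chosen.

```python
def extract_clips(duration_s, key_frame_times, clip_len_s, transcribe):
    """For each key-frame time, cut a clip_len_s window around it and
    build the clip's description from a transcript of that window's audio.

    transcribe(start, end) is a hypothetical speech-to-text callable
    operating on the audio segment between the two timestamps.
    """
    clips = []
    for t in key_frame_times:
        # Center the clip on the key frame, clamped to the video bounds.
        start = max(0.0, t - clip_len_s / 2)
        end = min(duration_s, start + clip_len_s)
        text = transcribe(start, end)  # description from speech recognition
        clips.append({"start": start, "end": end, "description": text})
    return clips
```

The returned list is what the user-facing selection interface would display: one entry per key frame, each with a textual description derived from the audio.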
In some embodiments, determining the attributes of the material set includes: providing a user interface that presents a thumbnail for each material element in the material set, the thumbnails arranged in order in the corresponding display area of the user interface; adjusting the ordering of the elements in the material set in response to move operations on the thumbnails; and using the adjusted ordering as the playing order of the material set.
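The effect of dragging a thumbnail on the playing order reduces to a list move, sketched below; the function name and signature are illustrative only — the application does not specify how the reordering is represented.

```python
def move_element(order, src, dst):
    """Move the element at position src to position dst, as dragging a
    thumbnail would; the returned list is the new playing order.
    The input list is left unmodified."""
    order = list(order)          # copy so the caller's list is untouched
    item = order.pop(src)
    order.insert(dst, item)
    return order
```

For example, dragging the last thumbnail of four to the front turns the playing order `a, b, c, d` into `d, a, b, c`.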
In some embodiments, determining the effect parameter corresponding to the material set, where the effect parameter corresponds to a video effect mode, includes: providing a user interface containing multiple effect options, each corresponding to an effect parameter; in response to a preview operation on any of the options, showing a corresponding preview in the user interface; and, in response to a selection of any of the options, using the effect parameter of the selected option as the effect parameter corresponding to the material set.
In some embodiments, determining the attribute of each material element in the material set includes: when a material element includes picture content, using the playing duration of the picture content as the playing duration of that element; and when a material element includes video content, using the playing duration of the video content as the playing duration of that element.
In some embodiments, the method further comprises sending a video synthesis request to the video synthesis server, so that the server, in response to the request, synthesizes the multiple material elements in the material set into a video in the corresponding video effect mode.

According to another aspect of the application, a video synthesis method is proposed, including: obtaining from a video material client a material set for a video to be synthesized and an effect parameter for the material set, where the material set includes multiple material elements, each material element includes at least one type of media content among pictures, text, audio, and video, the attributes of the material set include the playing order and playing duration of each material element, and the effect parameter corresponds to a video effect mode; and synthesizing the multiple material elements in the material set into a video in that effect mode according to the effect parameter and the attributes of the material set.
In some embodiments, when a material element in the material set includes picture content and corresponding text content, the method also includes: generating voice information corresponding to the text content; generating caption information corresponding to the voice information; and adding the voice information and the caption information to the video.
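The caption side of this step can be sketched as plain SRT subtitle generation; speech synthesis itself is an external service and is represented here only by the length of the generated voice clip. Using the SRT format is an assumption — the application does not name a subtitle format.

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def caption_entry(index, start_s, voice_len_s, text):
    """One SRT caption entry spanning the generated voice clip, so the
    on-screen caption stays visible exactly while the voice plays."""
    start = srt_timestamp(start_s)
    end = srt_timestamp(start_s + voice_len_s)
    return f"{index}\n{start} --> {end}\n{text}\n"
```

The server would emit one such entry per picture-plus-text element, with `start_s` accumulated from the playing durations of the preceding elements.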
In some embodiments, synthesizing the material set into a video in the video effect mode according to the effect parameter includes: standardizing the material set so that each material element is converted into a predetermined format, the predetermined format including a coding format, a picture playback frame rate, and a picture size; and synthesizing the standardized material set into the video according to the effect parameter.
In some embodiments, synthesizing the multiple material elements in the material set into a video in the video effect mode according to the effect parameter and the attributes of the material set includes: determining, based on multiple video synthesis scripts executed in a predetermined video synthesis application, multiple rendering stages corresponding to the effect parameter, where each video synthesis script corresponds to one video synthesis effect, each rendering stage includes at least one of the multiple video synthesis scripts, and the rendering result of each rendering stage is the input of the next rendering stage; and rendering the material set based on the multiple rendering stages to generate the video.
In some embodiments, the video effect mode includes the video transition mode between adjacent material elements.
According to another aspect of the application, a video material processing apparatus is provided, including a material obtaining unit, an effect determination unit, and a transmission unit. The material obtaining unit obtains the material set of a video to be synthesized and determines the attributes of the material set; the material set includes multiple material elements, each material element includes at least one type of media content among pictures, text, audio, and video, and the attributes include the playing order and playing duration of each material element in the material set. The effect determination unit determines the effect parameter corresponding to the material set; the effect parameter corresponds to a video effect mode. The transmission unit transmits the material set and the effect parameter to a video synthesis server, so that the server synthesizes the multiple material elements in the material set into a video in the corresponding video effect mode according to the effect parameter and the attributes of the material set.
In some embodiments, the material obtaining unit obtains the material set of the video to be synthesized as follows: it provides a user interface for obtaining material elements, the user interface including at least one control, each corresponding to one of at least one media type, where the at least one media type includes at least one of text, picture, audio, and video; in response to an operation on any control in the user interface, it obtains media content corresponding to the media type of that control and uses it as a media content item of a material element in the material set.
In some embodiments, the material obtaining unit obtains media content in response to an operation on any control as follows: in response to an operation on a picture control in the user interface, it obtains a picture and uses it as the picture content of a material element of the material set.
In some embodiments, the material obtaining unit is further configured to: in response to an operation on a text input control associated with the picture control, obtain text information entered in association with the picture content and use it as the text content of that material element.
In some embodiments, the material obtaining unit is further configured to: in response to an operation on an audio control associated with the picture control, obtain audio information entered in association with the picture content and use it as the audio content of that material element.
In some embodiments, the material obtaining unit obtains media content in response to an operation on any control as follows: in response to an operation on a video control in the user interface, it obtains a video clip and uses it as the video content of a material element of the material set.
In some embodiments, the material obtaining unit obtains the material set of the video to be synthesized as follows: it obtains a piece of video; according to a predetermined video clipping algorithm, it extracts at least one video clip from the video and generates a description of each clip; it provides a user interface that displays the description of each clip, so that the user can select clips based on their descriptions; and, in response to a selection operation on at least one video clip, it uses each selected clip as the video content of a material element in the material set.
In some embodiments, the material obtaining unit extracts at least one video clip and generates its description according to the predetermined video clipping algorithm as follows: it determines at least one key image frame of the video; for each key image frame, it extracts from the video a clip containing that key frame, the clip including the corresponding audio segment; it performs speech-to-text recognition on the audio segment to obtain the corresponding text; and it generates the description of the clip from that text.
In some embodiments, the material obtaining unit determines the attributes of the material set as follows: it provides a user interface presenting a thumbnail for each material element in the material set, the thumbnails arranged in order in the corresponding display area of the user interface; it adjusts the ordering of the elements in the material set in response to move operations on the thumbnails; and it uses the adjusted ordering as the playing order of the material set.
In some embodiments, the material obtaining unit determines the attribute of each material element in the material set as follows: when a material element includes picture content, it uses the playing duration of the picture content as the playing duration of that element; when a material element includes video content, it uses the playing duration of the video content as the playing duration of that element.
In some embodiments, the effect determination unit determines the effect parameter corresponding to the material set as follows: it provides a user interface containing multiple effect options, each corresponding to an effect parameter; in response to a preview operation on any of the options, it shows a corresponding preview in the user interface; and, in response to a selection of any of the options, it uses the effect parameter of the selected option as the effect parameter corresponding to the material set.
According to another aspect of the application, a video synthesis apparatus is provided, including: a communication unit, which obtains from a video material client a material set for a video to be synthesized and the effect parameter for the material set, where the material set includes multiple material elements, each material element includes at least one type of media content among pictures, text, audio, and video, the attributes of the material set include the playing order and playing duration of each material element, and the effect parameter corresponds to a video effect mode; and a video synthesis unit, which synthesizes the multiple material elements in the material set into a video in the video effect mode according to the effect parameter and the attributes of the material set.
In some embodiments, the video synthesis apparatus also includes a speech synthesis unit, a caption generation unit, and an adding unit. When a material element in the material set includes picture content and corresponding text content, the speech synthesis unit generates voice information corresponding to the text content, the caption generation unit generates caption information corresponding to the voice information, and the adding unit adds the voice information and the caption information to the video.
In some embodiments, the video synthesis unit synthesizes the material set into a video in the video effect mode according to the effect parameter as follows: it standardizes the material set so that each material element is converted into a predetermined format, the predetermined format including a coding format, a picture playback frame rate, and a picture size; and it synthesizes the standardized material set into the video according to the effect parameter.
In some embodiments, the video synthesis unit synthesizes the multiple material elements in the material set into a video in the video effect mode according to the effect parameter and the attributes of the material set as follows: based on multiple video synthesis scripts executed in a predetermined video synthesis application, it determines multiple rendering stages corresponding to the effect parameter, where each video synthesis script corresponds to one video synthesis effect, each rendering stage includes at least one of the scripts, and the rendering result of each rendering stage is the input of the next rendering stage; it then renders the material set based on the multiple rendering stages to generate the video. The video effect mode includes the video transition mode between adjacent material elements.
According to another aspect of the application, a computing device is provided, including one or more processors, a memory, and one or more programs. The programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the methods of this application.
According to another aspect of the application, a storage medium is provided that stores one or more programs. The one or more programs include instructions which, when executed by a computing device, cause the computing device to perform the methods of this application.
In summary, the video material processing scheme of this application allows quick, simple content selection in a user interface (such as the user interfaces of Figs. 3A to 3G), so that the material set of a video to be synthesized can be obtained easily. In particular, the scheme can automatically clip a video into video clips with corresponding descriptions, so that the user can quickly determine the content of each clip from its description and select clips. In addition, the scheme can intuitively present previews (such as effect animations) of the various video effect modes to the user, making it easy to choose the effect mode of the video to be synthesized and sparing the user complex video-effect operations on the local computing device. On this basis, the video is synthesized by the video synthesis server, which greatly improves the user experience.
Brief description of the drawings
To illustrate the technical solutions in the examples of this application more clearly, the accompanying drawings needed in the following description are briefly introduced. Obviously, the drawings described below are only some examples of the application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of the application;
Fig. 2 shows a flowchart of a video material processing method 200 according to some embodiments of the application;
Fig. 3A shows a schematic user interface for obtaining picture content according to an embodiment of the application;
Fig. 3B shows a schematic interface displaying a picture according to an embodiment;
Fig. 3C shows a schematic diagram of obtaining audio information according to an embodiment of the application;
Fig. 3D shows a user interface for generating video clips according to an embodiment of the application;
Fig. 3E shows an editing interface for a video clip;
Fig. 3F shows a user interface for adjusting the playing order according to an embodiment of the application;
Fig. 3G shows a user interface for determining the effect parameter according to an embodiment of the application;
Fig. 4 shows a flowchart of a video synthesis method 400 according to some embodiments of the application;
Fig. 5 shows a video rendering process according to an embodiment of the application;
Fig. 6 shows a flowchart of a video synthesis method 600 according to some embodiments of the application;
Fig. 7 shows a schematic diagram of a video material processing apparatus 700 according to some embodiments of the application;
Fig. 8 shows a schematic diagram of a video synthesis apparatus 800 according to some embodiments of the application;
Fig. 9 shows a schematic diagram of a video synthesis apparatus 900 according to some embodiments of the application; and
Fig. 10 shows a structural diagram of a computing device.
Detailed description of embodiments
The technical solutions in the examples of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described examples are only some of the examples of this application rather than all of them. All other examples obtained by those of ordinary skill in the art from the examples in this application without creative effort fall within the scope of protection of this application.
Fig. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of the application. As shown in Fig. 1, the application scenario 100 includes a computing device 110 and a server 120. The computing device 110 may be implemented as any of various terminal devices such as a desktop computer, a laptop, a tablet, a mobile phone, or a handheld device, but is not limited to these. The server 120 may be implemented as a hardware-independent server, a virtual server, a distributed cluster, or another device resource, but is not limited to these. Various applications, such as application 111, may reside on the computing device 110. Application 111 can obtain the video material of a video to be synthesized and transfer it to the server 120, so that the server 120 can synthesize the corresponding video from the received material. The server 120 can also transmit the synthesized video to the computing device 110. Here, application 111 may be implemented as a material processing application, a browser, or the like; the application is not limited in this respect. The video material processing method is described below with reference to Fig. 2.
Fig. 2 shows a flow chart of a video material processing method 200 according to some embodiments of the present application. The method 200 may, for example, be performed in the application 111, but is not limited thereto. Here, the application 111 may be implemented as a browser or a material processing application. In addition, the application 111 may also be implemented as a component of an instant messaging application (e.g. QQ or WeChat), a social networking application, a video application (e.g. Tencent Video) or a news client.
As shown in Fig. 2, the method 200 includes step S201: obtaining a material set of a video to be synthesized and determining the attributes of the material set. Here, the material set may include multiple material elements. Each material element includes at least one of picture, text, audio and video media content. The attributes of the material set include the playing order and playing duration of each material element in the set. According to some embodiments of the present application, a user interface for obtaining material elements is provided in step S201. The user interface may include at least one control, each corresponding to one of at least one media type. Here, a control is a view object through which the user interface interacts with the user, for example an input box, a drop-down selection box or a button. The media types include, for example, text, picture, audio and video, but are not limited thereto. On this basis, step S201 may, in response to an operation on any control in the user interface, obtain the media content corresponding to the media type of that control and use it as one media content of one material element in the material set.
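For concreteness, the material set and its attributes described in step S201 might be modeled roughly as follows. This is a sketch only; the class and field names are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MaterialElement:
    # Each element may carry one or more media contents.
    picture: Optional[str] = None   # path or URL of an image
    text: Optional[str] = None      # caption / narration text
    audio: Optional[str] = None     # path of an audio file
    video: Optional[str] = None     # path of a video clip
    duration: float = 0.0           # playing duration in seconds

@dataclass
class MaterialSet:
    elements: List[MaterialElement] = field(default_factory=list)

    @property
    def playing_order(self) -> List[int]:
        # By default the playing order follows the generation order.
        return list(range(len(self.elements)))

    @property
    def total_duration(self) -> float:
        return sum(e.duration for e in self.elements)
```

A picture element with a caption and a video element could then be collected into one set whose attributes (order, durations) travel with it.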
In one embodiment, when the user selects a picture, either stored locally or from the network, through a picture control, step S201 may, in response to the operation on the picture control, use the picture as the picture content of a material element. It should be noted that a material element containing picture content may also typically include text or audio associated with the picture. In one embodiment, when the user inputs text corresponding to the picture through a text input control, step S201, in response to the operation on the text input control, obtains the text information associated with the picture content and uses it as the text content of the corresponding material element. In yet another embodiment, step S201 may, in response to an operation on an audio control, obtain audio information associated with the picture content and use it as the audio content of the corresponding material element. Here, the audio content is, for example, a voice-over or background music. In addition, step S201 may use the playing duration of the picture as the playing duration of the corresponding material element. For a more vivid explanation of the execution of step S201, an exemplary description is given below with reference to Figs. 3A to 3C.
Fig. 3A shows a schematic diagram of a user interface for obtaining picture content according to an embodiment of the present application. Fig. 3B shows a schematic diagram of an interface displaying the picture according to an embodiment. As shown in Figs. 3A and 3B, when the user operates the control 301, step S201 can obtain a picture and display it in the preview window 302. Step S201 can determine the playing duration of the picture in response to an operation on the playing duration control 303, and can obtain text information related to the picture in the preview window 302 in response to an operation on the text input control 304. In other words, the text information is a supplementary note to the picture. Fig. 3C shows a schematic diagram of obtaining audio information according to an embodiment of the present application. For example, step S201 can obtain locally stored audio (e.g. background music) in response to an operation on the control 305. As another example, step S201 can record a piece of audio content in response to an operation on the control 306. This audio content is, for example, a voice-over recorded for the picture in the preview window 302.
In yet another embodiment, step S201 can obtain a piece of video and use it as the video content of a material element. For example, step S201 obtains a video fragment in response to an operation on a video control in the user interface, and uses it as the video content of a material element. Here, the video may, for example, be a locally stored video file, or video content stored in the cloud. For a material element containing video content, step S201 can also add text content, audio content and so on. When a material element includes video content, step S201 can use the playing duration of the video content as the playing duration of the material element.
In yet another embodiment, step S201 may be implemented as the method 400 shown in Fig. 4. As shown in Fig. 4, in step S401, a piece of video is obtained. In step S402, according to a predetermined video clipping algorithm, at least one video fragment is extracted from the video and description information is generated for each video fragment. Specifically, according to an embodiment of the present application, step S402 first determines at least one key image frame of the video. For each key image frame, step S402 can extract from the video a video fragment containing that key image frame. The video fragment may include a corresponding audio fragment. Step S402 then performs speech recognition on the audio fragment to obtain the corresponding text, and generates the description information of the video fragment from the text. It should be appreciated that step S402 can use any of various algorithms capable of automatically clipping video; the present application is not restricted in this respect.
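The patent leaves the clipping algorithm open. Purely for illustration, one simple stand-in is to score frames (e.g. by scene-change strength), treat high-scoring local maxima as key image frames, and cut a fixed window of frames around each; the scoring scheme and names below are assumptions, not the patent's method:

```python
def find_key_frames(scores, threshold=0.8):
    """Indices whose score exceeds the threshold and is a local maximum."""
    keys = []
    for i, s in enumerate(scores):
        left = scores[i - 1] if i > 0 else float("-inf")
        right = scores[i + 1] if i < len(scores) - 1 else float("-inf")
        if s >= threshold and s >= left and s >= right:
            keys.append(i)
    return keys

def extract_fragments(num_frames, key_frames, window=2):
    """One (start, end) fragment per key frame, clamped to the video bounds."""
    return [(max(0, k - window), min(num_frames - 1, k + window))
            for k in key_frames]
```

Each returned fragment would then be paired with its audio track, and the recognized speech from that track would serve as the fragment's description information.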
On this basis, in step S403, a user interface displaying the description information of each video fragment is provided, so that the user can select fragments according to the description information.
In step S404, in response to a selection operation on at least one video fragment, each selected video fragment is used as the video content of a material element in the material set. In other words, step S404 can generate a corresponding material element from each selected video fragment.
It should also be noted that the clipping of the video is not limited to step S402: embodiments of the present application may also send a video clipping request to the cloud and have a cloud device (e.g. the server 120) perform the clipping. On this basis, embodiments of the present application can obtain the clipped video fragments from the cloud device. In addition, to illustrate more vividly the generation of a material element containing video content, a description is given below with reference to Figs. 3D and 3E.
Fig. 3D shows a user interface for generating video fragments according to an embodiment of the present application. As shown in Fig. 3D, the window 307 is the preview window of the video to be clipped. In response to an operation on the control 308, embodiments of the present application can generate multiple video fragments, such as the fragment 309. Fig. 3E shows the editing interface of a video fragment. For example, in response to an operation on the fragment 309 (e.g. a click or double-click), the interface shown in Fig. 3E is entered. Here, the window 310 is the preview window of the fragment 309, and the region 311 contains the description information of the fragment 309. In addition, the user can input text content corresponding to the video fragment through the text input control 312, and can obtain audio content for the video fragment through the control 313 or the control 314. For example, the icon 315 represents an obtained audio file. Furthermore, by operating the tick boxes in Fig. 3D, the user can select at least one video fragment. The present embodiment can then use each selected video fragment, together with its corresponding text content and audio content, as a material element.
In summary, step S201 can obtain multiple material elements. Here, step S201 can, by default, use the generation order of the multiple material elements as their playing order. In addition, step S201 may also adjust the playing order of the multiple material elements in response to user operations. For example, Fig. 3F shows a user interface for adjusting the playing order according to an embodiment of the present application. Fig. 3F presents a thumbnail for each material element, such as 316 and 317, with the thumbnails arranged in sequence in a display area. Step S201 can adjust the arrangement order of the elements in the material set in response to a move operation on a thumbnail in the user interface, and use the adjusted arrangement order as the playing order of the material set.
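The adjustment of the playing order in response to a thumbnail move operation amounts to a simple list reorder; a minimal sketch (the function name is hypothetical, not from the patent):

```python
def move_element(order, src, dst):
    """Return a new playing order after dragging the thumbnail at
    position src to position dst (both zero-based)."""
    order = list(order)       # leave the caller's list untouched
    item = order.pop(src)     # remove the dragged element
    order.insert(dst, item)   # re-insert it at the drop position
    return order
```

Dragging the last thumbnail to the front, for instance, rotates it into first place while the relative order of the others is preserved.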
For the material set determined in step S201, the method 200 can perform step S202. In step S202, an efficacy parameter corresponding to the material set is determined. Here, each kind of efficacy parameter corresponds to one video effect mode. Video effects include, for example, transitions between adjacent material elements and particle effects. A transition is the scene-switching effect between two scenes (i.e. two material elements). For example, embodiments of the present application can use predetermined techniques (such as wipe, dissolve or scroll) to achieve a smooth transition between scenes. A transition is, for example, a picture-into-picture effect (also called a picture fly-in effect). A particle effect is an animation effect simulating real-world objects such as water, fire, mist or gas. It should also be noted that one video effect mode corresponds to the overall style of a video to be synthesized. In fact, one video effect mode may be one desired video effect or a combination of multiple predetermined video effects. To spare the user complicated operations on video effect modes at the computing device 110, step S202 can provide a user interface containing multiple effect options, each effect option corresponding to one efficacy parameter. Here, an efficacy parameter can be regarded as an identifier of the corresponding video effect mode. In response to a preview operation on any of the multiple effect options, step S202 can show the corresponding preview effect in the user interface. In response to a selection operation on any of the multiple effect options, step S202 can use the efficacy parameter corresponding to the selected effect option as the efficacy parameter corresponding to the material set. For example, Fig. 3G shows a user interface for determining the efficacy parameter according to an embodiment of the present application. As shown in Fig. 3G, the region 319 shows multiple effect options, such as 320 and 321, each corresponding to one video effect mode. For example, when the effect option 320 is previewed, the corresponding effect animation is displayed in the window 318. The option in the window 322 is the effect option currently being previewed. Here, the effect animation can intuitively express a video effect mode, so the user can select a video effect mode by viewing the effect animations, without performing complicated effect-related operations on the computing device. For example, step S202 can, in response to an operation on the control 323, select the efficacy parameter corresponding to the effect option currently being previewed.
When the material set and the efficacy parameter have been determined, the method 200 can perform step S203. In step S203, the material set and the efficacy parameter are transmitted to a video composition server, so that the video composition server can, according to the efficacy parameter and the attributes of the material set, synthesize the multiple material elements in the material set into a video in the determined video effect mode. According to one embodiment, in step S203, a video composition request is sent to the video composition server (e.g. the server 120). The video composition request can include the material set and the efficacy parameter, so that the video composition server can synthesize a video from the material set in response to the request. According to another embodiment of the present application, the video composition server can send the application 111 a prompt message indicating that it provides a video composition service. In step S203, in response to receiving the prompt message, the material set and the efficacy parameter are sent to the video composition server, so that the server can synthesize the corresponding video according to the received material set and efficacy parameter.
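The patent does not specify a wire format for the video composition request. As a hedged sketch, the request body might bundle the material set, its attributes and the efficacy parameter like this (every field name here is invented for illustration):

```python
import json

def build_synthesis_request(material_set, efficacy_parameter):
    """Bundle the material set, its attributes and the efficacy
    parameter into one request body (all field names are hypothetical)."""
    payload = {
        "effect": efficacy_parameter,          # identifier of the effect mode
        "attributes": {
            "playing_order": material_set["playing_order"],
            "durations": material_set["durations"],
        },
        "elements": material_set["elements"],  # the media contents themselves
    }
    return json.dumps(payload)
```

The server would then read the effect identifier and the attributes to drive the synthesis, as described for step S203.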
In summary, according to the method 200 of the present application, concise content selection can be performed in a user interface (such as the user interfaces of Figs. 3A to 3G), so that the material set of a video to be synthesized can be obtained easily. In particular, the method 200 can also automatically clip a video into video fragments with corresponding description information, so that the user can quickly determine the content of each fragment from its description information and make a selection. In addition, the method 200 can intuitively present to the user the preview effects (e.g. effect animations) of multiple video effect modes, making it easy for the user to quickly determine the effect mode of the video to be synthesized and sparing the user complicated effect-related operations on the local computing device. On this basis, the method 200 of the present application can have the video synthesized by a video composition server, thereby greatly improving the user experience.
The video composition process is further described below with reference to Fig. 4. Fig. 4 shows a flow chart of a video composition method 400 according to some embodiments of the present application. The method 400 can be performed in a video composition application. The video composition application may, for example, reside in the server 120, but is not limited thereto.
As shown in Fig. 4, the method 400 includes step S401. In step S401, a material set of a video to be synthesized and an efficacy parameter for the material set are obtained from a video material client. Here, the video material client is, for example, the application 111, but is not limited thereto. The material set includes multiple material elements, and each material element includes at least one of picture, text, audio and video media content. The attributes of the material set include the playing order and playing duration of each material element in the set. The efficacy parameter corresponds to one video effect mode.
In step S402, according to the efficacy parameter and the attributes of the material set, the multiple material elements in the material set are synthesized into a video in the corresponding video effect mode. In some embodiments, step S402 normalizes the material set so that each material element is converted into a predetermined format. The predetermined format includes, for example, a coding format, an image playing frame rate and a picture size. In one embodiment, the predetermined format can be configured to be associated with the efficacy parameter; in other words, each kind of efficacy parameter is configured with a corresponding predetermined format. Step S402 can then determine the corresponding predetermined format according to the efficacy parameter and perform the normalization accordingly. On this basis, step S402 can synthesize the normalized material set into a video according to the efficacy parameter.
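Since each efficacy parameter is configured with a predetermined format, the normalization step can be sketched as a lookup followed by a transcode. The format table and parameter identifiers below are invented, and FFmpeg is used only as a plausible transcoder; the patent does not mandate a particular tool for this step:

```python
# Hypothetical mapping: each efficacy parameter id is configured with
# a predetermined target format (codec, frame rate, picture size).
FORMATS = {
    "effect_a": {"codec": "libx264", "fps": 25, "size": "1280x720"},
    "effect_b": {"codec": "libx264", "fps": 30, "size": "1920x1080"},
}

def normalize_command(src, dst, efficacy_parameter):
    """Build (but do not run) an FFmpeg command that converts one
    material element to the format tied to the efficacy parameter."""
    fmt = FORMATS[efficacy_parameter]
    return ["ffmpeg", "-i", src,
            "-c:v", fmt["codec"],   # coding format
            "-r", str(fmt["fps"]),  # image playing frame rate
            "-s", fmt["size"],      # picture size
            dst]
```

Running the returned command (e.g. via `subprocess.run`) for every element would leave the whole material set in one uniform format before rendering.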
In one embodiment, the video composition application is configured with multiple video composition scripts. Here, each video composition script (which may also be called a video composition template) is executable in the video composition application and corresponds to one video composition effect. Based on the efficacy parameter, step S402 can determine multiple rendering stages corresponding to the efficacy parameter. Each rendering stage includes at least one of the multiple video composition scripts, and the rendering result of each rendering stage is the input content of the next rendering stage. Step S402 can then render the material set through the multiple rendering stages to synthesize the video. In this way, through the multiple rendering stages, step S402 can realize a superimposed composition effect (i.e. the video effect mode corresponding to the efficacy parameter).
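The staged rendering just described — each stage running one or more scripts over portions of the material set, with one stage's result feeding the next — can be sketched with plain functions standing in for the XML/AE scripts. This is a toy model under that assumption, not the patent's implementation:

```python
def run_stages(elements, stages):
    """stages: a list of rendering stages; each stage is a list of
    (script, slice) pairs. The concatenated partial renderings of
    one stage become the input of the next stage."""
    data = list(elements)
    for stage in stages:
        rendered = []
        for script, part in stage:
            rendered.extend(script(data[part]))
        data = rendered          # stage output feeds the next stage
    return data

# Toy "scripts" that just tag each element they render.
x1 = lambda part: [f"X1({e})" for e in part]
x2 = lambda part: [f"X2({e})" for e in part]
x3 = lambda part: [f"X3({e})" for e in part]
```

With two elements, stage S1 can split the set between scripts X1 and X2, and stage S2 can then superimpose X3 on S1's whole result — mirroring the Fig. 5 process described next.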
Fig. 5 shows a video rendering process according to an embodiment of the present application. The process shown in Fig. 5 includes three rendering stages S1, S2 and S3. Stage S1 executes scripts X1 and X2. Here, the material set may, for example, include 20 material elements. Step S402 can render the first 10 material elements by executing script X1 and render the last 10 material elements by executing script X2. On the rendering result of S1, step S402 can continue superimposing effects by executing scripts X3 and X4 in stage S2. On the rendering result of stage S2, step S402 can continue the superimposition processing in stage S3, thereby generating the rendering result corresponding to the efficacy parameter. Here, the format of each script is, for example, Extensible Markup Language (XML). Step S402 can, for example, call the After Effects (AE) application to execute the scripts, but is not limited thereto. Example code by which step S402 calls AE to perform the rendering operation is as follows:
aerender -project test.aepx -comp "test" -RStemplate "test_1" -OMtemplate "test_2" -output test.mov
Here, aerender is the name of the AE command-line rendering program;
-project test.aepx specifies that the current project template file is test.aepx;
-comp specifies that the composition used for this rendering is named "test";
-RStemplate specifies that the render settings template is named test_1;
-OMtemplate specifies that the output module template is named test_2; and
-output specifies that the output video is named test.mov.
In summary, according to the video composition method 400 of the present application, a material set from a video material client can be obtained, and multiple rendering stages corresponding to the efficacy parameter can be determined. On this basis, the method 400 can synthesize a rendering result with superimposed video effects by executing the multiple rendering stages. It is particularly noted that, by rendering the material set in multiple stages, the method 400 can generate various complicated video effects, thereby greatly improving the efficiency of video composition and increasing the variety of composition effects.
Fig. 6 shows a flow chart of a video composition method 600 according to some embodiments of the present application. The method 600 can be performed in a video composition application. The video composition application may, for example, reside in the server 120, but is not limited thereto.
As shown in Fig. 6, the method 600 includes steps S601 to S602, whose embodiments are consistent with steps S401 to S402 respectively and are not repeated here. In addition, the method 600 also includes step S603.
In step S603, voice information corresponding to the text content is generated. Specifically, step S603 can convert the text content of a material element into voice information. Here, step S603 can use any of various predetermined speech conversion algorithms to perform the conversion. For example, step S603 can call the iFLYTEK speech synthesis module to obtain the corresponding audio file.
In step S604, caption information corresponding to the voice information is generated. Here, step S604 can use any of various techniques capable of generating captions; the present application is not restricted in this respect. For example, step S604 can call FFmpeg (Fast Forward MPEG) software to perform the caption generation, but is not limited thereto. The generated captions include parameters such as the caption effect and the caption display time.
In step S605, the voice information and the caption information are added to the video synthesized in step S602.
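Caption information with a display time, as produced in step S604, commonly takes the form of SubRip (SRT) entries, which FFmpeg can burn into a video. A minimal sketch of building one such entry follows; the SRT choice is an assumption — the patent only names FFMPEG and caption parameters:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_entry(index, start, end, text):
    """One subtitle block: index, display window, then the caption text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
```

An SRT file built from such entries, together with the synthesized voice track, could then be merged into the video in step S605.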
Fig. 7 shows a schematic diagram of a video material processing device 700 according to some embodiments of the present application. The device 700 may, for example, reside in the application 111. As shown in Fig. 7, the device 700 includes a material obtaining unit 701, an effect determination unit 702 and a transmission unit 703. The material obtaining unit 701 can obtain the material set of a video to be synthesized and determine the attributes of the material set. The material set includes multiple material elements, and each material element includes at least one of picture, text, audio and video media content. The attributes include the playing order and playing duration of each material element in the material set. In one embodiment, the material obtaining unit 701 can provide a user interface for obtaining material elements. The user interface includes at least one control, each corresponding to one of at least one media type, the media types including at least one of text, picture, audio and video. In response to an operation on any control in the user interface, the material obtaining unit 701 obtains the media content corresponding to the media type of that control and uses it as one media content of one material element in the material set. In another embodiment, the material obtaining unit 701 can, in response to an operation on a picture control in the user interface, obtain a picture as the picture content of a material element in the material set. In yet another embodiment, the material obtaining unit 701 is further configured to, in response to an operation on a text input control associated with the picture control, obtain the input text information associated with the picture content and use it as the text content of the material element. In yet another embodiment, the material obtaining unit 701 is further configured to, in response to an operation on an audio control associated with the picture control, obtain the input audio information associated with the picture content and use it as the audio content of the material element. In yet another embodiment, the material obtaining unit 701 is further configured to, in response to an operation on a video control in the user interface, obtain a video fragment and use it as the video content of a material element in the material set.
In yet another embodiment, the material obtaining unit 701 first obtains a piece of video, and then, according to a predetermined video clipping algorithm, extracts at least one video fragment from the video and generates the description information of each video fragment. Specifically, the material obtaining unit 701 can determine at least one key image frame of the video.
For each key image frame, the material obtaining unit 701 can extract from the video a video fragment containing that key image frame. The video fragment includes a corresponding audio fragment. The material obtaining unit 701 can also perform speech recognition on the audio fragment to obtain the corresponding text, and generate the description information of the video fragment from the text.
On this basis, the material obtaining unit 701 can provide a user interface displaying the description information of each video fragment, so that the user can select fragments according to the description information. In response to a selection operation on video fragments, the material obtaining unit 701 uses each selected video fragment as the video content of a material element in the material set.
In one embodiment, the material obtaining unit 701 can provide a user interface presenting a thumbnail for each material element in the material set. The thumbnails are arranged in sequence in a corresponding display area of the user interface. The material obtaining unit 701 can adjust the arrangement order of the elements in the material set in response to a move operation on a thumbnail in the user interface, and use the adjusted arrangement order as the playing order of the material set. In yet another embodiment, when a material element includes picture content, the material obtaining unit 701 can use the playing duration of the picture content as the playing duration of the material element. When a material element includes video content, the material obtaining unit 701 can use the playing duration of the video content as the playing duration of the material element.
The effect determination unit 702 can determine the efficacy parameter corresponding to the material set. The efficacy parameter corresponds to one video effect mode. In one embodiment, the effect determination unit 702 can provide a user interface containing multiple effect options, each effect option corresponding to one efficacy parameter. In response to a preview operation on any of the multiple effect options, the effect determination unit 702 shows the corresponding preview effect in the user interface. In response to a selection operation on any of the multiple effect options, the effect determination unit 702 uses the efficacy parameter corresponding to the selected effect option as the efficacy parameter corresponding to the material set.
The transmission unit 703 can transmit the material set and the efficacy parameter to a video composition server, so that the video composition server can, according to the efficacy parameter and the attributes of the material set, synthesize the multiple material elements in the material set into a video in the corresponding video effect mode. It should be noted that the more specific embodiments of the device 700 are consistent with the method 200 and are not repeated here.
Fig. 8 shows a schematic diagram of a video composition device 800 according to some embodiments of the present application. The device 800 may, for example, reside in a video composition application, which may in turn reside in the server 120, but is not limited thereto.
As shown in Fig. 8, the device 800 can include a communication unit 801 and a video composition unit 802. The communication unit 801 can obtain, from a video material client, a material set of a video to be synthesized and an efficacy parameter for the material set. The material set includes multiple material elements, and each material element includes at least one of picture, text, audio and video media content. The attributes of the material set include the playing order and playing duration of each material element in the set. The efficacy parameter corresponds to one video effect mode.
The video composition unit 802 can, according to the efficacy parameter and the attributes of the material set, synthesize the multiple material elements in the material set into a video in the video effect mode. In one embodiment, the video composition unit 802 can normalize the material set so that each material element is converted into a predetermined format. The predetermined format includes a coding format, an image playing frame rate and a picture size. According to the efficacy parameter, the video composition unit 802 synthesizes the normalized material set into the corresponding video. In yet another embodiment, the video composition unit 802 can, based on multiple video composition scripts executable in a predetermined video composition application, determine multiple rendering stages corresponding to the efficacy parameter. Each video composition script corresponds to one video composition effect, each rendering stage includes at least one of the multiple video composition scripts, and the rendering result of each rendering stage is the input content of the next rendering stage. Based on the multiple rendering stages, the material set is rendered to generate the corresponding video. The video effect mode may, for example, include the video transition mode between adjacent material elements. It should be noted that the more specific embodiments of the device 800 are consistent with the method 400 and are not repeated here.
Fig. 9 shows a schematic diagram of a video composition device 900 according to some embodiments of the present application. The device 900 may, for example, reside in a video composition application, which may in turn reside in the server 120, but is not limited thereto.
As shown in Fig. 9, the device 900 includes a communication unit 901 and a video composition unit 902. Here, the communication unit 901 can be implemented consistently with the communication unit 801, and the video composition unit 902 can be implemented consistently with the video composition unit 802; these are not repeated here. In addition, the device 900 can also include a speech synthesis unit 903, a caption generation unit 904 and an adding unit 905.
When a material element in the material set includes picture content and corresponding text content, the speech synthesis unit 903 can generate voice information corresponding to the text content. The caption generation unit 904 can generate caption information corresponding to the voice information. On this basis, the adding unit 905 is used to add the voice information and the caption information to the generated video. It should be noted that the more specific embodiments of the device 900 are consistent with the method 600 and are not repeated here.
Figure 10 shows a structural diagram of a computing device. As shown in Figure 10, the computing device includes one or more processors (CPUs or GPUs) 1002, a communication module 1004, a memory 1006, a user interface 1010, and a communication bus 1008 interconnecting these components.
The processor 1002 can receive and send data through the communication module 1004 to realize network communication and/or local communication.
The user interface 1010 includes one or more output devices 1012, including one or more speakers and/or one or more visual displays. The user interface 1010 also includes one or more input devices 1014, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch screen display, a touch-sensitive tablet, a gesture-capturing camera, or other input buttons or controls.
The memory 1006 can be high-speed random access memory such as DRAM, SRAM, DDR RAM or another random access solid-state storage device, or non-volatile memory such as one or more disk storage devices, optical disc storage devices, flash memory devices or other non-volatile solid-state storage devices.
The memory 1006 stores a set of instructions executable by the processor 1002, including:
an operating system 1016, including programs for handling various basic system services and performing hardware-dependent tasks; and
applications 1018, including various programs for realizing the above methods, which can implement the processing flows in each of the above examples and may, for example, include a video material processing application according to the present application. The video material processing application may include the video material processing device 700 shown in Fig. 7. In addition, when the computing device is implemented as the server 120, the applications 1018 may include a video composition application, which may, for example, include the video composition device 800 shown in Fig. 8 or the video composition device 900 shown in Fig. 9.
In addition, each example of this application can be implemented by a data processing program executed by a data processing device such as a computer. Obviously, such a data processing program constitutes this application. Further, a data processing program usually stored in a storage medium is executed by reading the program directly out of the storage medium, or by installing or copying the program to a storage device (such as a hard disk or memory) of the data processing device and then executing it. Therefore, such a storage medium also constitutes the present invention. The storage medium can use any type of recording mode, for example, a paper storage medium (such as paper tape), a magnetic storage medium (such as a floppy disk, hard disk, or flash memory), an optical storage medium (such as a CD-ROM), a magneto-optical storage medium (such as an MO), and so on.
Therefore, a non-volatile storage medium is also disclosed herein, in which a data processing program is stored, the data processing program being used for executing any example of the above methods of this application.
In addition, the method steps described herein can be implemented not only by a data processing program but also by hardware, for example, by logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, embedded microcontrollers, and the like. Therefore, such hardware capable of implementing the methods described herein can also constitute this application.
The foregoing describes only preferred embodiments of this application and is not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.
Claims (15)
- 1. A method for processing video material, comprising: obtaining a material set of a video to be synthesized and determining attributes of the material set, wherein the material set includes a plurality of material elements, each material element includes at least one of picture, text, audio, and video media content, and the attributes include a playing order and a playing duration of each material element in the material set; determining an effect parameter corresponding to the material set, the effect parameter corresponding to a video effect pattern; and transmitting the material set and the effect parameter to a video synthesis server, so that the video synthesis server synthesizes the plurality of material elements in the material set into a video corresponding to the video effect pattern according to the effect parameter and the attributes of the material set.
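The client-side flow of claim 1 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation; the class and function names (`MaterialElement`, `MaterialSet`, `build_request`) and the request layout are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialElement:
    # Each element carries at least one media content:
    # picture, text, audio, or video (here: file paths or raw text).
    media: dict

@dataclass
class MaterialSet:
    elements: list
    # Attributes of the set: playing order (indices into `elements`)
    # and per-element playing duration in seconds.
    playing_order: list = field(default_factory=list)
    durations: list = field(default_factory=list)

def build_request(material_set: MaterialSet, effect_parameter: str) -> dict:
    """Package the material set together with the chosen effect
    parameter for transmission to the video synthesis server."""
    return {
        "elements": [e.media for e in material_set.elements],
        "playing_order": material_set.playing_order,
        "durations": material_set.durations,
        "effect": effect_parameter,  # identifies one video effect pattern
    }

elements = [MaterialElement({"picture": "cover.jpg"}),
            MaterialElement({"text": "hello"})]
ms = MaterialSet(elements, playing_order=[1, 0], durations=[3.0, 2.0])
req = build_request(ms, "retro")
```

The point of the sketch is that the material set, its attributes (order and durations), and the effect parameter travel to the server as one unit, which is what lets the server perform the synthesis without further client interaction.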
- 2. The method of claim 1, wherein the obtaining a material set of a video to be synthesized includes: providing a user interface for obtaining material elements, the user interface including at least one control respectively corresponding to at least one media type, the at least one media type including at least one of text, picture, audio, and video; and in response to an operation on any control in the user interface, obtaining media content corresponding to the media type of that control as one media content of one material element in the material set.
- 3. The method of claim 2, wherein the obtaining, in response to an operation on any control in the user interface, media content corresponding to the media type of that control as one media content of one material element in the material set includes: in response to an operation on a picture control in the user interface, obtaining a picture as the picture content of one material element of the material set.
- 4. The method of claim 2, wherein the obtaining, in response to an operation on any control in the user interface, media content of the media type corresponding to that control as one media content of one material element in the material set includes: in response to an operation on a video control in the user interface, obtaining a video segment as the video content of one material element of the material set.
- 5. The method of claim 1, wherein the obtaining a material set of a video to be synthesized includes: obtaining a piece of video; extracting at least one video segment from the video according to a predetermined video clipping algorithm and generating description information of each video segment; providing a user interface displaying the description information of each video segment, so that a user can select segments according to the description information of each video segment; and in response to a selection operation on at least one video segment, taking each selected video segment as the video content of one material element in the material set.
- 6. The method of claim 5, wherein the extracting at least one video segment from the video according to a predetermined video clipping algorithm and generating description information of each video segment includes: determining at least one key image frame of the video; for each key image frame, extracting from the video a video segment containing the key image frame, the video segment including a corresponding audio fragment; and performing speech recognition on the audio fragment to obtain corresponding text, and generating the description information corresponding to the video segment according to the text.
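The clip-and-describe step of claims 5–6 can be sketched as follows. This is a toy illustration under stated assumptions: frames are simple labels, the clipping "algorithm" is a fixed window around each key frame, and the `transcribe` callback stands in for a real speech-recognition engine; none of these names come from the patent.

```python
def clip_and_describe(video_frames, key_frame_indices, window, transcribe):
    """For each key image frame, extract a segment of up to `window`
    frames on each side of it, then derive description information
    from the segment via the supplied `transcribe` callback (a stand-in
    for speech recognition on the segment's audio fragment)."""
    segments = []
    for k in key_frame_indices:
        start = max(0, k - window)
        end = min(len(video_frames), k + window + 1)
        segment = video_frames[start:end]
        description = transcribe(segment)  # text recognized from the audio
        segments.append({"frames": (start, end), "description": description})
    return segments

# Toy stand-in: frames are labels, the "transcription" joins them.
frames = [f"f{i}" for i in range(10)]
result = clip_and_describe(frames, [2, 7], window=1,
                           transcribe=lambda seg: " ".join(seg))
```

The descriptions produced this way are what the user interface of claim 5 would display to let the user pick segments.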
- 7. The method of claim 1, wherein the determining attributes of the material set includes: providing a user interface presenting a thumbnail corresponding to each material element in the material set, the thumbnails corresponding to the material elements being arranged in order in a corresponding display area of the user interface; and in response to a moving operation on a thumbnail in the user interface, adjusting the arrangement order of the elements in the material set, and taking the adjusted arrangement order as the playing order of the material set.
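The thumbnail-reordering interaction of claim 7 reduces to a list move; a minimal sketch (the function name `move_thumbnail` is hypothetical):

```python
def move_thumbnail(order, from_pos, to_pos):
    """Apply a drag-and-drop move in the thumbnail strip and return
    the resulting arrangement, which becomes the playing order of
    the material set after the adjustment."""
    order = list(order)          # do not mutate the caller's list
    item = order.pop(from_pos)
    order.insert(to_pos, item)
    return order

# Thumbnails A..D; drag the last one to the front.
new_order = move_thumbnail(["A", "B", "C", "D"], 3, 0)
```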
- 8. The method of claim 1, wherein the determining an effect parameter corresponding to the material set, the effect parameter corresponding to a video effect pattern, includes: providing a user interface including a plurality of effect options, wherein each effect option corresponds to an effect parameter; in response to a preview operation on any of the plurality of effect options, displaying a corresponding preview effect image in the user interface; and in response to a selection operation on any of the plurality of effect options, taking the effect parameter corresponding to the selected effect option as the effect parameter corresponding to the material set.
- 9. The method of claim 1, further comprising: sending a video synthesis request to the video synthesis server, so that the video synthesis server, in response to the video synthesis request, synthesizes the plurality of material elements in the material set into the video corresponding to the video effect pattern.
- 10. A video synthesis method, comprising: obtaining, from a video material client, a material set of a video to be synthesized and an effect parameter for the material set, wherein the material set includes a plurality of material elements, each material element includes at least one of picture, text, audio, and video media content, attributes of the material set include a playing order and a playing duration of each material element in the material set, and the effect parameter corresponds to a video effect pattern; and synthesizing the plurality of material elements in the material set into a video of the video effect pattern according to the effect parameter and the attributes of the material set.
- 11. The method of claim 10, wherein, when a material element in the material set includes picture content and corresponding text content, the method further includes: generating voice information corresponding to the text content; generating caption information corresponding to the voice information; and adding the voice information and the caption information to the video.
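The narration step of claim 11 can be sketched as follows, assuming a text-to-speech engine is available behind the `synthesize_speech` callback (here replaced by a toy stub); the function and field names are hypothetical, not from the patent.

```python
def add_narration(video, text, synthesize_speech):
    """For a material element with picture content plus accompanying
    text: synthesize voice information from the text, derive caption
    information aligned with that voice track, and attach both to the
    video under construction."""
    voice = synthesize_speech(text)
    caption = {"text": text, "duration": voice["duration"]}
    video["audio_tracks"].append(voice)
    video["captions"].append(caption)
    return video

video = {"audio_tracks": [], "captions": []}
# Toy TTS stub: 0.1 s of audio per character of text.
fake_tts = lambda t: {"samples": b"...", "duration": len(t) * 0.1}
video = add_narration(video, "hello", fake_tts)
```

The design point is that the caption duration is derived from the generated voice track rather than from the text alone, which keeps subtitles in sync with the narration.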
- 12. The method of claim 10, wherein the synthesizing the plurality of material elements in the material set into a video of the video effect pattern according to the effect parameter and the attributes of the material set includes: determining, based on a plurality of video synthesis scripts to be executed in a predetermined video synthesis application, a plurality of rendering stages corresponding to the effect parameter, wherein each video synthesis script corresponds to one video synthesis effect, each rendering stage includes at least one of the plurality of video synthesis scripts, and the rendering result of each rendering stage is the input content of the next rendering stage; and rendering the material set based on the plurality of rendering stages to generate the video.
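The staged pipeline of claim 12 (each stage's rendering result feeding the next stage) can be sketched as below; the scripts here are toy functions on a string standing in for real synthesis scripts, and all names are illustrative.

```python
def render(material_set, stages):
    """Chain the rendering stages determined from the effect parameter:
    each stage is a list of synthesis scripts, each script applies one
    synthesis effect, and a stage's result becomes the next stage's
    input content."""
    content = material_set
    for stage in stages:
        for script in stage:
            content = script(content)
    return content

# Toy scripts operating on a string "material set".
stages = [
    [lambda c: c + "|cut"],                              # stage 1: clipping
    [lambda c: c + "|filter", lambda c: c + "|title"],   # stage 2: two scripts
]
final = render("materials", stages)
```

Because the stages are just data, mapping a different effect parameter to a different stage list changes the produced video effect pattern without changing the rendering loop itself.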
- 13. An apparatus for processing video material, comprising: a material obtaining unit, configured to obtain a material set of a video to be synthesized and determine attributes of the material set, wherein the material set includes a plurality of material elements, each material element includes at least one of picture, text, audio, and video media content, and the attributes include a playing order and a playing duration of each material element in the material set; an effect determination unit, configured to determine an effect parameter corresponding to the material set, the effect parameter corresponding to a video effect pattern; and a transmission unit, configured to transmit the material set and the effect parameter to a video synthesis server, so that the video synthesis server synthesizes the plurality of material elements in the material set into a video corresponding to the video effect pattern according to the effect parameter and the attributes of the material set.
- 14. A video synthesis apparatus, comprising: a communication unit, configured to obtain, from a video material client, a material set of a video to be synthesized and an effect parameter for the material set, wherein the material set includes a plurality of material elements, each material element includes at least one of picture, text, audio, and video media content, attributes of the material set include a playing order and a playing duration of each material element in the material set, and the effect parameter corresponds to a video effect pattern; and a video synthesis unit, configured to synthesize the plurality of material elements in the material set into a video of the video effect pattern according to the effect parameter and the attributes of the material set.
- 15. A storage medium storing one or more programs, the one or more programs including instructions which, when executed by a computing device, cause the computing device to perform the method of any one of claims 1-12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711076478.2A CN107770626B (en) | 2017-11-06 | 2017-11-06 | Video material processing method, video synthesizing device and storage medium |
PCT/CN2018/114100 WO2019086037A1 (en) | 2017-11-06 | 2018-11-06 | Video material processing method, video synthesis method, terminal device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711076478.2A CN107770626B (en) | 2017-11-06 | 2017-11-06 | Video material processing method, video synthesizing device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107770626A true CN107770626A (en) | 2018-03-06 |
CN107770626B CN107770626B (en) | 2020-03-17 |
Family
ID=61273334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711076478.2A Active CN107770626B (en) | 2017-11-06 | 2017-11-06 | Video material processing method, video synthesizing device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107770626B (en) |
WO (1) | WO2019086037A1 (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108495171A (en) * | 2018-04-03 | 2018-09-04 | 优视科技有限公司 | Method for processing video frequency and its device, storage medium, electronic product |
CN108540854A (en) * | 2018-03-29 | 2018-09-14 | 努比亚技术有限公司 | Live video clipping method, terminal and computer readable storage medium |
CN108536790A (en) * | 2018-03-30 | 2018-09-14 | 北京市商汤科技开发有限公司 | The generation of sound special efficacy program file packet and sound special efficacy generation method and device |
CN108900897A (en) * | 2018-07-09 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind of multimedia data processing method, device and relevant device |
CN108900927A (en) * | 2018-06-06 | 2018-11-27 | 芽宝贝(珠海)企业管理有限公司 | The generation method and device of video |
CN108924584A (en) * | 2018-05-30 | 2018-11-30 | 互影科技(北京)有限公司 | The packaging method and device of interactive video |
CN108986227A (en) * | 2018-06-28 | 2018-12-11 | 北京市商汤科技开发有限公司 | The generation of particle effect program file packet and particle effect generation method and device |
CN109168027A (en) * | 2018-10-25 | 2019-01-08 | 北京字节跳动网络技术有限公司 | Instant video methods of exhibiting, device, terminal device and storage medium |
CN109658483A (en) * | 2018-11-20 | 2019-04-19 | 北京弯月亮科技有限公司 | The generation system and generation method of Video processing software data file |
WO2019086037A1 (en) * | 2017-11-06 | 2019-05-09 | 腾讯科技(深圳)有限公司 | Video material processing method, video synthesis method, terminal device and storage medium |
CN109819179A (en) * | 2019-03-21 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A kind of video clipping method and device |
CN110336960A (en) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | Method, apparatus, terminal and the storage medium of Video Composition |
CN110445992A (en) * | 2019-08-16 | 2019-11-12 | 深圳特蓝图科技有限公司 | A kind of video clipping synthetic method based on XML |
CN111010591A (en) * | 2019-12-05 | 2020-04-14 | 北京中网易企秀科技有限公司 | Video editing method, browser and server |
WO2020103548A1 (en) * | 2018-11-21 | 2020-05-28 | 北京达佳互联信息技术有限公司 | Video synthesis method and device, and terminal and storage medium |
WO2020107297A1 (en) * | 2018-11-28 | 2020-06-04 | 深圳市大疆创新科技有限公司 | Video clipping control method, terminal device, system |
CN111416991A (en) * | 2020-04-28 | 2020-07-14 | Oppo(重庆)智能科技有限公司 | Special effect processing method and apparatus, and storage medium |
CN111479158A (en) * | 2020-04-16 | 2020-07-31 | 北京达佳互联信息技术有限公司 | Video display method and device, electronic equipment and storage medium |
CN111614912A (en) * | 2020-05-26 | 2020-09-01 | 北京达佳互联信息技术有限公司 | Video generation method, device, equipment and storage medium |
CN111683280A (en) * | 2020-06-04 | 2020-09-18 | 腾讯科技(深圳)有限公司 | Video processing method and device and electronic equipment |
CN111710021A (en) * | 2020-05-26 | 2020-09-25 | 珠海九松科技有限公司 | Method and system for generating dynamic video based on static medical materials |
CN111787395A (en) * | 2020-05-27 | 2020-10-16 | 北京达佳互联信息技术有限公司 | Video generation method and device, electronic equipment and storage medium |
CN111831615A (en) * | 2020-05-28 | 2020-10-27 | 北京达佳互联信息技术有限公司 | Method, device and system for generating audio-video file |
CN111883099A (en) * | 2020-04-14 | 2020-11-03 | 北京沃东天骏信息技术有限公司 | Audio processing method, device, system, browser module and readable storage medium |
CN111951357A (en) * | 2020-08-11 | 2020-11-17 | 深圳市前海手绘科技文化有限公司 | Application method of sound material in hand-drawn animation |
CN112040271A (en) * | 2020-09-04 | 2020-12-04 | 杭州七依久科技有限公司 | Cloud intelligent editing system and method for visual programming |
CN112287168A (en) * | 2020-10-30 | 2021-01-29 | 北京有竹居网络技术有限公司 | Method and apparatus for generating video |
CN112632326A (en) * | 2020-12-24 | 2021-04-09 | 北京风平科技有限公司 | Video production method and device based on video script semantic recognition |
CN112822541A (en) * | 2019-11-18 | 2021-05-18 | 北京字节跳动网络技术有限公司 | Video generation method and device, electronic equipment and computer readable medium |
CN113055730A (en) * | 2021-02-07 | 2021-06-29 | 深圳市欢太科技有限公司 | Video generation method and device, electronic equipment and storage medium |
CN113302659A (en) * | 2019-01-18 | 2021-08-24 | 斯纳普公司 | System and method for generating personalized video with customized text messages |
WO2021189995A1 (en) * | 2020-03-24 | 2021-09-30 | 北京达佳互联信息技术有限公司 | Video rendering method and apparatus, electronic device, and storage medium |
CN113810538A (en) * | 2021-09-24 | 2021-12-17 | 维沃移动通信有限公司 | Video editing method and video editing device |
CN113838490A (en) * | 2020-06-24 | 2021-12-24 | 华为技术有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN113986087A (en) * | 2021-12-27 | 2022-01-28 | 深圳市大头兄弟科技有限公司 | Video rendering method based on subscription |
CN113992940A (en) * | 2021-12-27 | 2022-01-28 | 北京美摄网络科技有限公司 | Web end character video editing method, system, electronic equipment and storage medium |
CN114125512A (en) * | 2018-04-10 | 2022-03-01 | 腾讯科技(深圳)有限公司 | Promotion content pushing method and device and storage medium |
CN114286164A (en) * | 2021-12-28 | 2022-04-05 | 北京思明启创科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN114390354A (en) * | 2020-10-21 | 2022-04-22 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
CN114401377A (en) * | 2021-12-30 | 2022-04-26 | 杭州摸象大数据科技有限公司 | Financial marketing video generation method and device, computer equipment and storage medium |
CN114466222A (en) * | 2022-01-29 | 2022-05-10 | 北京百度网讯科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
WO2022160743A1 (en) * | 2021-01-29 | 2022-08-04 | 稿定(厦门)科技有限公司 | Video file playing system, audio/video playing process, and storage medium |
CN114979054A (en) * | 2022-05-13 | 2022-08-30 | 维沃移动通信有限公司 | Video generation method and device, electronic equipment and readable storage medium |
CN115134659A (en) * | 2022-06-15 | 2022-09-30 | 阿里巴巴云计算(北京)有限公司 | Video editing and configuring method and device, browser, electronic equipment and storage medium |
WO2022213801A1 (en) * | 2021-04-09 | 2022-10-13 | 北京字跳网络技术有限公司 | Video processing method, apparatus, and device |
CN116634058A (en) * | 2022-05-30 | 2023-08-22 | 荣耀终端有限公司 | Editing method of media resources and electronic equipment |
WO2023231568A1 (en) * | 2022-05-30 | 2023-12-07 | 腾讯科技(深圳)有限公司 | Video editing method and apparatus, computer device, storage medium, and product |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112532896A (en) * | 2020-10-28 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Video production method, video production device, electronic device and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101086886A (en) * | 2006-06-07 | 2007-12-12 | 索尼株式会社 | Recording system and recording method |
US20080247726A1 (en) * | 2007-04-04 | 2008-10-09 | Nhn Corporation | Video editor and method of editing videos |
CN104780439A (en) * | 2014-01-15 | 2015-07-15 | 腾讯科技(深圳)有限公司 | Video processing method and device |
CN105657538A (en) * | 2015-12-31 | 2016-06-08 | 北京东方云图科技有限公司 | Method and device for synthesizing video file by mobile terminal |
CN105679347A (en) * | 2016-01-07 | 2016-06-15 | 北京东方云图科技有限公司 | Method and apparatus for making video file through programming process |
CN107085612A (en) * | 2017-05-15 | 2017-08-22 | 腾讯科技(深圳)有限公司 | media content display method, device and storage medium |
CN107193841A (en) * | 2016-03-15 | 2017-09-22 | 北京三星通信技术研究有限公司 | Media file accelerates the method and apparatus played, transmit and stored |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060233514A1 (en) * | 2005-04-14 | 2006-10-19 | Shih-Hsiung Weng | System and method of video editing |
CN103928039B (en) * | 2014-04-15 | 2016-09-21 | 北京奇艺世纪科技有限公司 | A kind of image synthesizing method and device |
CN107770626B (en) * | 2017-11-06 | 2020-03-17 | 腾讯科技(深圳)有限公司 | Video material processing method, video synthesizing device and storage medium |
2017
- 2017-11-06 CN CN201711076478.2A patent/CN107770626B/en active Active

2018
- 2018-11-06 WO PCT/CN2018/114100 patent/WO2019086037A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101086886A (en) * | 2006-06-07 | 2007-12-12 | 索尼株式会社 | Recording system and recording method |
US20080247726A1 (en) * | 2007-04-04 | 2008-10-09 | Nhn Corporation | Video editor and method of editing videos |
CN104780439A (en) * | 2014-01-15 | 2015-07-15 | 腾讯科技(深圳)有限公司 | Video processing method and device |
CN105657538A (en) * | 2015-12-31 | 2016-06-08 | 北京东方云图科技有限公司 | Method and device for synthesizing video file by mobile terminal |
CN105679347A (en) * | 2016-01-07 | 2016-06-15 | 北京东方云图科技有限公司 | Method and apparatus for making video file through programming process |
CN107193841A (en) * | 2016-03-15 | 2017-09-22 | 北京三星通信技术研究有限公司 | Media file accelerates the method and apparatus played, transmit and stored |
CN107085612A (en) * | 2017-05-15 | 2017-08-22 | 腾讯科技(深圳)有限公司 | media content display method, device and storage medium |
Cited By (68)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019086037A1 (en) * | 2017-11-06 | 2019-05-09 | 腾讯科技(深圳)有限公司 | Video material processing method, video synthesis method, terminal device and storage medium |
CN108540854A (en) * | 2018-03-29 | 2018-09-14 | 努比亚技术有限公司 | Live video clipping method, terminal and computer readable storage medium |
CN108536790A (en) * | 2018-03-30 | 2018-09-14 | 北京市商汤科技开发有限公司 | The generation of sound special efficacy program file packet and sound special efficacy generation method and device |
CN108495171A (en) * | 2018-04-03 | 2018-09-04 | 优视科技有限公司 | Method for processing video frequency and its device, storage medium, electronic product |
CN114125512A (en) * | 2018-04-10 | 2022-03-01 | 腾讯科技(深圳)有限公司 | Promotion content pushing method and device and storage medium |
CN108924584A (en) * | 2018-05-30 | 2018-11-30 | 互影科技(北京)有限公司 | The packaging method and device of interactive video |
CN108900927A (en) * | 2018-06-06 | 2018-11-27 | 芽宝贝(珠海)企业管理有限公司 | The generation method and device of video |
CN108986227A (en) * | 2018-06-28 | 2018-12-11 | 北京市商汤科技开发有限公司 | The generation of particle effect program file packet and particle effect generation method and device |
CN108900897A (en) * | 2018-07-09 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind of multimedia data processing method, device and relevant device |
CN109168027A (en) * | 2018-10-25 | 2019-01-08 | 北京字节跳动网络技术有限公司 | Instant video methods of exhibiting, device, terminal device and storage medium |
CN109168027B (en) * | 2018-10-25 | 2020-12-11 | 北京字节跳动网络技术有限公司 | Instant video display method and device, terminal equipment and storage medium |
CN109658483A (en) * | 2018-11-20 | 2019-04-19 | 北京弯月亮科技有限公司 | The generation system and generation method of Video processing software data file |
WO2020103548A1 (en) * | 2018-11-21 | 2020-05-28 | 北京达佳互联信息技术有限公司 | Video synthesis method and device, and terminal and storage medium |
US11551726B2 (en) | 2018-11-21 | 2023-01-10 | Beijing Dajia Internet Information Technology Co., Ltd. | Video synthesis method terminal and computer storage medium |
WO2020107297A1 (en) * | 2018-11-28 | 2020-06-04 | 深圳市大疆创新科技有限公司 | Video clipping control method, terminal device, system |
CN111357277A (en) * | 2018-11-28 | 2020-06-30 | 深圳市大疆创新科技有限公司 | Video clip control method, terminal device and system |
CN113302659A (en) * | 2019-01-18 | 2021-08-24 | 斯纳普公司 | System and method for generating personalized video with customized text messages |
WO2020187086A1 (en) * | 2019-03-21 | 2020-09-24 | 腾讯科技(深圳)有限公司 | Video editing method and apparatus, device, and storage medium |
US11715497B2 (en) | 2019-03-21 | 2023-08-01 | Tencent Technology (Shenzhen) Company Limited | Video editing method, apparatus, and device, and storage medium |
CN109819179B (en) * | 2019-03-21 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Video editing method and device |
CN109819179A (en) * | 2019-03-21 | 2019-05-28 | 腾讯科技(深圳)有限公司 | A kind of video clipping method and device |
CN110336960A (en) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | Method, apparatus, terminal and the storage medium of Video Composition |
CN110336960B (en) * | 2019-07-17 | 2021-12-10 | 广州酷狗计算机科技有限公司 | Video synthesis method, device, terminal and storage medium |
CN110445992A (en) * | 2019-08-16 | 2019-11-12 | 深圳特蓝图科技有限公司 | A kind of video clipping synthetic method based on XML |
US11636879B2 (en) | 2019-11-18 | 2023-04-25 | Beijing Bytedance Network Technology Co., Ltd. | Video generating method, apparatus, electronic device, and computer-readable medium |
CN112822541A (en) * | 2019-11-18 | 2021-05-18 | 北京字节跳动网络技术有限公司 | Video generation method and device, electronic equipment and computer readable medium |
CN111010591A (en) * | 2019-12-05 | 2020-04-14 | 北京中网易企秀科技有限公司 | Video editing method, browser and server |
WO2021189995A1 (en) * | 2020-03-24 | 2021-09-30 | 北京达佳互联信息技术有限公司 | Video rendering method and apparatus, electronic device, and storage medium |
CN111883099A (en) * | 2020-04-14 | 2020-11-03 | 北京沃东天骏信息技术有限公司 | Audio processing method, device, system, browser module and readable storage medium |
CN111883099B (en) * | 2020-04-14 | 2021-10-15 | 北京沃东天骏信息技术有限公司 | Audio processing method, device, system, browser module and readable storage medium |
CN111479158A (en) * | 2020-04-16 | 2020-07-31 | 北京达佳互联信息技术有限公司 | Video display method and device, electronic equipment and storage medium |
CN111479158B (en) * | 2020-04-16 | 2022-06-10 | 北京达佳互联信息技术有限公司 | Video display method and device, electronic equipment and storage medium |
CN111416991A (en) * | 2020-04-28 | 2020-07-14 | Oppo(重庆)智能科技有限公司 | Special effect processing method and apparatus, and storage medium |
CN111614912A (en) * | 2020-05-26 | 2020-09-01 | 北京达佳互联信息技术有限公司 | Video generation method, device, equipment and storage medium |
CN111614912B (en) * | 2020-05-26 | 2023-10-03 | 北京达佳互联信息技术有限公司 | Video generation method, device, equipment and storage medium |
CN111710021A (en) * | 2020-05-26 | 2020-09-25 | 珠海九松科技有限公司 | Method and system for generating dynamic video based on static medical materials |
CN111787395A (en) * | 2020-05-27 | 2020-10-16 | 北京达佳互联信息技术有限公司 | Video generation method and device, electronic equipment and storage medium |
CN111831615A (en) * | 2020-05-28 | 2020-10-27 | 北京达佳互联信息技术有限公司 | Method, device and system for generating audio-video file |
CN111831615B (en) * | 2020-05-28 | 2024-03-12 | 北京达佳互联信息技术有限公司 | Method, device and system for generating video file |
CN111683280A (en) * | 2020-06-04 | 2020-09-18 | 腾讯科技(深圳)有限公司 | Video processing method and device and electronic equipment |
CN113838490B (en) * | 2020-06-24 | 2022-11-11 | 华为技术有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN113838490A (en) * | 2020-06-24 | 2021-12-24 | 华为技术有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN111951357A (en) * | 2020-08-11 | 2020-11-17 | 深圳市前海手绘科技文化有限公司 | Application method of sound material in hand-drawn animation |
CN112040271A (en) * | 2020-09-04 | 2020-12-04 | 杭州七依久科技有限公司 | Cloud intelligent editing system and method for visual programming |
CN114390354B (en) * | 2020-10-21 | 2024-05-10 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
CN114390354A (en) * | 2020-10-21 | 2022-04-22 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
CN112287168A (en) * | 2020-10-30 | 2021-01-29 | 北京有竹居网络技术有限公司 | Method and apparatus for generating video |
CN112632326A (en) * | 2020-12-24 | 2021-04-09 | 北京风平科技有限公司 | Video production method and device based on video script semantic recognition |
CN112632326B (en) * | 2020-12-24 | 2022-02-18 | 北京风平科技有限公司 | Video production method and device based on video script semantic recognition |
WO2022160743A1 (en) * | 2021-01-29 | 2022-08-04 | 稿定(厦门)科技有限公司 | Video file playing system, audio/video playing process, and storage medium |
CN113055730B (en) * | 2021-02-07 | 2023-08-18 | 深圳市欢太科技有限公司 | Video generation method, device, electronic equipment and storage medium |
CN113055730A (en) * | 2021-02-07 | 2021-06-29 | 深圳市欢太科技有限公司 | Video generation method and device, electronic equipment and storage medium |
WO2022213801A1 (en) * | 2021-04-09 | 2022-10-13 | 北京字跳网络技术有限公司 | Video processing method, apparatus, and device |
CN113810538A (en) * | 2021-09-24 | 2021-12-17 | 维沃移动通信有限公司 | Video editing method and video editing device |
CN113992940A (en) * | 2021-12-27 | 2022-01-28 | 北京美摄网络科技有限公司 | Web end character video editing method, system, electronic equipment and storage medium |
CN113986087A (en) * | 2021-12-27 | 2022-01-28 | 深圳市大头兄弟科技有限公司 | Video rendering method based on subscription |
CN113992940B (en) * | 2021-12-27 | 2022-03-29 | 北京美摄网络科技有限公司 | Web end character video editing method, system, electronic equipment and storage medium |
CN113986087B (en) * | 2021-12-27 | 2022-04-12 | 深圳市大头兄弟科技有限公司 | Video rendering method based on subscription |
CN114286164A (en) * | 2021-12-28 | 2022-04-05 | 北京思明启创科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN114286164B (en) * | 2021-12-28 | 2024-02-09 | 北京思明启创科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN114401377A (en) * | 2021-12-30 | 2022-04-26 | 杭州摸象大数据科技有限公司 | Financial marketing video generation method and device, computer equipment and storage medium |
CN114466222B (en) * | 2022-01-29 | 2023-09-26 | 北京百度网讯科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN114466222A (en) * | 2022-01-29 | 2022-05-10 | 北京百度网讯科技有限公司 | Video synthesis method and device, electronic equipment and storage medium |
CN114979054A (en) * | 2022-05-13 | 2022-08-30 | 维沃移动通信有限公司 | Video generation method and device, electronic equipment and readable storage medium |
WO2023231568A1 (en) * | 2022-05-30 | 2023-12-07 | 腾讯科技(深圳)有限公司 | Video editing method and apparatus, computer device, storage medium, and product |
CN116634058B (en) * | 2022-05-30 | 2023-12-22 | 荣耀终端有限公司 | Editing method of media resources, electronic equipment and readable storage medium |
CN116634058A (en) * | 2022-05-30 | 2023-08-22 | 荣耀终端有限公司 | Editing method of media resources and electronic equipment |
CN115134659A (en) * | 2022-06-15 | 2022-09-30 | 阿里巴巴云计算(北京)有限公司 | Video editing and configuring method and device, browser, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107770626B (en) | 2020-03-17 |
WO2019086037A1 (en) | 2019-05-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107770626A (en) | Processing method, image synthesizing method, device and the storage medium of video material | |
US11943486B2 (en) | Live video broadcast method, live broadcast device and storage medium | |
WO2022048478A1 (en) | Multimedia data processing method, multimedia data generation method, and related device | |
US10033967B2 (en) | System and method for interactive video conferencing | |
WO2020077856A1 (en) | Video photographing method and apparatus, electronic device and computer readable storage medium | |
WO2020077855A1 (en) | Video photographing method and apparatus, electronic device and computer readable storage medium | |
CN113891113B (en) | Video clip synthesis method and electronic equipment | |
CN111541930B (en) | Live broadcast picture display method and device, terminal and storage medium | |
WO2019227429A1 (en) | Method, device, apparatus, terminal, server for generating multimedia content | |
US11457176B2 (en) | System and method for providing and interacting with coordinated presentations | |
EP3024223B1 (en) | Videoconference terminal, secondary-stream data accessing method, and computer storage medium | |
WO2020220773A1 (en) | Method and apparatus for displaying picture preview information, electronic device and computer-readable storage medium | |
WO2020150693A1 (en) | Systems and methods for generating personalized videos with customized text messages | |
CN111279687A (en) | Video subtitle processing method and director system | |
US20230120437A1 (en) | Systems for generating dynamic panoramic video content | |
CN112907703A (en) | Expression package generation method and system | |
CN112839190A (en) | Method for synchronously recording or live broadcasting video of virtual image and real scene | |
CN115510347A (en) | Presentation file conversion method and device, electronic equipment and storage medium | |
KR101915792B1 (en) | System and Method for Inserting an Advertisement Using Face Recognition | |
US20200349976A1 (en) | Movies with user defined alternate endings | |
CN116126177A (en) | Data interaction control method and device, electronic equipment and storage medium | |
CN108876866B (en) | Media data processing method, device and storage medium | |
US20150371661A1 (en) | Conveying Audio Messages to Mobile Display Devices | |
CN116847147A (en) | Special effect video determining method and device, electronic equipment and storage medium | |
KR20120097785A (en) | Interactive media mapping system and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||