WO2021073315A1 - Method, apparatus, terminal, and storage medium for generating a video file - Google Patents
- Publication number
- WO2021073315A1 (PCT/CN2020/113987, CN2020113987W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio data
- target
- playback
- video
- editing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/005—Reproducing at a different information rate from the information rate of recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/437—Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
Definitions
- the embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, device, terminal, and storage medium for generating a video file.
- the embodiments of the present disclosure provide a method, device, terminal, and storage medium for generating a video file.
- an embodiment of the present disclosure provides a method for generating a video file, including:
- acquiring a resource set corresponding to the target video template including: audio data, image data, and video configuration parameters;
- an editing interface including an editing button is presented;
- the video file is synthesized to obtain the target video file.
- editing the audio data in response to the click operation on the editing button to obtain the edited audio data includes:
- the replacing the audio data in the resource set with the target audio data includes:
- the method further includes:
- editing the audio data in response to the click operation on the editing button to obtain the edited audio data includes:
- adjusting the playback parameters of the picture data based on the edited audio data includes:
- At least one of the following parameters of the picture data is adjusted: the number of pictures and the playback speed.
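The adjustment step above can be sketched in a few lines. This is a minimal, illustrative Python sketch, not the patent's actual implementation; the function and field names (e.g. `adjust_playback_params`, `seconds_per_picture`) and the fitting strategy are assumptions:

```python
from dataclasses import dataclass

@dataclass
class PlaybackParams:
    picture_count: int           # number of pictures used in the video
    seconds_per_picture: float   # playback speed: display time per picture

def adjust_playback_params(audio_duration_s: float,
                           available_pictures: int,
                           preferred_seconds_per_picture: float = 2.0) -> PlaybackParams:
    """Fit the picture timeline to the duration of the edited audio.

    If the audio is short, fewer pictures are used; the per-picture
    display time is then stretched or shrunk so the slideshow spans
    the whole track.
    """
    # How many pictures fit at the preferred pace, capped by what we have.
    count = min(available_pictures,
                max(1, int(audio_duration_s / preferred_seconds_per_picture)))
    # Scale per-picture time so the pictures exactly cover the audio.
    return PlaybackParams(picture_count=count,
                          seconds_per_picture=audio_duration_s / count)
```

For a 16-second track and 8 available pictures, this yields 8 pictures at 2 seconds each; for a 3-second track it falls back to a single picture shown for the full duration.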
- performing video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file includes:
- video encoding is performed to obtain the target video file.
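Before encoding, the adjusted playback parameters have to be expanded into the frames the encoder will consume. The sketch below is an assumption about one plausible intermediate representation (a per-frame schedule of picture IDs and presentation timestamps); the name `build_frame_schedule` and the tuple layout are illustrative, not taken from the patent:

```python
def build_frame_schedule(picture_ids, seconds_per_picture, fps=30):
    """Expand adjusted playback parameters into a per-frame schedule:
    a list of (picture_id, presentation_time_seconds) pairs that a
    video encoder could consume in order."""
    schedule = []
    frames_per_picture = round(seconds_per_picture * fps)
    t = 0.0
    for pid in picture_ids:
        for _ in range(frames_per_picture):
            schedule.append((pid, round(t, 4)))
            t += 1.0 / fps
    return schedule
```

The resulting schedule, together with the edited audio track, would then be handed to the platform's video encoder to produce the target video file.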
- an apparatus for generating a video file including:
- the first presentation unit is configured to receive an editing instruction of the video and present at least one video template corresponding to the video;
- An obtaining unit configured to obtain a resource set corresponding to the target video template in response to a selection instruction for the target video template, the resource set including: audio data, image data, and video configuration parameters;
- the second presentation unit is configured to present an editing interface including editing buttons when it is determined that the audio data is editable audio data based on the video configuration parameters;
- the editing unit is configured to edit the audio data in response to the click operation on the editing button to obtain the edited audio data
- An adjustment unit configured to adjust the playback parameters of the picture data based on the edited audio data
- the synthesis unit is configured to synthesize the video file based on the edited audio data and the adjusted playback parameters to obtain the target video file.
- the editing unit is further configured to present multiple audio icons on the editing interface in response to a click operation on the editing button;
- the editing unit is further configured to obtain a playback time axis of the audio data in the resource set, where the playback time axis at least indicates the start time and the end time of audio playback;
- the device further includes a cutting unit
- the cutting unit is configured to present a sound spectrum line corresponding to the target audio data in response to a click operation on the displayed cutting button;
- the editing unit is further configured to present a volume adjustment axis for adjusting the playback volume of the audio data in response to a click operation on the editing button;
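Applying the position of such a volume adjustment axis to the audio can be sketched as a simple gain applied to PCM samples. This is a minimal illustration under the assumption of 16-bit signed samples and a 0-100 slider range; the function name `apply_volume` is not from the patent:

```python
def apply_volume(samples, volume_percent):
    """Scale 16-bit PCM samples by the slider position
    (0-100 mapped to a gain of 0.0-1.0), clamping to the
    valid signed 16-bit range."""
    gain = max(0, min(100, volume_percent)) / 100.0
    return [max(-32768, min(32767, int(s * gain))) for s in samples]
```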
- the adjustment unit is further configured to obtain a picture presentation mode corresponding to the target video template
- At least one of the following parameters of the picture data is adjusted: the number of pictures and the playback speed.
- the synthesizing unit is further configured to obtain a picture for synthesizing the target video file based on the adjusted playback parameter
- video encoding is performed to obtain the target video file.
- a terminal including:
- a memory configured to store executable instructions;
- the processor is configured to implement the method for generating a video file provided in the embodiment of the present disclosure when the executable instruction is executed.
- the embodiments of the present disclosure provide a non-transitory storage medium that stores executable instructions, and when the executable instructions are executed, they are used to implement the method for generating a video file provided by the embodiments of the present disclosure.
- the edited audio data is obtained, the playback parameters of the picture data in the resource set are adjusted based on the edited audio data, and video file synthesis is performed based on the edited audio data and the adjusted playback parameters to obtain the target video file; in this way, by changing or switching the audio data, the timeline of the video file becomes more flexible, which realizes variation of the resource template and improves operability for the user.
- FIG. 1 is a schematic diagram of the architecture of a video file generation system provided by an embodiment of the disclosure
- FIG. 2 is a schematic diagram of a terminal structure provided by an embodiment of the disclosure.
- FIG. 3 is a schematic flowchart of a method for generating a video file provided by an embodiment of the disclosure
- FIGS. 4A-4C are schematic diagrams of editing interfaces provided by embodiments of the present disclosure.
- FIG. 5 is a schematic diagram of an import interface of a custom picture provided by an embodiment of the disclosure.
- FIGS. 6A-6G are schematic diagrams of editing interfaces provided by embodiments of the disclosure.
- FIG. 7 is a schematic flowchart of a method for generating a video file provided by an embodiment of the disclosure.
- FIG. 8 is a schematic diagram of the composition structure of a device for generating a video file provided by an embodiment of the disclosure.
- the devices provided by the embodiments of the present disclosure can be implemented as various types of user terminals such as smart phones, tablet computers, notebook computers, etc., or can be implemented by a terminal and a server in cooperation. In the following, exemplary applications of the device will be explained.
- the terminal is configured to receive the video editing instruction and present at least one video template corresponding to the video; in response to a selection instruction for a target video template, obtain the resource set corresponding to the target video template, where the resource set includes audio data, picture data, and video configuration parameters; when the audio data is determined, based on the video configuration parameters, to be editable audio data, present an editing interface containing an editing button; in response to a click operation on the editing button, edit the audio data to obtain edited audio data; adjust the playback parameters of the picture data based on the edited audio data; and synthesize the video file based on the edited audio data and the adjusted playback parameters to obtain the target video file.
- In this way, the editing of the audio data, the adjustment of the picture playback parameters, and the synthesis of the video file are all performed on the terminal side in real time, which improves the efficiency of switching audio data and enhances the user experience.
- FIG. 1 is a schematic diagram of the architecture of a video file generation system 100 provided by an embodiment of the present disclosure.
- the terminals 200 (including the terminal 200-1 and the terminal 200-2) are connected to the server 400 through the network 300.
- the network 300 may be a wide area network or a local area network, or a combination of the two, and uses wireless links for data transmission.
- the terminal 200 is configured to receive the video editing instruction and present at least one video template corresponding to the video; and, in response to a selection instruction for a target video template, generate a selection request for the target video template and send it to the server 400;
- the server 400 is configured to obtain, based on the selection request, the resource set corresponding to the target video template, where the resource set includes audio data, picture data, and video configuration parameters; when the audio data is determined, based on the video configuration parameters, to be editable audio data, the server sends the corresponding data to the terminal 200;
- the terminal 200 presents an editing interface including an editing button; in response to a click operation on the editing button, an editing request is sent to the server 400;
- the server 400 is configured to edit the audio data to obtain edited audio data; adjust the playback parameters of the picture data based on the edited audio data; perform video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file; and return the obtained target video file to the terminal 200 so that the terminal 200 can play it. In this way, the editing of the audio data, the adjustment of the picture playback parameters, and the synthesis of the video file are completed by the server, which reduces the data processing pressure on the terminal side and suits situations where the audio data of the switched target video template is large.
- FIG. 2 is a schematic structural diagram of a terminal 200 according to an embodiment of the disclosure.
- the terminal may be any of various mobile terminals, such as mobile phones, laptops, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (for example, car navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
- the terminal shown in FIG. 2 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
- the terminal 200 may include a processing device 210 (such as a central processing unit or a graphics processor), which executes various appropriate actions and processing based on a program stored in a read-only memory (ROM) 220 or a program loaded from a storage device 280 into a random access memory (RAM) 230.
- the RAM 230 also stores various programs and data required for the operation of the terminal.
- the processing device 210, the ROM 220, and the RAM 230 are connected to each other through a bus 240.
- An input/output (I/O, Input/Output) interface 250 is also connected to the bus 240.
- the following devices can be connected to the I/O interface 250: input devices 260 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, and gyroscopes; output devices 270 such as liquid crystal displays (LCDs), speakers, and vibrators; storage devices 280 such as magnetic tapes and hard disks; and a communication device 290.
- the communication device 290 may allow the terminal to perform wireless or wired communication with other devices to exchange data.
- Although FIG. 2 shows various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program includes program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 290, or installed from the storage device 280, or installed from the ROM 220.
- when the computer program is executed by the processing device 210, the above-mentioned functions defined in the method for generating a video file of the embodiments of the present disclosure are executed.
- the above-mentioned computer-readable medium in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with the instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by any suitable medium, including wire, optical cable, radio frequency (RF, Radio Frequency), etc., or any suitable combination of the foregoing.
- the above-mentioned computer-readable medium may be included in the above-mentioned terminal 200; or it may exist alone without being assembled into the terminal 200.
- the foregoing computer-readable medium carries one or more programs, and when the one or more programs are executed by the terminal 200, they cause the terminal 200 to execute the method for generating a video file provided by the embodiments of the present disclosure.
- the computer program code used to perform the operations in the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof.
- the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- the units and/or modules involved in the described embodiments of the present disclosure may be implemented in software or hardware.
- the units and/or modules that implement the terminal of the embodiments of the present disclosure can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic components, and are used to implement the method for generating a video file provided by the embodiments of the present disclosure.
- FIG. 3 is a schematic flowchart of a method for generating a video file provided by an embodiment of the present disclosure.
- the method for generating a video file according to an embodiment of the present disclosure includes:
- Step 301 The terminal receives the video editing instruction, and presents at least one video template corresponding to the video.
- the user can implement social interaction by loading prop resources on the client.
- the prop resources include at least one of the following: video props, audio props, and user interface (UI) animation props;
- video props may include video templates, video covers, and text associated with the video, such as titles and video tags;
- audio props may be background music;
- UI animations may be interfaces for network interaction.
- the user can trigger the corresponding editing instruction to the terminal by clicking the editing button for the video on the client.
- when the terminal receives the editing instruction triggered by the user, it correspondingly presents multiple video templates corresponding to the video.
- FIGS. 4A-4C are schematic diagrams of the editing interface provided by the embodiments of the present disclosure.
- the short video client presents the interface shown in FIG. 4A;
- when the user clicks the edit button "+" in FIG. 4A, the short video client shows the interface shown in FIG. 4B;
- when the user clicks the "Album" button in FIG. 4B, the corresponding editing instruction is triggered;
- the video client receives this editing instruction and presents 16 video templates, such as "Retro Magazine", "Full Moon Mid-Autumn Festival", and "Exclusive Building", as shown in FIG. 4C.
- Step 302 In response to a selection instruction for the target video template, obtain a resource set corresponding to the target video template, where the resource set includes audio data, image data, and video configuration parameters.
- the terminal presents the corresponding target video template and obtains the resource set corresponding to the target video template.
- for example, when the terminal receives the selection instruction triggered by the user's touch operation on the selection button of the "Exclusive Building" target video template, it loads the resource set corresponding to the "Exclusive Building" target video template.
- Step 303 When it is determined that the audio data is editable audio data based on the video configuration parameters, an editing interface including an editing button is presented.
- FIG 5 is a schematic diagram of the import interface of the custom picture provided by the embodiment of the present disclosure.
- the target video template selected by the user is "Exclusive Building";
- the "Exclusive Building" target video template is best presented with 8 pictures, so the user can import up to 8 pictures into the "Exclusive Building" target video template.
- the video template is matched in the background based on the motion vector (MV, Motion Vector) algorithm.
- the video configuration parameters corresponding to the video template have a flag indicating whether the time axis can be dynamically changed for the audio data.
- the terminal presents a corresponding editing interface containing editing buttons.
- FIG. 6A is a schematic diagram of an editing interface provided by an embodiment of the present disclosure. As shown in FIG. 6A, multiple editing buttons are presented on the editing interface, such as selecting the soundtrack, special effects, text, and stickers; clicking different buttons triggers different editing methods.
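The flag check that gates step 303 might be sketched as follows; the flag name `audio_timeline_editable` is a placeholder, since the patent does not name the flag bit:

```python
def should_offer_audio_editing(video_config: dict) -> bool:
    """Return True when the template's configuration marks the audio
    time axis as dynamically changeable (flag name is an assumption)."""
    return bool(video_config.get("audio_timeline_editable", False))
```

When this returns True, the client would render the editing buttons of FIG. 6A; otherwise it would present the template without them.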
- Step 304 In response to the click operation on the editing button, edit the audio data to obtain the edited audio data.
- the terminal may edit the audio data in the following manner to obtain the edited audio data:
- multiple audio icons are presented on the editing interface; in response to the selection instruction for the target audio icon, the target audio data corresponding to the target audio icon is obtained; the audio data in the resource collection is replaced with the target audio data.
- Specifically, the editing interface on the terminal presents multiple candidate audio icons; when the user selects one of them, the audio data in the resource set is switched to the target audio data corresponding to the selected icon, and the target audio data is played.
- For example, FIG. 6B is a schematic diagram of an editing interface provided by an embodiment of the present disclosure. The editing interface on the terminal displays the candidate audio data in two columns, "Recommended" and "Favorites"; the "Recommended" column presents multiple audio icons such as "123 I Love You", "Asian Power" and "Drunk Chibi", as well as a "Search" entry for more music.
- When the terminal receives a selection instruction triggered by the user's click operation on the audio icon corresponding to "123 I Love You", it obtains the target audio data of "123 I Love You" and replaces the audio data in the resource set of the target video template with it, so that "123 I Love You" is played; in this way, the background music in the resource template is replaced based on the user's choice, satisfying the user's individual needs.
- In practical applications, the duration of the target audio data selected by the user may differ from the duration of the audio data in the resource set of the target video template, and the user may not have cut the target audio data.
- In some embodiments, the terminal can replace the audio data in the resource set with the target audio data in the following manner:
- obtain the playback timeline of the audio data in the resource set, where the playback timeline indicates at least the start time and end time of audio playback; adjust the playback timeline of the target audio data based on that playback timeline; and replace the audio data in the resource set with the target audio data whose playback timeline has been adjusted.
- Here, the playback timeline of the audio data in the resource set indicates the start time and end time of that audio data within the target video template.
- For example, if the playback timeline of the audio data in the resource set indicates playback from the 10th second of the audio data to the 30th second, then when the audio data in the resource set is replaced with the target audio data, the target audio data will likewise be played from its 10th second to its 30th second; in this way, the intro of the audio data can be skipped and the climax played directly, achieving a better playback effect.
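Reusing the resource set's playback timeline for a replacement track, as described above, can be sketched as a clamp on the playback window; this is a minimal illustration under assumed names, not the actual implementation:

```python
def apply_playback_window(window, target_duration_s):
    """Reuse the resource set's playback window (start_s, end_s) for a
    replacement track, clamping to the new track's duration."""
    start_s, end_s = window
    start_s = min(start_s, target_duration_s)
    end_s = min(end_s, target_duration_s)
    return start_s, end_s

# The template plays seconds 10-30 of whatever track it carries.
full = apply_playback_window((10, 30), 60)     # a 60 s replacement keeps the window
clamped = apply_playback_window((10, 30), 25)  # a 25 s replacement is clamped at its end
```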
- In practical applications, the user can also cut the duration of the target audio data in a targeted manner.
- In some embodiments, the terminal can implement the cutting of the audio in the following manner:
- the sound spectrum line corresponding to the target audio data is presented; in response to a drag operation on the sound spectrum line, the playback start time and/or playback end time of the target audio data is determined; and the target audio data is cut based on the determined playback start time and/or playback end time.
- Specifically, the terminal can determine the playback start time of the target audio data based on the cutting instruction triggered by the user's drag operation on the sound spectrum line of the target audio data, and determine the timeline of the target audio data based on the duration of the audio data in the resource set; for example, see FIGS. 6C-6D, which are schematic diagrams of the editing interface provided by embodiments of the present disclosure.
- In FIG. 6C, the user clicks the cut button, and the editing interface of the terminal then presents the sound spectrum line shown in FIG. 6D; the user drags the sound spectrum line of the target audio data to the 10th second.
- The target audio data will then be played starting from its 10th second.
- For example, the target audio data from its 10th to its 30th second can be played, or looped.
- In some embodiments, the terminal can determine both the playback start time and the playback end time of the target audio data based on the cutting instruction triggered by the user's drag operation on the sound spectrum line, and cut out the audio data between the playback start time and the playback end time; for example, see FIG. 6E, a schematic diagram of an editing interface provided by an embodiment of the present disclosure.
- In FIG. 6E, the user drags the sound spectrum line of the target audio data from the 10th second to the 25th second, cutting out the target audio data between the 10th and 25th second so that the cut target audio data is played; in this way, both the switching of the audio data in the target video template and the adjustment of its playback duration are realized.
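The cutting behavior described above can be sketched as a slice over audio samples; the sample representation, sample rate, and function name are assumptions made for illustration:

```python
def cut_audio(samples, sample_rate, start_s, end_s=None):
    """Keep only the samples between start_s and end_s; a missing end
    time means 'to the end of the track'."""
    i = int(start_s * sample_rate)
    j = len(samples) if end_s is None else int(end_s * sample_rate)
    return samples[i:j]

track = list(range(30))               # 30 s of dummy 1 Hz samples
clip = cut_audio(track, 1, 10, 25)    # the 10th-25th second, as in FIG. 6E
tail = cut_audio(track, 1, 10)        # start-only drag, as in FIG. 6D
```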
- the terminal may also edit the audio data in the following manner to obtain the edited audio data:
- a volume adjustment axis for adjusting the playback volume of the audio data is presented; in response to a drag operation on an adjustment node in the volume adjustment axis, the volume values of the audio data at different playback nodes are adjusted; and the audio data in the resource set is replaced with the audio data whose volume values have been adjusted.
- Specifically, the editing interface on the terminal presents a volume adjustment axis for adjusting the playback volume of the audio data.
- The terminal adjusts the volume values of the audio data at different playback nodes according to how the user drags the adjustment nodes on the volume adjustment axis; see FIG. 6F, a schematic diagram of the editing interface provided by an embodiment of the present disclosure.
- In FIG. 6F, the audio data is divided into three segments based on the playback nodes: the first segment is played at a volume of 20 decibels, the second at 60 decibels, and the third at 80 decibels; thus, based on user needs, different segments of the audio data are played at different volume values, giving users a brand-new listening experience.
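The per-segment volumes of FIG. 6F can be sketched as a lookup over (playback node, volume) pairs; this is an illustrative model only, not the player's actual data structure:

```python
def volume_at(play_time_s, volume_nodes):
    """volume_nodes: list of (segment_start_s, volume_db) sorted by start.
    Returns the volume value that applies at play_time_s."""
    current = volume_nodes[0][1]
    for start_s, db in volume_nodes:
        if play_time_s >= start_s:
            current = db
    return current

# The three segments of FIG. 6F: 20 dB, then 60 dB, then 80 dB.
nodes = [(0, 20), (10, 60), (20, 80)]
```

A player would query `volume_at` at each playback position and scale the output gain accordingly.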
- In practical applications, the volume of the target audio data to be switched in can also be adjusted.
- In some embodiments, the terminal adjusts the playback volume of the target audio data based on the volume adjustment instruction triggered by the user dragging the adjustment node on the volume adjustment axis; see FIG. 6G, a schematic diagram of an editing interface provided by an embodiment of the present disclosure.
- For example, suppose the playback volume of the audio data in the resource set is 50 decibels, and the user drags the adjustment node on the volume adjustment axis of the target audio data (that is, the selected soundtrack) to set its volume to 40 decibels.
- The terminal obtains the target audio data with the adjusted volume and, accordingly, replaces the audio data in the resource set with the volume-adjusted target audio data.
- Step 305 Adjust the playback parameters of the picture data based on the edited audio data.
- In some embodiments, the terminal can adjust the playback parameters of the picture data in the following manner:
- The configuration parameters of the resource set also specify how the pictures are presented, such as the number of pictures that gives the best effect, the switching speed of the background, and the presentation mode (for example, entering from the left or entering with a rotation).
- When the audio data in the resource set is edited, the terminal also adjusts parameters such as the number of pictures or the playback speed according to the edited audio data, so that the edited audio data better fits the target video template.
- For example, suppose the audio data in the target video template lasts 20 seconds and the best effect is achieved by importing 8 photos; if, after editing, the audio data is cut to 15 seconds, the number of imported pictures can be reduced or the picture playback speed increased to preserve a good playback effect.
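The trade-off just described, fewer pictures versus faster playback, can be sketched as follows, assuming 8 photos at 2.5 seconds each originally fill the 20-second track; the function name and return shape are illustrative:

```python
def fit_pictures(picture_count, per_picture_s, audio_s):
    """Two ways to fit the slideshow to the edited audio duration:
    show fewer pictures at the original speed, or keep all pictures
    and show each one for less time.
    Returns (reduced_picture_count, seconds_per_picture_if_all_kept)."""
    reduced_count = int(audio_s // per_picture_s)
    faster_per_picture_s = audio_s / picture_count
    return reduced_count, faster_per_picture_s

# Audio cut from 20 s to 15 s: either 6 pictures at 2.5 s,
# or all 8 pictures at 1.875 s each.
options = fit_pictures(8, 2.5, 15)
```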
- Step 306 Perform video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file.
- the terminal can obtain the target video file in the following manner:
- based on the adjusted playback parameters, the pictures used to synthesize the target video file are obtained; the picture presentation mode corresponding to the target video template is obtained; and the target video file is obtained by video encoding based on the edited audio data, the obtained pictures, and the picture presentation mode.
- Through the above steps, the edited audio data is obtained, the playback parameters of the picture data in the resource set are adjusted based on it, and video file synthesis is performed based on the edited audio data and the adjusted playback parameters to obtain the target video file; in this way, changing or switching the audio data changes the resource template, which improves the user's operability and meets the user's personalized needs.
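One hedged sketch of the synthesis step is to expand the adjusted picture parameters into a per-frame schedule that an encoder would pair with the edited audio track; the frame rate and function name are assumptions, and a real implementation would hand the schedule to a video encoder:

```python
def build_frame_schedule(images, per_picture_s, fps=25):
    """Expand the picture list into per-frame entries that a video
    encoder could consume alongside the edited audio track."""
    frames_per_picture = int(per_picture_s * fps)
    schedule = []
    for image in images:
        schedule.extend([image] * frames_per_picture)
    return schedule

schedule = build_frame_schedule(["a.jpg", "b.jpg"], 2.0, fps=25)
```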
- FIG. 7 is a schematic flowchart of a method for generating a video file provided by an embodiment of the present disclosure.
- The method for generating a video file can be implemented through the coordination of a client provided on a terminal and a server.
- The method for generating a video file in this embodiment of the present disclosure includes:
- Step 701 In response to the video editing instruction, the client presents at least one video template corresponding to the video.
- The terminal is provided with clients, such as an instant messaging client, a microblog client, or a short video client.
- Users can perform social interaction by loading prop resources on the client.
- The user can trigger the corresponding editing instruction on the terminal by clicking the editing button of the video on the client.
- When the terminal receives the editing instruction triggered by the user, it correspondingly presents multiple video templates corresponding to the video.
- Step 702 The client generates a selection request for the target video template in response to the selection instruction for the target video template.
- the client receives the selection instruction triggered by the user's touch operation on the selection button of the target video template, and generates a corresponding selection request.
- Step 703 The client sends the generated selection request for the target video template to the server.
- Step 704 The server obtains the resource set corresponding to the target video template based on the selection request.
- the resource set includes: audio data, picture data, and video configuration parameters.
- Step 705 When the server determines that the audio data is editable audio data based on the video configuration parameters, it generates a corresponding editing instruction.
- the video template is matched by the server based on the MV algorithm.
- the video configuration parameters corresponding to the video template carry a flag bit indicating whether the time axis of the audio data can be dynamically changed.
- That is, the flag bit indicates whether the time axis of the audio data can be edited.
- Step 706 The server sends the corresponding editing instruction to the client.
- Step 707 Based on the editing instruction, the client presents an editing interface including editing buttons.
- the client presents the corresponding editing interface based on the editing instruction sent by the server.
- Step 708 In response to the click operation on the editing button, the client presents multiple audio icons on the editing interface.
- Step 709 The client obtains target audio data corresponding to the target audio icon in response to the selection instruction for the target audio icon.
- Step 710 In response to the click operation on the presented cut button, the client presents sound spectrum lines corresponding to the target audio data.
- Step 711 The client determines the playback start time of the target audio data in response to the drag operation on the sound spectrum line.
- Step 712 The client cuts the target audio data based on the determined playback start time and the duration of the audio data in the resource set to obtain cut target audio data.
- That is, the target audio data is acquired, the audio data in the target video template is switched to it, and the target audio data is cut.
- Step 713 In response to the click operation on the edit button, the client presents a volume adjustment axis for adjusting the playback volume of the cut target audio data.
- the volume of the target audio data obtained after cutting is adjusted.
- Step 714 The client adjusts the volume value of the cut target audio data at different playback nodes in response to the drag operation on the adjustment node in the volume adjustment axis.
- the volume of different segments of the cut target audio data is adjusted, so as to play different segments of the target audio data with different volume values.
- Step 715 The client uses the target audio data after adjusting the volume value as the edited audio data.
- That is, the audio data in the target video template is switched, the time axis and volume of the switched target audio data are adjusted to obtain the edited audio data, and the edited audio data replaces the audio data in the resource set.
- Step 716 The client sends the edited audio data to the server.
- Step 717 The server adjusts the playback parameters of the picture data based on the edited audio data.
- the server obtains the picture presentation mode corresponding to the target video template; based on the picture presentation mode and the edited audio data, adjusts at least one of the following parameters of the picture data: the number of pictures and the playback speed.
- Step 718 The server performs video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file.
- Step 719 The server sends the target video file to the client.
- Step 720 The client plays the target video file.
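The client/server exchange of steps 701-720 can be condensed into an in-process sketch; every function name and payload shape here is an assumption standing in for the unspecified network protocol:

```python
def server_obtain_resource_set(selection_request):
    # Steps 703-704: resolve the selection request into the template's resources.
    return {"audio": "template.mp3",
            "pictures": ["p1.jpg", "p2.jpg"],
            "config": {"audio_editable": True}}

def server_synthesize(edited_audio, playback_params):
    # Step 718: combine the edited audio and adjusted playback parameters
    # into the target video (represented here as a plain tuple).
    return {"video": (edited_audio, playback_params["picture_count"])}

def client_generate_video(template_name, edited_audio):
    resources = server_obtain_resource_set({"template": template_name})
    if not resources["config"]["audio_editable"]:        # step 705 gate
        raise ValueError("template audio is not editable")
    playback_params = {"picture_count": len(resources["pictures"])}  # step 717
    return server_synthesize(edited_audio, playback_params)          # steps 718-719

result = client_generate_video("Exclusive Mansion", "cut_track.mp3")
```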
- FIG. 8 is a schematic diagram of the composition structure of a video file generating apparatus provided by an embodiment of the present disclosure.
- the video file generating apparatus 80 provided by an embodiment of the present disclosure includes:
- the first presentation unit 81 is configured to receive an editing instruction of the video, and present at least one video template corresponding to the video;
- the obtaining unit 82 is configured to obtain a resource set corresponding to the target video template in response to a selection instruction for the target video template, the resource set including: audio data, picture data, and video configuration parameters;
- the second presentation unit 83 is configured to present an editing interface including editing buttons when it is determined that the audio data is editable audio data based on the video configuration parameters;
- the editing unit 84 is configured to edit the audio data in response to the click operation on the editing button to obtain the edited audio data;
- the adjustment unit 85 is configured to adjust the playback parameters of the picture data based on the edited audio data
- the synthesis unit 86 is configured to perform video file synthesis based on the edited audio data and adjusted playback parameters to obtain a target video file.
- the editing unit is further configured to present multiple audio icons on the editing interface in response to a click operation on the editing button;
- the editing unit is further configured to obtain a play time axis of audio data in the resource set, where the play time axis at least indicates a start time and an end time of audio play;
- the device further includes a cutting unit;
- the cutting unit is configured to present a sound spectrum line corresponding to the target audio data in response to a click operation on the displayed cutting button;
- the editing unit is further configured to present a volume adjustment axis for adjusting the playback volume of the audio data in response to a click operation on the editing button;
- the adjustment unit is further configured to obtain a picture presentation mode corresponding to the target video template
- At least one of the following parameters of the picture data is adjusted: the number of pictures and the playback speed.
- the synthesis unit is further configured to obtain, based on the adjusted playback parameters, the pictures used for synthesizing the target video file; and
- perform video encoding to obtain the target video file.
- the embodiment of the present disclosure provides a terminal, including:
- a memory configured to store executable instructions;
- a processor configured to implement the method for generating a video file provided in the embodiments of the present disclosure when executing the executable instructions.
- the embodiments of the present disclosure provide a non-transitory storage medium that stores executable instructions, and when the executable instructions are executed, they are used to implement the method for generating a video file provided in the embodiments of the present disclosure.
- an embodiment of the present disclosure provides a method for generating a video file, including:
- acquiring a resource set corresponding to the target video template, the resource set including: audio data, picture data, and video configuration parameters;
- an editing interface including an editing button is presented;
- the video file is synthesized to obtain the target video file.
- wherein editing the audio data in response to a click operation on the editing button to obtain the edited audio data includes:
- the replacing the audio data in the resource set with the target audio data includes:
- the method further includes:
- wherein editing the audio data in response to a click operation on the editing button to obtain the edited audio data includes:
- the adjusting the playback parameters of the picture data based on the edited audio data includes:
- At least one of the following parameters of the picture data is adjusted: the number of pictures and the playback speed.
- wherein performing video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file includes:
- video encoding is performed to obtain the target video file.
- the embodiment of the present disclosure provides a device for generating a video file, including:
- the first presentation unit is configured to receive an editing instruction of the video and present at least one video template corresponding to the video;
- An obtaining unit configured to obtain a resource set corresponding to the target video template in response to a selection instruction for the target video template, the resource set including: audio data, image data, and video configuration parameters;
- the second presentation unit is configured to present an editing interface including editing buttons when it is determined that the audio data is editable audio data based on the video configuration parameters;
- the editing unit is configured to edit the audio data in response to the click operation on the editing button to obtain the edited audio data
- An adjustment unit configured to adjust the playback parameters of the picture data based on the edited audio data
- the synthesis unit is used to synthesize the video file based on the edited audio data and the adjusted playback parameters to obtain the target video file.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
Claims (16)
- A method for generating a video file, characterized in that the method comprises: receiving an editing instruction for a video, and presenting at least one video template corresponding to the video; in response to a selection instruction for a target video template, obtaining a resource set corresponding to the target video template, the resource set comprising: audio data, picture data, and video configuration parameters; when it is determined, based on the video configuration parameters, that the audio data is editable audio data, presenting an editing interface containing an editing button; in response to a click operation on the editing button, editing the audio data to obtain edited audio data; adjusting playback parameters of the picture data based on the edited audio data; and performing video file synthesis based on the edited audio data and the adjusted playback parameters to obtain a target video file.
- The method according to claim 1, wherein editing the audio data in response to the click operation on the editing button to obtain the edited audio data comprises: in response to the click operation on the editing button, presenting a plurality of audio icons on the editing interface; in response to a selection instruction for a target audio icon, obtaining target audio data corresponding to the target audio icon; and replacing the audio data in the resource set with the target audio data.
- The method according to claim 2, wherein replacing the audio data in the resource set with the target audio data comprises: obtaining a playback timeline of the audio data in the resource set, the playback timeline indicating at least a start time and an end time of audio playback; adjusting a playback timeline of the target audio data based on the playback timeline; and replacing the audio data in the resource set with the target audio data whose playback timeline has been adjusted.
- The method according to claim 2, further comprising: in response to a click operation on a presented cut button, presenting a sound spectrum line corresponding to the target audio data; in response to a drag operation on the sound spectrum line, determining a playback start time and/or a playback end time of the target audio data; and cutting the target audio data based on the determined playback start time and/or playback end time.
- The method according to claim 1, wherein editing the audio data in response to the click operation on the editing button to obtain the edited audio data comprises: in response to the click operation on the editing button, presenting a volume adjustment axis for adjusting a playback volume of the audio data; in response to a drag operation on an adjustment node in the volume adjustment axis, adjusting volume values of the audio data at different playback nodes; and replacing the audio data in the resource set with the audio data whose volume values have been adjusted.
- The method according to claim 1, wherein adjusting the playback parameters of the picture data based on the edited audio data comprises: obtaining a picture presentation mode corresponding to the target video template; and adjusting, based on the picture presentation mode and the edited audio data, at least one of the following parameters of the picture data: a number of pictures and a playback speed.
- The method according to claim 1, wherein performing video file synthesis based on the edited audio data and the adjusted playback parameters to obtain the target video file comprises: obtaining, based on the adjusted playback parameters, pictures for synthesizing the target video file; obtaining a picture presentation mode corresponding to the target video template; and performing video encoding based on the edited audio data, the obtained pictures, and the picture presentation mode to obtain the target video file.
- A device for generating a video file, characterized in that the device comprises: a first presentation unit configured to receive an editing instruction for a video and present at least one video template corresponding to the video; an obtaining unit configured to obtain, in response to a selection instruction for a target video template, a resource set corresponding to the target video template, the resource set comprising: audio data, picture data, and video configuration parameters; a second presentation unit configured to present an editing interface containing an editing button when it is determined, based on the video configuration parameters, that the audio data is editable audio data; an editing unit configured to edit the audio data in response to a click operation on the editing button to obtain edited audio data; an adjustment unit configured to adjust playback parameters of the picture data based on the edited audio data; and a synthesis unit configured to perform video file synthesis based on the edited audio data and the adjusted playback parameters to obtain a target video file.
- The device according to claim 8, wherein the editing unit is further configured to: in response to the click operation on the editing button, present a plurality of audio icons on the editing interface; in response to a selection instruction for a target audio icon, obtain target audio data corresponding to the target audio icon; and replace the audio data in the resource set with the target audio data.
- The device according to claim 9, wherein the editing unit is further configured to: obtain a playback timeline of the audio data in the resource set, the playback timeline indicating at least a start time and an end time of audio playback; adjust a playback timeline of the target audio data based on the playback timeline; and replace the audio data in the resource set with the target audio data whose playback timeline has been adjusted.
- The device according to claim 9, further comprising a cutting unit configured to: in response to a click operation on a presented cut button, present a sound spectrum line corresponding to the target audio data; in response to a drag operation on the sound spectrum line, determine a playback start time and/or a playback end time of the target audio data; and cut the target audio data based on the determined playback start time and/or playback end time.
- The device according to claim 8, wherein the editing unit is further configured to: in response to the click operation on the editing button, present a volume adjustment axis for adjusting a playback volume of the audio data; in response to a drag operation on an adjustment node in the volume adjustment axis, adjust volume values of the audio data at different playback nodes; and replace the audio data in the resource set with the audio data whose volume values have been adjusted.
- The device according to claim 8, wherein the adjustment unit is further configured to: obtain a picture presentation mode corresponding to the target video template; and adjust, based on the picture presentation mode and the edited audio data, at least one of the following parameters of the picture data: a number of pictures and a playback speed.
- The device according to claim 8, wherein the synthesis unit is further configured to: obtain, based on the adjusted playback parameters, pictures for synthesizing the target video file; obtain a picture presentation mode corresponding to the target video template; and perform video encoding based on the edited audio data, the obtained pictures, and the picture presentation mode to obtain the target video file.
- A terminal, characterized in that the terminal comprises: a memory configured to store executable instructions; and a processor configured to implement, when executing the executable instructions, the method for generating a video file according to any one of claims 1 to 7.
- A non-transitory storage medium, characterized in that it stores executable instructions which, when executed, implement the method for generating a video file according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022522402A JP7387891B2 (ja) | 2019-10-14 | 2020-09-08 | 動画ファイルの生成方法、装置、端末及び記憶媒体 |
US17/719,237 US11621022B2 (en) | 2019-10-14 | 2022-04-12 | Video file generation method and device, terminal and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910975347.0A CN112738623B (zh) | 2019-10-14 | 2019-10-14 | 视频文件的生成方法、装置、终端及存储介质 |
CN201910975347.0 | 2019-10-14 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/719,237 Continuation US11621022B2 (en) | 2019-10-14 | 2022-04-12 | Video file generation method and device, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021073315A1 true WO2021073315A1 (zh) | 2021-04-22 |
Family
ID=75537694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/113987 WO2021073315A1 (zh) | 2019-10-14 | 2020-09-08 | 视频文件的生成方法、装置、终端及存储介质 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11621022B2 (zh) |
JP (1) | JP7387891B2 (zh) |
CN (1) | CN112738623B (zh) |
WO (1) | WO2021073315A1 (zh) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113590247A (zh) * | 2021-07-21 | 2021-11-02 | 阿里巴巴达摩院(杭州)科技有限公司 | 文本创作方法及计算机程序产品 |
CN113891113A (zh) * | 2021-09-29 | 2022-01-04 | 阿里巴巴(中国)有限公司 | 视频剪辑合成方法及电子设备 |
CN114286181A (zh) * | 2021-10-25 | 2022-04-05 | 腾讯科技(深圳)有限公司 | 一种视频优化方法、装置、电子设备和存储介质 |
CN114286164A (zh) * | 2021-12-28 | 2022-04-05 | 北京思明启创科技有限公司 | 一种视频合成的方法、装置、电子设备及存储介质 |
CN114666637A (zh) * | 2022-03-10 | 2022-06-24 | 阿里巴巴(中国)有限公司 | 视频剪辑方法、音频剪辑方法及电子设备 |
WO2022245747A1 (en) * | 2021-05-19 | 2022-11-24 | Snap Inc. | Shortcuts from scan operation within messaging system |
WO2023104078A1 (zh) * | 2021-12-09 | 2023-06-15 | 北京字跳网络技术有限公司 | 一种视频编辑模板的生成方法、装置、设备及存储介质 |
US11831592B2 (en) | 2021-05-19 | 2023-11-28 | Snap Inc. | Combining individual functions into shortcuts within a messaging system |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112035041B (zh) * | 2020-08-31 | 2022-05-31 | 北京字节跳动网络技术有限公司 | 一种图像处理方法、装置、电子设备和存储介质 |
CN113507640B (zh) * | 2021-07-12 | 2023-08-18 | 北京有竹居网络技术有限公司 | 录屏视频分享方法、装置、电子设备及存储介质 |
CN114125552A (zh) * | 2021-11-30 | 2022-03-01 | 完美世界(北京)软件科技发展有限公司 | 视频数据的生成方法及装置、存储介质、电子装置 |
CN114528433B (zh) * | 2022-01-14 | 2023-10-31 | 抖音视界有限公司 | 一种模板选择方法、装置、电子设备及存储介质 |
CN115022696B (zh) * | 2022-04-18 | 2023-12-26 | 北京有竹居网络技术有限公司 | 视频预览方法、装置、可读介质及电子设备 |
CN117082292A (zh) * | 2022-05-10 | 2023-11-17 | 北京字跳网络技术有限公司 | 视频生成方法、装置、设备、存储介质和程序产品 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1862514A (zh) * | 2005-05-13 | 2006-11-15 | 雅马哈株式会社 | 内容分发服务器、内容分发方法和内容分发程序 |
CN104349175A (zh) * | 2014-08-18 | 2015-02-11 | 周敏燕 | 一种基于手机终端的视频制作系统及方法 |
US20150139613A1 (en) * | 2013-11-21 | 2015-05-21 | Microsoft Corporation | Audio-visual project generator |
CN105530440A (zh) * | 2014-09-29 | 2016-04-27 | 北京金山安全软件有限公司 | 一种视频的制作方法及装置 |
CN106303686A (zh) * | 2016-07-29 | 2017-01-04 | 乐视控股(北京)有限公司 | 视频生成方法、视频生成装置和终端设备 |
CN107357771A (zh) * | 2017-06-23 | 2017-11-17 | 厦门星罗网络科技有限公司 | 一种有声电子相册生成及打印装置和方法 |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090273712A1 (en) * | 2008-05-01 | 2009-11-05 | Elliott Landy | System and method for real-time synchronization of a video resource and different audio resources |
CN101640057A (zh) * | 2009-05-31 | 2010-02-03 | 北京中星微电子有限公司 | 一种音视频匹配方法及装置 |
US20130272679A1 (en) * | 2012-04-12 | 2013-10-17 | Mario Luis Gomes Cavalcanti | Video Generator System |
EP3161829B1 (en) * | 2014-06-30 | 2019-12-04 | Mario Amura | Audio/video editing device, movie production method starting from still images and audio tracks and associated computer program |
CN104540028B (zh) * | 2014-12-24 | 2018-04-20 | 上海影卓信息科技有限公司 | 一种基于移动平台的视频美化交互体验系统 |
US10158825B2 (en) * | 2015-09-02 | 2018-12-18 | International Business Machines Corporation | Adapting a playback of a recording to optimize comprehension |
JP5903187B1 (ja) * | 2015-09-25 | 2016-04-13 | 株式会社グロリアス | 映像コンテンツ自動生成システム |
CN105262959A (zh) * | 2015-10-16 | 2016-01-20 | 北京易视通科技有限公司 | 一种基于“互联网+”模式的微视频生成的系统和方法 |
CN105550251A (zh) * | 2015-12-08 | 2016-05-04 | 小米科技有限责任公司 | 图片播放方法和装置 |
JP6478162B2 (ja) * | 2016-02-29 | 2019-03-06 | 株式会社Hearr | 携帯端末装置およびコンテンツ配信システム |
CN107743268A (zh) * | 2017-09-26 | 2018-02-27 | 维沃移动通信有限公司 | 一种视频的编辑方法及移动终端 |
JP7051406B2 (ja) * | 2017-12-06 | 2022-04-11 | オリンパス株式会社 | 動画編集のための携帯端末、キャプチャ機器、情報処理システム、情報処理方法、情報生成方法及び情報処理プログラム |
CN110033502B (zh) * | 2018-01-10 | 2020-11-13 | Oppo广东移动通信有限公司 | 视频制作方法、装置、存储介质及电子设备 |
CN108419035A (zh) * | 2018-02-28 | 2018-08-17 | 北京小米移动软件有限公司 | 图片视频的合成方法及装置 |
CN108882015B (zh) * | 2018-06-27 | 2021-07-23 | Oppo广东移动通信有限公司 | 回忆视频的播放速度调整方法、装置、电子设备及存储介质 |
CN108965599A (zh) * | 2018-07-23 | 2018-12-07 | Oppo广东移动通信有限公司 | 回忆视频处理方法及相关产品 |
CN108924441A (zh) * | 2018-08-07 | 2018-11-30 | 上海奇邑文化传播有限公司 | 视频智能制作终端、系统及方法 |
CN109769141B (zh) * | 2019-01-31 | 2020-07-14 | 北京字节跳动网络技术有限公司 | 一种视频生成方法、装置、电子设备及存储介质 |
CN110276057A (zh) * | 2019-05-31 | 2019-09-24 | 上海萌鱼网络科技有限公司 | 一种用于短视频制作的用户设计图生成方法和装置 |
CN110288678B (zh) * | 2019-06-27 | 2024-02-09 | 北京金山安全软件有限公司 | 动态照片的生成方法、装置、计算机设备和存储介质 |
-
2019
- 2019-10-14 CN CN201910975347.0A patent/CN112738623B/zh active Active
-
2020
- 2020-09-08 JP JP2022522402A patent/JP7387891B2/ja active Active
- 2020-09-08 WO PCT/CN2020/113987 patent/WO2021073315A1/zh active Application Filing
-
2022
- 2022-04-12 US US17/719,237 patent/US11621022B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1862514A (zh) * | 2005-05-13 | 2006-11-15 | 雅马哈株式会社 | 内容分发服务器、内容分发方法和内容分发程序 |
US20150139613A1 (en) * | 2013-11-21 | 2015-05-21 | Microsoft Corporation | Audio-visual project generator |
CN104349175A (zh) * | 2014-08-18 | 2015-02-11 | 周敏燕 | 一种基于手机终端的视频制作系统及方法 |
CN105530440A (zh) * | 2014-09-29 | 2016-04-27 | 北京金山安全软件有限公司 | 一种视频的制作方法及装置 |
CN106303686A (zh) * | 2016-07-29 | 2017-01-04 | 乐视控股(北京)有限公司 | 视频生成方法、视频生成装置和终端设备 |
CN107357771A (zh) * | 2017-06-23 | 2017-11-17 | 厦门星罗网络科技有限公司 | 一种有声电子相册生成及打印装置和方法 |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022245747A1 (en) * | 2021-05-19 | 2022-11-24 | Snap Inc. | Shortcuts from scan operation within messaging system |
US11831592B2 (en) | 2021-05-19 | 2023-11-28 | Snap Inc. | Combining individual functions into shortcuts within a messaging system |
CN113590247A (zh) * | 2021-07-21 | 2021-11-02 | 阿里巴巴达摩院(杭州)科技有限公司 | 文本创作方法及计算机程序产品 |
CN113590247B (zh) * | 2021-07-21 | 2024-04-05 | 杭州阿里云飞天信息技术有限公司 | 文本创作方法及计算机程序产品 |
CN113891113B (zh) * | 2021-09-29 | 2024-03-12 | 阿里巴巴(中国)有限公司 | 视频剪辑合成方法及电子设备 |
CN113891113A (zh) * | 2021-09-29 | 2022-01-04 | 阿里巴巴(中国)有限公司 | 视频剪辑合成方法及电子设备 |
CN114286181A (zh) * | 2021-10-25 | 2022-04-05 | 腾讯科技(深圳)有限公司 | 一种视频优化方法、装置、电子设备和存储介质 |
CN114286181B (zh) * | 2021-10-25 | 2023-08-15 | 腾讯科技(深圳)有限公司 | 一种视频优化方法、装置、电子设备和存储介质 |
WO2023104078A1 (zh) * | 2021-12-09 | 2023-06-15 | 北京字跳网络技术有限公司 | 一种视频编辑模板的生成方法、装置、设备及存储介质 |
CN114286164A (zh) * | 2021-12-28 | 2022-04-05 | 北京思明启创科技有限公司 | 一种视频合成的方法、装置、电子设备及存储介质 |
CN114286164B (zh) * | 2021-12-28 | 2024-02-09 | 北京思明启创科技有限公司 | 一种视频合成的方法、装置、电子设备及存储介质 |
CN114666637B (zh) * | 2022-03-10 | 2024-02-02 | 阿里巴巴(中国)有限公司 | 视频剪辑方法、音频剪辑方法及电子设备 |
CN114666637A (zh) * | 2022-03-10 | 2022-06-24 | 阿里巴巴(中国)有限公司 | 视频剪辑方法、音频剪辑方法及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
JP7387891B2 (ja) | 2023-11-28 |
CN112738623B (zh) | 2022-11-01 |
CN112738623A (zh) | 2021-04-30 |
US20220238139A1 (en) | 2022-07-28 |
JP2022552344A (ja) | 2022-12-15 |
US11621022B2 (en) | 2023-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021073315A1 (zh) | 视频文件的生成方法、装置、终端及存储介质 | |
WO2022048478A1 (zh) | 多媒体数据的处理方法、生成方法及相关设备 | |
US11670339B2 (en) | Video acquisition method and device, terminal and medium | |
WO2020077855A1 (zh) | 视频拍摄方法、装置、电子设备及计算机可读存储介质 | |
WO2021093737A1 (zh) | 生成视频的方法、装置、电子设备和计算机可读介质 | |
WO2021088830A1 (zh) | 用于展示音乐点的方法、装置、电子设备和介质 | |
JP2021530147A (ja) | 背景音楽を選択して動画を撮影する方法、装置、端末機及び媒体 | |
WO2022253141A1 (zh) | 视频分享方法、装置、设备及介质 | |
JP2024502664A (ja) | ビデオ生成方法、装置、電子機器および記憶媒体 | |
WO2023051293A1 (zh) | 一种音频处理方法、装置、电子设备和存储介质 | |
WO2022007722A1 (zh) | 显示方法、装置、设备及存储介质 | |
WO2022042035A1 (zh) | 视频制作方法、装置、设备及存储介质 | |
US9128751B2 (en) | Schema-based link processing | |
WO2020220773A1 (zh) | 图片预览信息的显示方法、装置、电子设备及计算机可读存储介质 | |
JP2023523067A (ja) | ビデオ処理方法、装置、機器及び媒体 | |
WO2022193867A1 (zh) | 一种视频处理方法、装置、电子设备及存储介质 | |
WO2023005831A1 (zh) | 一种资源播放方法、装置、电子设备和存储介质 | |
WO2022194031A1 (zh) | 视频的处理方法、装置、电子设备和存储介质 | |
WO2023169356A1 (zh) | 图像处理方法、装置、设备及存储介质 | |
WO2024078516A1 (zh) | 媒体内容展示方法、装置、设备及存储介质 | |
WO2023237102A1 (zh) | 一种连麦展示方法、装置、电子设备、计算机可读介质 | |
WO2024032635A1 (zh) | 媒体内容获取方法、装置、设备、可读存储介质及产品 | |
WO2023207543A1 (zh) | 媒体内容的发布方法、装置、设备、存储介质和程序产品 | |
WO2023072280A1 (zh) | 媒体内容发送方法、装置、设备、可读存储介质及产品 | |
WO2020133376A1 (zh) | 多媒体信息的处理方法与装置、电子设备及计算机可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20877414 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022522402 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04-08-2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20877414 Country of ref document: EP Kind code of ref document: A1 |