WO2022161200A1 - Video synthesis method and apparatus, electronic device and storage medium - Google Patents

Video synthesis method and apparatus, electronic device and storage medium

Info

Publication number
WO2022161200A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
draft
processed
computer
server
Prior art date
Application number
PCT/CN2022/072326
Other languages
English (en)
French (fr)
Inventor
周峰
初楷博
曹俊跃
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Priority to US18/263,518 priority Critical patent/US20240087608A1/en
Publication of WO2022161200A1 publication Critical patent/WO2022161200A1/zh

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 - Server based end-user applications
    • H04N21/274 - Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743 - Video hosting of uploaded data from client
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 - Assembly of content; Generation of multimedia applications
    • H04N21/854 - Content authoring
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 - Assembly of content; Generation of multimedia applications
    • H04N21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 - Assembly of content; Generation of multimedia applications
    • H04N21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot, by using a URL
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing

Definitions

  • The embodiments of the present disclosure relate to the technical field of video processing, and in particular, to a video synthesis method, apparatus, electronic device, and storage medium.
  • Video editing software is usually used to apply various edits to a video, such as adding audio, pictures, and special effects, and the video is synthesized before it is uploaded, so that the editing effects can be reproduced during playback.
  • Embodiments of the present disclosure provide a video synthesis method, an apparatus, an electronic device, a storage medium, a computer program product, and a computer program, so as to overcome the problem that the web side cannot implement video synthesis.
  • An embodiment of the present disclosure provides a video synthesis method, which is applied to a World Wide Web (web) front end.
  • The method includes: receiving a user's operation on a video to be processed, and recording the operation information as a draft; and sending the draft to a server, where the draft is used for video synthesis after the video to be processed is processed.
  • An embodiment of the present disclosure provides a video synthesis method, which is applied to a server.
  • The method includes: receiving a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and processing the video to be processed according to the draft, and performing video synthesis to obtain a video file.
  • An embodiment of the present disclosure provides a video synthesis apparatus, which is applied to a web front end. The apparatus includes: a processing module, configured to receive a user's operation on a video to be processed and record the operation information as a draft; and a sending module, configured to send the draft to a server, where the draft is used for video synthesis after the server processes the video to be processed.
  • An embodiment of the present disclosure provides a video synthesis apparatus, which is applied to a server. The apparatus includes: a receiving module, configured to receive a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and a synthesis module, configured to process the video to be processed according to the draft and perform video synthesis to obtain a video file.
  • An embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the first aspect and various possible implementations of the first aspect.
  • An embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the second aspect and various possible implementations of the second aspect.
  • Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the information display method described in the first aspect and various possible implementations of the first aspect.
  • Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the information display method described in the second aspect and various possible implementations of the second aspect.
  • An embodiment of the present disclosure provides a computer program product, including: a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device performs the information display method described in the first aspect and various possible implementations of the first aspect, or the second aspect and various possible implementations of the second aspect.
  • An embodiment of the present disclosure provides a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device performs the information display method described in the first aspect and various possible implementations of the first aspect, or the second aspect and various possible implementations of the second aspect.
  • In the method, the web front end receives a user's operation on a video to be processed and records the operation information as a draft; the draft is sent to the server, where the draft is used for video synthesis after the video to be processed is processed, thereby achieving the purpose of synthesizing video through the web side.
  • FIG. 1 is a system architecture diagram provided by an embodiment of the present disclosure
  • FIG. 2 is a first schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure
  • FIG. 3 is a second schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a hierarchical structure of a web terminal according to an embodiment of the present disclosure
  • FIG. 5 is a structural block diagram of a video synthesis apparatus applied to a web front end provided by an embodiment of the present disclosure
  • FIG. 6 is a structural block diagram of a video synthesis apparatus applied to a server according to an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • Video editing software is usually used to apply various edits to a video, such as adding audio, pictures, and special effects, and the video is synthesized before it is uploaded, so that the editing effects can be reproduced during playback. However, existing video synthesis can only be implemented through an application program, and the web (browser) side cannot implement video synthesis.
  • In view of this, the technical idea of the present disclosure is to store the user's operation information on the video as a draft on the web front end and send the draft to the backend server corresponding to the web front end, so that the backend server restores the video according to the draft and performs video synthesis.
  • FIG. 1 is a system architecture diagram provided by an embodiment of the present disclosure.
  • the system architecture diagram provided by the example of the present disclosure includes a web front end 1 and a server end 2, and the server end 2 may be a Linux system.
  • the web front end 1 and the server end 2 cooperate to implement the video synthesis method of the following embodiments.
  • FIG. 2 is a first schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure. As shown in Figure 2, the video synthesis method includes:
  • S101: The web front end receives the user's operation on the video to be processed, and records the operation information as a draft.
  • Specifically, on the web front end, the user can perform various operations on the video to be processed, such as adding and editing, for example adding, deleting, and modifying audio and video (cropping and moving positions); adding, deleting, and modifying stickers; and adding, deleting, and modifying text, special effects, and the like, to obtain the operation information.
  • Optionally, the operation information includes video acquisition information, video editing information, and the like. Specifically, the video acquisition information includes the link address of the video to be processed, and the video editing information includes parameters such as additions, deletions, and modifications; the various video processing parameters generated during processing are recorded in the draft.
  • Optionally, the draft is in JavaScript Object Notation (JSON) string format (an illustrative sketch of such a draft is given below).
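  • The patent describes the draft only as a JSON string that carries the video acquisition information and the video editing information; it does not publish a concrete schema. The TypeScript sketch below is a minimal illustration of what such a draft could look like, assuming hypothetical interface names, field names, and operation types that are not part of the original disclosure.

```typescript
// Hypothetical draft structure: every name below is an assumption for
// illustration only; the patent does not define the actual JSON schema.
interface DraftClip {
  sourceUrl: string;     // link address of the video to be processed
  trimStartMs?: number;  // cropping (trim-in) parameter
  trimEndMs?: number;    // cropping (trim-out) parameter
  order?: number;        // position after sorting / moving
  rotationDeg?: number;  // rotation applied on the web front end
}

interface DraftOperation {
  type: "add" | "delete" | "modify";                         // add / delete / modify
  target: "audio" | "video" | "sticker" | "text" | "effect";
  params: Record<string, unknown>;                           // operation-specific parameters
}

interface Draft {
  acquisition: DraftClip[];   // video acquisition information
  edits: DraftOperation[];    // video editing information
}

// The draft is ultimately sent to the server as a JSON string.
function serializeDraft(draft: Draft): string {
  return JSON.stringify(draft);
}
```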
  • In an embodiment of the present disclosure, the method further includes: displaying the operation effect of the operation on the video to be processed.
  • Specifically, a small video playback window may be included on the web side. When the user adds a video to be processed, the small playback window can display the corresponding video to be processed; when the user performs various editing operations on it, the small playback window can display the corresponding processing effect, so that the user can preview the edited video effect in advance (a minimal preview sketch follows below).
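  • As one way to picture the preview behaviour, the sketch below wires a small HTML video element to the current editing state. The "preview" element id and the PreviewEdit shape are assumptions; the patent does not describe how the playback window is implemented.

```typescript
// Illustrative preview update: reflect a trim and a rotation in a small
// playback window so the user can see the editing effect in advance.
interface PreviewEdit {
  sourceUrl: string;
  trimStartMs: number;
  rotationDeg: number;
}

function showPreview(edit: PreviewEdit): void {
  const player = document.getElementById("preview") as HTMLVideoElement | null;
  if (!player) return;
  player.src = edit.sourceUrl;
  player.currentTime = edit.trimStartMs / 1000;              // jump to the trim-in point
  player.style.transform = `rotate(${edit.rotationDeg}deg)`; // visualize the rotation
  void player.play();
}
```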
  • S102: The web front end sends the draft to the server.
  • The draft is used for video synthesis after the video to be processed is processed.
  • Specifically, after the web front end obtains the draft, it sends the draft to the backend server corresponding to the web front end through the network.
  • Correspondingly, on the server side, the draft sent by the World Wide Web (web) front end is received; the draft records the user's operation information on the video to be processed.
  • S103: Process the video to be processed according to the draft, and perform video synthesis to obtain a video file.
  • Specifically, after the server obtains the draft, it processes the video to be processed according to the draft and performs video synthesis on the processed video to obtain a video file.
  • Optionally, the server generates a link address of the video file and sends the link address to the web front end; correspondingly, the web front end receives the link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; the web front end then receives the user's download request for the link address and downloads the video file according to the download request.
  • Specifically, after the server obtains the synthesized video file, it generates the corresponding link address and sends it to the web front end. After receiving the link address, the web front end displays it; when a user clicks the link address to download, the corresponding synthesized video is obtained (a sketch of this round trip is given below).
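  • A minimal sketch of this round trip, seen from the web front end, is given below. The /synthesize endpoint, the downloadUrl response field, and the file name are assumptions; the patent does not specify the transport details.

```typescript
// Illustrative client-side round trip: post the JSON draft to the backend,
// receive the link address of the synthesized video, then trigger a download.
async function submitDraftAndGetLink(draftJson: string): Promise<string> {
  const response = await fetch("/synthesize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: draftJson,
  });
  if (!response.ok) {
    throw new Error(`draft upload failed: ${response.status}`);
  }
  const { downloadUrl } = (await response.json()) as { downloadUrl: string };
  return downloadUrl; // link address displayed on the web front end
}

// When the user clicks the displayed link, the synthesized file is downloaded.
function downloadVideo(downloadUrl: string): void {
  const anchor = document.createElement("a");
  anchor.href = downloadUrl;
  anchor.download = "synthesized.mp4";
  anchor.click();
}
```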
  • An embodiment of the present disclosure provides a video synthesis method in which the web front end receives a user's operation on a video to be processed and records the operation information as a draft; the draft is sent to the server, where the draft is used for video synthesis after the video to be processed is processed, realizing the function of video synthesis through the web side.
  • On the basis of the embodiment shown in FIG. 2, FIG. 3 is a second schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure.
  • The operation information includes video acquisition information and video editing information; as shown in FIG. 3, the video synthesis method includes:
  • S201: The web front end deploys a first execution environment for forming a draft.
  • The first execution environment includes a web front-end video processing component and a web front-end draft framework; the web front-end video processing component is used to construct an operating environment for processing the video to be processed, and the web front-end draft framework is used to construct an operating environment for forming the draft.
  • S202: The web front end acquires the video to be processed and performs editing processing on it.
  • S203: The web front end records the video acquisition information and the video editing information as a draft.
  • S204: The web front end sends the draft to the server.
  • S205: The server deploys a second execution environment for executing the draft.
  • The second execution environment includes a server-side video processing component and a server-side draft framework; the server-side video processing component is used to construct an operating environment for restoring and processing the video to be processed, and the server-side draft framework is used to construct the operating environment to which the draft applies. It should be noted that the code environments of the web front end and the server are different, so the code environment needs to be redeployed on the server.
  • S206: The server acquires the corresponding video to be processed according to the video acquisition information.
  • S207: The server performs editing processing on the video to be processed according to the video editing information.
  • S208: The server performs video synthesis to obtain a video file.
  • In this embodiment, step S204 is consistent with step S102 in the above embodiment; for a detailed discussion, please refer to step S102, which is not repeated here.
  • The difference from the above embodiment is that this embodiment further defines the specific implementation of steps S101 and S103.
  • In this implementation, the web front end deploys a first execution environment for forming a draft; acquires the video to be processed and performs editing processing on it; records the video acquisition information and the video editing information as a draft; and sends the draft to the server. The server deploys a second execution environment for executing the draft, acquires the corresponding video to be processed according to the video acquisition information, performs editing processing on the video to be processed according to the video editing information, and performs video synthesis to obtain a video file.
  • Specifically, the operating environment of the web front end is first initialized, including loading the web front-end video processing component so that the video to be processed can be added, edited, and otherwise processed, and building the draft framework so that the video acquisition information and video editing information parameters can be filled into it to form the draft. The draft is then transmitted to the server over the network. The operating environment of the server, including the server-side video processing component and the server-side draft framework, is initialized; the video to be processed is then downloaded according to the video acquisition information in the received draft, edited according to the video editing information in the draft, and video synthesis is performed to form a video file.
  • Optionally, the video acquisition information is a link address of the video to be processed in JSON string format, and the server downloads the corresponding video to be processed according to the link address (a server-side sketch of this restore step is given below).
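  • The server-side restore step can be pictured with the Node-style sketch below (Node 18+ for the global fetch): it parses the JSON draft, downloads each source video by its link address, replays the recorded edits, and hands the result to a synthesis step. The simplified types mirror the earlier draft sketch, and applyEdit/synthesize are stand-ins for the server-side video processing component, which the patent does not detail.

```typescript
// Illustrative server-side restore: parse the draft, fetch each source by its
// link address, replay the recorded edits, then synthesize. applyEdit() and
// synthesize() are stand-ins for the server-side video processing component.
import { writeFile } from "node:fs/promises";

type Edit = { type: string; target: string; params: Record<string, unknown> };
type Clip = { sourceUrl: string };
type Draft = { acquisition: Clip[]; edits: Edit[] }; // simplified draft shape

async function restoreAndSynthesize(draftJson: string): Promise<string> {
  const draft = JSON.parse(draftJson) as Draft;

  // Download each video to be processed according to its link address.
  const localPaths: string[] = [];
  for (const [index, clip] of draft.acquisition.entries()) {
    const data = await fetch(clip.sourceUrl).then((r) => r.arrayBuffer());
    const path = `/tmp/clip-${index}.mp4`;
    await writeFile(path, Buffer.from(data));
    localPaths.push(path);
  }

  // Replay the recorded editing operations (crop, sort, rotate, ...).
  for (const edit of draft.edits) {
    applyEdit(localPaths, edit);
  }

  // Perform video synthesis and return the link address of the result.
  return synthesize(localPaths);
}

// Stand-in implementations: a real system would call into its editing engine.
function applyEdit(paths: string[], edit: Edit): void {
  console.log(`apply ${edit.type} on ${edit.target} to ${paths.length} clip(s)`);
}

async function synthesize(paths: string[]): Promise<string> {
  console.log(`synthesizing ${paths.length} clip(s)`);
  return "https://example.com/output/video.mp4"; // hypothetical link address
}
```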
  • For ease of understanding, the embodiments of the present disclosure are further described here.
  • First, on the web front end, the user's operation instruction is received, and the web front-end video processing component and draft framework are initialized. The user's add-video instruction is then received, the corresponding video to be processed is added on the web front end, and the address of the video to be processed is recorded into the draft framework. The user's video editing operation instructions are then received, operations such as cropping, sorting, and rotating are performed on the video to be processed, and the video editing information is recorded into the draft framework to form a draft in JSON string format.
  • The JSON-string draft is then transmitted to the backend server over the network. The backend server first initializes the server-side video processing component and draft framework; it then downloads the corresponding video to be processed according to the video acquisition information in the draft and restores the editing processing of the video, for example cropping, sorting, and rotating, according to the video editing information. Video synthesis is then performed to form a video file and a corresponding link address, and the link address of the video file is returned to the web front end, where the user's download request for the link address can be accepted and the video downloaded.
  • FIG. 4 is a schematic diagram of a web-side layered structure provided by an embodiment of the present disclosure.
  • As shown in FIG. 4, the web-side layered structure includes, from bottom to top: an algorithm processing (ALG) layer, a voice and video processing engine (VESDK) layer, a business logic (Business) layer, a platform (Platform) layer, and an application (App) layer.
  • The ALG layer is the bottom-level algorithm processing; for example, FFMPEG is used for processing each frame of video, and EffectSDK is used for adding special effects to each frame of video.
  • The VESDK layer includes VE API and VE Lib. VE Lib includes Editor, Recorder, and Utils, where Editor is used for video editing, such as cropping, sorting, and rotating; Recorder is used for recording, shooting, and the like; and Utils is used for common tools.
  • VE API includes VE public API, which abstracts the data of VE Lib and provides external interface calls. The Business API includes Edit (C++) and Resource (C++), which are used to initialize the web-side operating environment and provide external interface calls.
  • The Platform layer is the platform API layer; for example, the example of the present disclosure is applied to the web side and uses JS for processing.
  • The App layer is the application layer, which interacts with users.
  • It should be noted that, compared with the prior art in which the web side cannot export a draft, the Business API layer in the web-side layered structure provided by the embodiments of the present disclosure includes Resource (C++), which can implement draft-related operations and provides the function of draft export.
  • The layered structure of the server is similar to that in FIG. 4 and is not repeated here.
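  • To make the role of the Business API layer concrete, the TypeScript sketch below models the Edit and Resource facades as interfaces that the App layer could drive; the method names are illustrative assumptions, since the patent only names the components (Edit (C++), Resource (C++)) and their responsibilities (initializing the environment, draft-related operations, draft export).

```typescript
// Illustrative model of the Business API surface described above. All
// interface and method names are assumptions; the patent only identifies
// Edit (C++) and Resource (C++) and their responsibilities.
interface EditApi {
  init(): void;                                // triggers Editor initialization in VE Lib
  crop(clipId: string, startMs: number, endMs: number): void;
  sort(order: string[]): void;
  rotate(clipId: string, degrees: number): void;
}

interface ResourceApi {
  init(): void;                                // builds the draft framework
  recordClip(sourceUrl: string): string;       // records an added video, returns a clip id
  recordEdit(op: { type: string; params: unknown }): void;
  exportDraft(): string;                       // exports the draft as a JSON string
}

// The App layer drives editing through these facades and finally exports the
// draft for upload to the server.
function buildDraft(edit: EditApi, resource: ResourceApi, videoUrl: string): string {
  edit.init();
  resource.init();
  const clipId = resource.recordClip(videoUrl);
  edit.crop(clipId, 0, 5000);
  resource.recordEdit({ type: "crop", params: { clipId, startMs: 0, endMs: 5000 } });
  return resource.exportDraft(); // JSON string sent to the backend
}
```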
  • Embodiments of the present disclosure provide a video synthesis method in which the web front end deploys a first execution environment for forming a draft; acquires the video to be processed and performs editing processing on it; records the video acquisition information and the video editing information as a draft; and sends the draft to the server. The server deploys a second execution environment for executing the draft, acquires the corresponding video to be processed according to the video acquisition information, performs editing processing on the video to be processed according to the video editing information, and performs video synthesis to obtain a video file, realizing the function of web-side video synthesis.
  • An embodiment of the present disclosure provides a video synthesis method, which is applied to a World Wide Web (web) front end.
  • The method includes: receiving a user's operation on a video to be processed, and recording the operation information as a draft; and sending the draft to a server, where the draft is used for video synthesis after the video to be processed is processed.
  • According to one or more embodiments, receiving the user's operation on the video to be processed and recording the operation information as a draft includes: acquiring the video to be processed and performing editing processing on it; and recording the video acquisition information and the video editing information as a draft.
  • According to one or more embodiments, before the recording of the operation information as a draft, the method further includes: deploying a first execution environment for forming the draft.
  • According to one or more embodiments, the method further includes: receiving a link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; and receiving the user's download request for the link address, and downloading the video file according to the download request.
  • According to one or more embodiments, the method further includes: displaying the operation effect of the operation on the video to be processed.
  • According to one or more embodiments, the draft is in JavaScript Object Notation (JSON) string format.
  • The implementation principle and technical effect of the video synthesis method provided in this embodiment are similar to those of the foregoing embodiments and are not repeated here.
  • An embodiment of the present disclosure also provides a video synthesis method, applied to a server. The method includes: receiving a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and processing the video to be processed according to the draft, and performing video synthesis to obtain a video file.
  • According to one or more embodiments, the operation information includes video acquisition information and video editing information.
  • Processing the video to be processed according to the draft includes: acquiring the corresponding video to be processed according to the video acquisition information; and performing editing processing on the video to be processed according to the video editing information.
  • According to one or more embodiments, before processing the video to be processed according to the draft, the method further includes: deploying a second execution environment for executing the draft.
  • According to one or more embodiments, the method further includes: generating a link address of the video file and sending the link address to the web front end.
  • The implementation principle and technical effect of the video synthesis method provided in this embodiment are similar to those of the foregoing embodiments and are not repeated here.
  • FIG. 5 is a structural block diagram of a video synthesis apparatus applied to a web front end according to an embodiment of the present disclosure. For convenience of explanation, only the parts related to the embodiments of the present disclosure are shown.
  • the video synthesis apparatus includes: a processing module 10 and a sending module 20 .
  • The processing module 10 is configured to receive a user's operation on a video to be processed and record the operation information as a draft; the sending module 20 is configured to send the draft to the server, where the draft is used for video synthesis after the server processes the video to be processed.
  • the processing module 10 is specifically configured to: acquire the video to be processed, and perform editing processing on the video to be processed; and record the video acquisition information and the video editing information as a draft.
  • the processing module 10 is further configured to: deploy a first execution environment for forming a draft.
  • The processing module 10 is further configured to: receive a link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; and receive the user's download request for the link address, and download the video file according to the download request.
  • the processing module 10 is further configured to: display the operation effect of the operation on the video to be processed.
  • the video synthesizing apparatus provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principle and technical effect are similar, and details are not described herein again in this embodiment.
  • FIG. 6 is a structural block diagram of a video synthesis apparatus applied to a server according to an embodiment of the present disclosure. For convenience of explanation, only the parts related to the embodiments of the present disclosure are shown.
  • the video synthesis apparatus includes: a receiving module 30 and a synthesis module 40 .
  • The receiving module 30 is configured to receive a draft sent by the World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed.
  • The synthesis module 40 is configured to process the video to be processed according to the draft and perform video synthesis to obtain a video file.
  • the operation information includes video acquisition information and video editing information
  • The synthesis module 40 is specifically configured to: acquire the corresponding video to be processed according to the video acquisition information, and perform editing processing on the video to be processed according to the video editing information.
  • the synthesis module 40 is further configured to: deploy a second execution environment for executing the draft.
  • the synthesis module 40 is further configured to: generate a link address of the video file, and send the link address to the web front end.
  • the video synthesizing apparatus provided in this embodiment can be used to implement the technical solutions of the foregoing method embodiments, and its implementation principle and technical effect are similar, and details are not described herein again in this embodiment.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The electronic device 700 may be arranged on a web front end and includes: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video synthesis method described in the first aspect and various possible designs of the first aspect.
  • An example of the present disclosure further provides an electronic device, which is arranged on the server side and includes: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the video synthesis method described in the second aspect and various possible designs of the second aspect.
  • The electronic device 700 may be a terminal device or a server.
  • The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (such as in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 700 may include a processing device (such as a central processing unit or a graphics processor) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703.
  • In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored.
  • the processing device 701 , the ROM 702 , and the RAM 703 are connected to each other through a bus 704 .
  • An Input/Output (I/O for short) interface 705 is also connected to the bus 704 .
  • Generally, the following devices can be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), speaker, vibrator, and the like; a storage device 708 including, for example, a magnetic tape, hard disk, and the like; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 7 shows an electronic device 700 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • An embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium and arranged on the web front end and the server side; the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 709, or from the storage device 708, or from the ROM 702.
  • When the computer program is executed by the processing device 701, the above functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the foregoing two, and may be provided on the web side and the server side.
  • the computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • the program code contained on the computer readable medium can be transmitted by any suitable medium, including but not limited to: electric wire, optical cable, radio frequency (RF for short), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, causes the electronic device to execute the methods shown in the above-mentioned embodiments.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Embodiments of the present disclosure also provide a computer program, where the computer program is stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the method provided by any of the foregoing embodiments.
  • Each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
  • Exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the present disclosure provide a video synthesis method and apparatus, an electronic device, and a storage medium. In the method, a web front end receives a user's operation on a video to be processed and records the operation information as a draft; the draft is sent to a server, where the draft is used for video synthesis after the video to be processed is processed, thereby achieving the purpose of synthesizing video through the web side.

Description

Video synthesis method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202110130072.8, entitled "Video synthesis method, apparatus, electronic device and storage medium", filed with the Chinese Patent Office on January 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the technical field of video processing, and in particular, to a video synthesis method and apparatus, an electronic device, and a storage medium.
Background
To improve the user's experience of watching videos, video editing software is usually used to apply various edits to a video, such as adding audio, pictures, and special effects, and the video is synthesized before it is uploaded, so that the editing effects can be reproduced during playback.
However, existing video synthesis can only be implemented through an application program, and the web (browser) side cannot implement video synthesis.
Summary
Embodiments of the present disclosure provide a video synthesis method and apparatus, an electronic device, a storage medium, a computer program product, and a computer program, so as to overcome the problem that the web side cannot implement video synthesis.
In a first aspect, an embodiment of the present disclosure provides a video synthesis method applied to a World Wide Web (web) front end. The method includes: receiving a user's operation on a video to be processed, and recording the operation information as a draft; and sending the draft to a server, where the draft is used for video synthesis after the video to be processed is processed.
In a second aspect, an embodiment of the present disclosure provides a video synthesis method applied to a server. The method includes: receiving a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and processing the video to be processed according to the draft, and performing video synthesis to obtain a video file.
In a third aspect, an embodiment of the present disclosure provides a video synthesis apparatus applied to a web front end. The apparatus includes: a processing module, configured to receive a user's operation on a video to be processed and record the operation information as a draft; and a sending module, configured to send the draft to a server, where the draft is used for video synthesis after the server processes the video to be processed.
In a fourth aspect, an embodiment of the present disclosure provides a video synthesis apparatus applied to a server. The apparatus includes: a receiving module, configured to receive a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and a synthesis module, configured to process the video to be processed according to the draft and perform video synthesis to obtain a video file.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the first aspect and various possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the second aspect and various possible implementations of the second aspect.
In a seventh aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the information display method described in the first aspect and various possible implementations of the first aspect.
In an eighth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the information display method described in the second aspect and various possible implementations of the second aspect.
In a ninth aspect, an embodiment of the present disclosure provides a computer program product, including: a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device performs the information display method described in the first aspect and various possible implementations of the first aspect, or the second aspect and various possible implementations of the second aspect.
In a tenth aspect, an embodiment of the present disclosure provides a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device performs the information display method described in the first aspect and various possible implementations of the first aspect, or the second aspect and various possible implementations of the second aspect.
In the video synthesis method and apparatus, electronic device, and storage medium provided by the embodiments, the web front end receives a user's operation on a video to be processed and records the operation information as a draft; the draft is sent to the server, where the draft is used for video synthesis after the video to be processed is processed, thereby achieving the purpose of synthesizing video through the web side.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a system architecture diagram provided by an embodiment of the present disclosure;
FIG. 2 is a first schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure;
FIG. 3 is a second schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a web-side layered structure provided by an embodiment of the present disclosure;
FIG. 5 is a structural block diagram of a video synthesis apparatus applied to a web front end provided by an embodiment of the present disclosure;
FIG. 6 is a structural block diagram of a video synthesis apparatus applied to a server provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
To improve the user's experience of watching videos, video editing software is usually used to apply various edits to a video, such as adding audio, pictures, and special effects, and the video is synthesized before it is uploaded, so that the editing effects can be reproduced during playback. However, existing video synthesis can only be implemented through an application program, and the web (browser) side cannot implement video synthesis.
In view of the above problem, the technical idea of the present disclosure is to store the user's operation information on the video as a draft on the web front end and send the draft to the backend server corresponding to the web front end, so that the backend server restores the video according to the draft and performs video synthesis.
Referring to FIG. 1, FIG. 1 is a system architecture diagram provided by an embodiment of the present disclosure. As shown in FIG. 1, the system architecture provided by the example of the present disclosure includes a web front end 1 and a server 2, and the server 2 may be a Linux system. The web front end 1 and the server 2 cooperate to implement the video synthesis method of the following embodiments.
Referring to FIG. 2, FIG. 2 is a first schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure. As shown in FIG. 2, the video synthesis method includes:
S101: The web front end receives a user's operation on a video to be processed, and records the operation information as a draft.
Specifically, on the web front end, the user can perform various operations on the video to be processed, such as adding and editing, for example adding, deleting, and modifying audio and video (cropping and moving positions); adding, deleting, and modifying stickers; and adding, deleting, and modifying text, special effects, and the like, to obtain the operation information. Optionally, the operation information includes video acquisition information, video editing information, and the like. Specifically, the video acquisition information includes the link address of the video to be processed, and the video editing information includes parameters such as additions, deletions, and modifications; the various video processing parameters generated during processing are recorded in the draft. Optionally, the draft is in JavaScript Object Notation (JSON) string format.
In an embodiment of the present disclosure, the method further includes: displaying the operation effect of the operation on the video to be processed.
Specifically, the web side may include a small video playback window. When the user adds a video to be processed, the small playback window can display the corresponding video to be processed; when the user performs various editing operations on the video to be processed, the small playback window can display the corresponding processing effect, so that the user can preview the edited video effect in advance.
S102: The web front end sends the draft to the server.
The draft is used for video synthesis after the video to be processed is processed.
Specifically, after the web front end obtains the draft, it sends the draft to the backend server corresponding to the web front end through the network.
Correspondingly, on the server side, the draft sent by the World Wide Web (web) front end is received, where the draft records the user's operation information on the video to be processed.
S103: Process the video to be processed according to the draft, and perform video synthesis to obtain a video file.
Specifically, after the server obtains the draft, it processes the video to be processed according to the draft and performs video synthesis on the processed video to obtain a video file.
Optionally, the server generates a link address of the video file and sends the link address to the web front end. Correspondingly, the web front end receives the link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; and it receives the user's download request for the link address and downloads the video file according to the download request.
Specifically, after the server obtains the synthesized video file, it generates the corresponding link address and sends the link address to the web front end. After receiving the link address, the web front end displays it; when a user clicks the link address to download, the corresponding synthesized video is obtained.
An embodiment of the present disclosure provides a video synthesis method in which the web front end receives a user's operation on a video to be processed and records the operation information as a draft; the draft is sent to the server, where the draft is used for video synthesis after the video to be processed is processed, realizing the function of video synthesis through the web side.
On the basis of the embodiment shown in FIG. 2 above, FIG. 3 is a second schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure. The operation information includes video acquisition information and video editing information. As shown in FIG. 3, the video synthesis method includes:
S201: The web front end deploys a first execution environment for forming a draft.
Specifically, the first execution environment includes a web front-end video processing component and a web front-end draft framework; the web front-end video processing component is used to construct an operating environment for processing the video to be processed, and the web front-end draft framework is used to construct an operating environment for forming the draft.
S202: The web front end acquires the video to be processed, and performs editing processing on the video to be processed.
S203: Record the video acquisition information and the video editing information as a draft.
S204: Send the draft to the server.
S205: Deploy a second execution environment for executing the draft.
The second execution environment includes a server-side video processing component and a server-side draft framework; the server-side video processing component is used to construct an operating environment for restoring and processing the video to be processed, and the server-side draft framework is used to construct the operating environment to which the draft applies. It should be noted that the code environments of the web front end and the server are different, so the code environment needs to be redeployed on the server.
S206: The server acquires the corresponding video to be processed according to the video acquisition information.
S207: The server performs editing processing on the video to be processed according to the video editing information.
S208: The server performs video synthesis to obtain a video file.
In this embodiment, step S204 is consistent with step S102 in the above embodiment; for a detailed discussion, please refer to step S102, which is not repeated here.
The difference from the above embodiment is that this embodiment further defines the specific implementation of steps S101 and S103. In this implementation, the web front end deploys a first execution environment for forming a draft; acquires the video to be processed and performs editing processing on it; records the video acquisition information and the video editing information as a draft; and sends the draft to the server. The server deploys a second execution environment for executing the draft, acquires the corresponding video to be processed according to the video acquisition information, performs editing processing on the video to be processed according to the video editing information, and performs video synthesis to obtain a video file.
Specifically, the operating environment of the web front end first needs to be initialized, including loading the web front-end video processing component so that the video to be processed can be added, edited, and otherwise processed, and building the draft framework so that the video acquisition information and video editing information parameters can be filled into the draft framework to form the draft. The draft is then transmitted to the server over the network. The operating environment of the server, including the server-side video processing component and the server-side draft framework, is initialized; the video to be processed is then downloaded according to the video acquisition information in the received draft, edited according to the video editing information in the draft, and video synthesis is performed to form a video file.
It should be noted that, because the code environments of the web side and the server are different, the draft formed on the web front end does not carry a code environment (it is, for example, a JSON string), so the server needs to rebuild its own code environment in order to restore the draft.
Optionally, the video acquisition information is a link address of the video to be processed in JSON string format, and the server downloads the corresponding video to be processed according to the link address.
For ease of understanding, the embodiments of the present disclosure are further described here. First, on the web front end, the user's operation instruction is received, and the web front-end video processing component and draft framework are initialized; then the user's add-video instruction is received, the corresponding video to be processed is added on the web front end, and the address of the video to be processed is recorded into the draft framework; then the user's video editing operation instructions are received, operations such as cropping, sorting, and rotating are performed on the video to be processed, and the video editing information is recorded into the draft framework to form a draft in JSON string format. The JSON-string draft is then transmitted to the backend server over the network. The backend server first initializes the server-side video processing component and draft framework, then downloads the corresponding video to be processed according to the video acquisition information in the draft and restores the editing processing of the video to be processed according to the video editing information, for example cropping, sorting, and rotating. Video synthesis is then performed to form a video file and a corresponding link address, and the link address of the video file is returned to the web front end; the web front end can accept the user's download request for the link address and download the video.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a web-side layered structure provided by an embodiment of the present disclosure. As shown in FIG. 4, the web-side layered structure includes, from bottom to top: an algorithm processing (ALG) layer, a voice and video processing engine (VESDK) layer, a business logic (Business) layer, a platform (Platform) layer, and an application (App) layer.
The ALG layer is the bottom-level algorithm processing; for example, FFMPEG is used for processing each frame of video, and EffectSDK is used for adding special effects to each frame of video. The VESDK layer includes VE API and VE Lib. VE Lib includes Editor, Recorder, and Utils, where Editor is used for video editing, such as cropping, sorting, and rotating; Recorder is used for recording, shooting, and the like; and Utils is used for common tools. VE API includes VE public API, which abstracts the data of VE Lib and provides external interface calls. The Business API includes Edit (C++) and Resource (C++), which are used to initialize the web-side operating environment and provide external interface calls. The Platform API is the platform layer; for example, the example of the present disclosure is applied to the web side and uses JS for processing. The App layer is the application layer, which interacts with users.
It should be noted that, compared with the prior art in which the web side cannot export a draft, the Business API layer in the web-side layered structure provided by the embodiments of the present disclosure includes Resource (C++), which can implement draft-related operations and provides the function of draft export. Similarly, the layered structure of the server is similar to that in FIG. 4 and is not repeated here.
The embodiments of the present disclosure are now further described with reference to the web-side layered structure shown in FIG. 4.
First, Edit (C++) in the Business API layer is initialized, and Edit (C++) controls the initialization of Editor in the VE Lib layer, so as to initialize the web-side video processing component; at the same time, Resource (C++) in the Business API layer is initialized to build the draft framework. After initialization, the user adds a video to be processed through the App layer, and the lower layers are controlled so that the added video is recorded in the draft of Resource (C++) in the Business API layer; Editor in the underlying VE Lib layer can also be controlled to update so as to preview and display the video to be processed. When editing such as cropping is performed on the video to be processed, the user initiates an editing instruction such as cropping through the App layer, and Edit (C++) in the Business API layer controls Editor in the VE Lib layer to crop or otherwise edit the video. A JSON-string draft is then exported from Resource (C++) in the Business API layer and sent to the server. Because the JSON string does not carry an operating environment, the server also first initializes Edit (C++) in the Business API layer, and Edit (C++) controls the initialization of Editor in the VE Lib layer, so as to load the server-side video processing component; at the same time, Resource (C++) in the Business API layer is initialized to build the server-side draft framework. Then Resource (C++) in the Business API layer receives the JSON string passed from the upper layer, parses it, and controls Edit (C++) in the Business API layer; Edit (C++) controls Editor in the VE Lib layer to download the video and perform restoration processing. After the restoration processing is completed, Resource (C++) in the Business API layer issues a video synthesis instruction to notify Editor in the VE Lib layer to perform video synthesis to form a video file, and at the same time the link address of the video file is returned to the web front end.
Embodiments of the present disclosure provide a video synthesis method in which the web front end deploys a first execution environment for forming a draft; acquires the video to be processed and performs editing processing on it; records the video acquisition information and the video editing information as a draft; and sends the draft to the server. The server deploys a second execution environment for executing the draft, acquires the corresponding video to be processed according to the video acquisition information, performs editing processing on the video to be processed according to the video editing information, and performs video synthesis to obtain a video file, realizing the function of web-side video synthesis.
An embodiment of the present disclosure provides a video synthesis method applied to a World Wide Web (web) front end. The method includes: receiving a user's operation on a video to be processed, and recording the operation information as a draft; and sending the draft to a server, where the draft is used for video synthesis after the video to be processed is processed.
According to one or more embodiments of the present disclosure, receiving the user's operation on the video to be processed and recording the operation information as a draft includes: acquiring the video to be processed and performing editing processing on it; and recording the video acquisition information and the video editing information as a draft.
According to one or more embodiments of the present disclosure, before the recording of the operation information as a draft, the method further includes: deploying a first execution environment for forming the draft.
According to one or more embodiments of the present disclosure, the method further includes: receiving a link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; and receiving the user's download request for the link address, and downloading the video file according to the download request.
According to one or more embodiments of the present disclosure, the method further includes: displaying the operation effect of the operation on the video to be processed.
According to one or more embodiments of the present disclosure, the draft is in JavaScript Object Notation (JSON) string format.
The implementation principle and technical effect of the video synthesis method provided in this embodiment are similar to those of the foregoing embodiments and are not repeated here.
An embodiment of the present disclosure further provides a video synthesis method applied to a server. The method includes: receiving a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; and processing the video to be processed according to the draft, and performing video synthesis to obtain a video file.
According to one or more embodiments of the present disclosure, the operation information includes video acquisition information and video editing information, and processing the video to be processed according to the draft includes: acquiring the corresponding video to be processed according to the video acquisition information; and performing editing processing on the video to be processed according to the video editing information.
According to one or more embodiments of the present disclosure, before processing the video to be processed according to the draft, the method further includes: deploying a second execution environment for executing the draft.
According to one or more embodiments of the present disclosure, the method further includes: generating a link address of the video file and sending the link address to the web front end.
The implementation principle and technical effect of the video synthesis method provided in this embodiment are similar to those of the foregoing embodiments and are not repeated here.
Corresponding to the video synthesis method of the above embodiments, FIG. 5 is a structural block diagram of a video synthesis apparatus applied to a web front end provided by an embodiment of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown. Referring to FIG. 5, the video synthesis apparatus includes: a processing module 10 and a sending module 20.
The processing module 10 is configured to receive a user's operation on a video to be processed and record the operation information as a draft; the sending module 20 is configured to send the draft to the server, where the draft is used for video synthesis after the server processes the video to be processed.
In an embodiment of the present disclosure, the processing module 10 is specifically configured to: acquire the video to be processed and perform editing processing on it; and record the video acquisition information and the video editing information as a draft.
In an embodiment of the present disclosure, the processing module 10 is further configured to: deploy a first execution environment for forming a draft.
In an embodiment of the present disclosure, the processing module 10 is further configured to: receive a link address returned by the server, where the link address is the address of the video file obtained after the video synthesis; and receive the user's download request for the link address, and download the video file according to the download request.
In an embodiment of the present disclosure, the processing module 10 is further configured to: display the operation effect of the operation on the video to be processed.
The video synthesis apparatus provided in this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
FIG. 6 is a structural block diagram of a video synthesis apparatus applied to a server provided by an embodiment of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown. Referring to FIG. 6, the video synthesis apparatus includes: a receiving module 30 and a synthesis module 40.
The receiving module 30 is configured to receive a draft sent by a World Wide Web (web) front end, where the draft records a user's operation information on a video to be processed; the synthesis module 40 is configured to process the video to be processed according to the draft and perform video synthesis to obtain a video file.
In an embodiment of the present disclosure, the operation information includes video acquisition information and video editing information, and the synthesis module 40 is specifically configured to: acquire the corresponding video to be processed according to the video acquisition information; and perform editing processing on the video to be processed according to the video editing information.
In an embodiment of the present disclosure, the synthesis module 40 is further configured to: deploy a second execution environment for executing the draft.
In an embodiment of the present disclosure, the synthesis module 40 is further configured to: generate a link address of the video file and send the link address to the web front end.
The video synthesis apparatus provided in this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. As shown in FIG. 7, the electronic device 700 may be arranged on the web front end and includes: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the first aspect and various possible designs of the first aspect.
Similar in structure to FIG. 7, an example of the present disclosure further provides an electronic device arranged on the server side, including: at least one processor and a memory; the memory stores computer-executable instructions; and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method described in the second aspect and various possible designs of the second aspect.
The electronic device 700 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (for example, in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 7 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 700 may include a processing device (for example, a central processing unit or a graphics processor) 701, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), speaker, vibrator, and the like; a storage device 708 including, for example, a magnetic tape, hard disk, and the like; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows an electronic device 700 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium and arranged on the web front end and the server side; the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing device 701, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two, and may be arranged on the web side and the server side. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), or the like, or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the methods shown in the above embodiments.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
An embodiment of the present disclosure further provides a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the method provided by any of the above embodiments.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not constitute a limitation on the unit itself in some cases; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet Protocol addresses".
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The above description is merely a description of the preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art should understand that the disclosure scope involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in a sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.

Claims (17)

  1. A video synthesis method, applied to a World Wide Web (web) front end, wherein the method comprises:
    receiving a user's operation on a video to be processed, and recording operation information as a draft;
    sending the draft to a server, wherein the draft is used for video synthesis after the video to be processed is processed.
  2. The method according to claim 1, wherein the receiving the user's operation on the video to be processed and recording the operation information as a draft comprises:
    acquiring the video to be processed, and performing editing processing on the video to be processed;
    recording video acquisition information and video editing information as the draft.
  3. The method according to claim 1 or 2, wherein before the recording of the operation information as a draft, the method further comprises:
    deploying a first execution environment for forming the draft.
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    receiving a link address returned by the server, wherein the link address is an address of a video file obtained after the video synthesis is performed;
    receiving a download request of the user for the link address, and downloading the video file according to the download request.
  5. The method according to any one of claims 1 to 3, wherein the method further comprises:
    displaying an operation effect of the operation on the video to be processed.
  6. A video synthesis method, applied to a server, wherein the method comprises:
    receiving a draft sent by a World Wide Web (web) front end, wherein the draft records a user's operation information on a video to be processed;
    processing the video to be processed according to the draft, and performing video synthesis to obtain a video file.
  7. The method according to claim 6, wherein the operation information comprises video acquisition information and video editing information, and the processing the video to be processed according to the draft comprises:
    acquiring the corresponding video to be processed according to the video acquisition information;
    performing editing processing on the video to be processed according to the video editing information.
  8. The method according to claim 6 or 7, wherein before the processing the video to be processed according to the draft, the method further comprises:
    deploying a second execution environment for executing the draft.
  9. The method according to any one of claims 6 to 8, wherein the method further comprises:
    generating a link address of the video file, and sending the link address to the web front end.
  10. A video synthesis apparatus, applied to a World Wide Web (web) front end, wherein the apparatus comprises:
    a processing module, configured to receive a user's operation on a video to be processed, and record operation information as a draft;
    a sending module, configured to send the draft to a server, wherein the draft is used for video synthesis after the server processes the video to be processed.
  11. A video synthesis apparatus, applied to a server, wherein the apparatus comprises:
    a receiving module, configured to receive a draft sent by a World Wide Web (web) front end, wherein the draft records a user's operation information on a video to be processed;
    a synthesis module, configured to process the video to be processed according to the draft and perform video synthesis to obtain a video file.
  12. An electronic device, comprising: at least one processor and a memory;
    wherein the memory stores computer-executable instructions;
    the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method according to any one of claims 1 to 5.
  13. An electronic device, comprising: at least one processor and a memory;
    wherein the memory stores computer-executable instructions;
    the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the video synthesis method according to any one of claims 6 to 9.
  14. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the video synthesis method according to any one of claims 1 to 5.
  15. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the video synthesis method according to any one of claims 6 to 9.
  16. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the video synthesis method according to any one of claims 1 to 5, or performs the video synthesis method according to any one of claims 6 to 9.
  17. A computer program, wherein the computer program, when executed by a processor, implements the video synthesis method according to any one of claims 1 to 5, or performs the video synthesis method according to any one of claims 6 to 9.
PCT/CN2022/072326 2021-01-29 2022-01-17 视频合成方法、装置、电子设备及存储介质 WO2022161200A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/263,518 US20240087608A1 (en) 2021-01-29 2022-01-17 Video synthesis method and apparatus, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110130072.8 2021-01-29
CN202110130072.8A CN114827490A (zh) 2021-01-29 2021-01-29 视频合成方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022161200A1 true WO2022161200A1 (zh) 2022-08-04

Family

ID=82525419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072326 WO2022161200A1 (zh) 2021-01-29 2022-01-17 视频合成方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US20240087608A1 (zh)
CN (1) CN114827490A (zh)
WO (1) WO2022161200A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020116716A1 (en) * 2001-02-22 2002-08-22 Adi Sideman Online video editor
US20040150663A1 (en) * 2003-01-14 2004-08-05 Samsung Electronics Co., Ltd. System and method for editing multimedia file using internet
CN104796767A (zh) * 2015-03-31 2015-07-22 北京奇艺世纪科技有限公司 一种云视频编辑方法和系统
CN107295084A (zh) * 2017-06-26 2017-10-24 深圳水晶石数字科技有限公司 一种基于云端的视频编辑系统及方法
CN108965397A (zh) * 2018-06-22 2018-12-07 中央电视台 云端视频编辑方法及装置、编辑设备及存储介质
CN109121009A (zh) * 2018-08-17 2019-01-01 百度在线网络技术(北京)有限公司 视频处理方法、客户端和服务器
CN111063007A (zh) * 2019-12-17 2020-04-24 北京思维造物信息科技股份有限公司 图像生成方法、装置、设备和存储介质
CN112261416A (zh) * 2020-10-20 2021-01-22 广州博冠信息科技有限公司 基于云的视频处理方法、装置、存储介质与电子设备


Also Published As

Publication number Publication date
CN114827490A (zh) 2022-07-29
US20240087608A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
WO2022048478A1 (zh) 多媒体数据的处理方法、生成方法及相关设备
WO2021073315A1 (zh) 视频文件的生成方法、装置、终端及存储介质
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
WO2021196903A1 (zh) 视频处理方法、装置、可读介质及电子设备
WO2021073368A1 (zh) 视频文件的生成方法、装置、终端及存储介质
WO2020220773A1 (zh) 图片预览信息的显示方法、装置、电子设备及计算机可读存储介质
US20220310125A1 (en) Method and apparatus for video production, device and storage medium
US11849211B2 (en) Video processing method, terminal device and storage medium
WO2023284437A1 (zh) 媒体文件处理方法、装置、设备、可读存储介质及产品
WO2023005831A1 (zh) 一种资源播放方法、装置、电子设备和存储介质
US20220394333A1 (en) Video processing method and apparatus, storage medium, and electronic device
CN111435600B (zh) 用于处理音频的方法和装置
US20240005961A1 (en) Video processing method and apparatus, and electronic device and storage medium
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
WO2024046284A1 (zh) 绘制动画生成方法、装置、设备、可读存储介质及产品
US20240064367A1 (en) Video processing method and apparatus, electronic device, and storage medium
WO2021227953A1 (zh) 图像特效配置方法、图像识别方法、装置及电子设备
US20240103802A1 (en) Method, apparatus, device and medium for multimedia processing
WO2022161200A1 (zh) 视频合成方法、装置、电子设备及存储介质
WO2022227859A1 (zh) 信息显示方法、装置、设备及存储介质
WO2023024983A1 (zh) 视频录制方法、设备、存储介质及程序产品
WO2022194025A1 (zh) 互动视频连接方法、装置、电子设备及存储介质
WO2023241283A1 (zh) 用于视频编辑的方法及设备
CN112017261A (zh) 贴纸生成方法、装置、电子设备及计算机可读存储介质
CN112153439A (zh) 互动视频处理方法、装置、设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22745076

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18263518

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/11/2023)