Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to." The term "based on" means "based at least in part on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments." Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are used merely to distinguish one device, module, or unit from another device, module, or unit, and are not intended to limit the order or interdependence of the functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that such references should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The video generation method, apparatus, electronic device, and medium provided by the present disclosure aim to solve the technical problems in the prior art.
The following describes the technical solutions of the present disclosure and how the technical solutions of the present disclosure solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The video generation method provided by the present disclosure can be applied to an application environment as shown in FIG. 1. Specifically, the terminal device 101 is communicatively connected to the first server 102. The terminal device acquires a material address to be edited, where the material address to be edited includes a video storage address to be edited; the terminal device acquires a partial video clip of the video to be edited according to the video storage address to be edited; the terminal device receives a plurality of video editing instructions for the partial video clip of the video to be edited, where the plurality of video editing instructions have an execution order; the terminal device generates an editing record according to the plurality of video editing instructions, the execution order of the plurality of video editing instructions, and the material addresses to be edited corresponding to the plurality of video editing instructions; and, upon receiving a sending instruction, the terminal device sends the editing record to the first server, so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
As will be appreciated by those skilled in the art, a "terminal device" as used herein may be a cell phone, a tablet computer, a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), or the like. The first server may be implemented by a stand-alone server or by a server cluster formed by a plurality of servers.
Referring to FIG. 2, an embodiment of the present disclosure provides a video generation method, which may be applied to the foregoing terminal device. The method includes:
Step S201: acquiring a material address to be edited, where the material address to be edited includes a video storage address to be edited.
The material to be edited is obtained according to the material address to be edited. The material address to be edited may include a web address of the material to be edited. To provide the material address to be edited, the user may input it to the terminal device. Specifically, an application program may be installed on the terminal device, and the user inputs the material address to be edited into the application program, for example, by entering the web address of the material to be edited.
It will be appreciated that the type of material to be edited is not limited, and for example, the material to be edited may include video material, audio material, special effect material, text material, etc. The material is stored in other servers, and the terminal equipment can communicate with the other servers to acquire the material to be edited according to the address of the material to be edited. The manner in which the other servers store the material to be edited is not limited.
The video storage address to be edited is the storage address corresponding to the video material, and may be the web address of the video to be edited. The video to be edited is stored on the server corresponding to the video storage address to be edited.
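The relationship between a material address to be edited and its constituent storage addresses can be illustrated with a minimal sketch. The field names below are illustrative assumptions for explanation only and are not terminology from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialAddress:
    """Illustrative container for a material address to be edited."""
    video_url: str                    # video storage address to be edited
    effect_url: Optional[str] = None  # special effect address to be edited, if any
    audio_url: Optional[str] = None   # audio material address, if any

# A user might initially enter only the video storage address:
address = MaterialAddress(video_url="https://example.com/videos/source.mp4")
```

Other material types (audio, special effects, text) can be referenced through the same structure as they are added during editing.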
Step S202: acquiring a partial video clip of the video to be edited according to the video storage address to be edited.
The length of the video to be edited is not limited. The embodiment of the disclosure can acquire a partial video clip of the video to be edited according to the video storage address to be edited. Specifically, the partial video clip may be obtained by buffering the video while playing it. For example, a user caches and views a partial video clip through an application (app) such as a video player.
There may be one or more videos to be edited. When partial video clips are acquired, a plurality of partial video clips of one video to be edited may be acquired, or partial video clips of a plurality of different videos to be edited may be acquired. For example, if the videos to be edited include a video A, a video B, and a video C, then partial video clips A1 and A2 of video A, a partial video clip B1 of video B, and a partial video clip C1 of video C may all be obtained.
Step S203: a plurality of video editing instructions for a portion of a video clip of a video to be edited are received, the plurality of video editing instructions having an execution order.
The user can operate the terminal device so that it sequentially receives a plurality of video editing instructions for the partial video clip of the video to be edited. The editing instructions generated by the user's operations have an execution order; the specific order is not limited. Editing instructions may include clipping instructions, insertion instructions, rendering instructions, special effect adding instructions, and the like. For example, the user may sequentially input a clipping instruction, a special effect adding instruction, and a rendering instruction for the partial video clip, and the terminal device receives these instructions in that execution order.
The clipping instruction is used to clip the video to be edited; the insertion instruction can insert subtitles or music into the video to be edited; the rendering instruction is used to render the video to be edited; and the special effect adding instruction is used to add a special effect to the video to be edited.
Step S204: generating an editing record according to the plurality of video editing instructions, the execution order of the plurality of video editing instructions, and the material addresses to be edited corresponding to the plurality of video editing instructions.
The editing record records the edits performed on the partial video clip of the video to be edited. The editing record includes the plurality of video editing instructions, the execution order of the plurality of video editing instructions, the material addresses to be edited corresponding to the plurality of video editing instructions, and the like. The editing operations to be performed on the partial video clip of the video to be edited are determined according to the editing instructions.
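The contents of an editing record described above can be sketched as follows. The class and field names are illustrative assumptions; the point is that the record bundles each instruction with its material address and that the list order encodes the execution order.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditInstruction:
    kind: str          # e.g. "clip", "insert", "render", "add_effect"
    params: dict       # instruction-specific parameters
    material_url: str  # material address to be edited that this instruction targets

@dataclass
class EditRecord:
    instructions: List[EditInstruction] = field(default_factory=list)

    def append(self, instruction: EditInstruction) -> None:
        # The position in the list encodes the execution order.
        self.instructions.append(instruction)

record = EditRecord()
record.append(EditInstruction("clip", {"start": 600, "end": 720}, "https://example.com/v.mp4"))
record.append(EditInstruction("add_effect", {"name": "cat_ear"}, "https://example.com/fx/cat_ear"))
record.append(EditInstruction("render", {}, "https://example.com/v.mp4"))
```

Serialized (e.g. as JSON), such a record is what the terminal device would later send to the first server.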
Step S205: upon receiving a sending instruction, sending the editing record to the first server, so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
After the user finishes inputting the editing instructions for the partial video clip of the video to be edited and the complete editing record has been generated, the user can operate the terminal device as needed so that it receives a sending instruction; the sending instruction causes the terminal device to send the editing record to the first server. The first server may communicate with other servers to obtain the material to be edited according to the material address to be edited. After the first server receives the editing record, it can edit the original video corresponding to the material address to be edited according to the editing instructions in the editing record to generate the target video.
The original video corresponding to the material address to be edited is the original video corresponding to the video storage address to be edited within that material address. The same video may be available at different definitions, such as standard-definition, high-definition, and original quality. The video to be edited may be a lower-definition version corresponding to the video storage address to be edited, such as the standard-definition version, while the original video is the highest-definition version corresponding to that address. When a video is buffered and viewed through a video player, standard-definition or high-definition image quality can be selected.
According to the video generation method provided by the embodiment of the present disclosure, the video storage address to be edited is first obtained in order to acquire a partial video clip of the video to be edited. Because the video to be edited has a lower definition and only a partial clip is acquired, the video does not need to be downloaded in full, which saves the terminal's data traffic; moreover, once the partial clip is obtained, whether the video is usable can be determined. An editing record is then generated according to the plurality of video editing instructions, which have an execution order and are received for the partial video clip, together with the corresponding material addresses to be edited, and the editing record is sent to the first server. The first server edits the original video corresponding to the material addresses to be edited according to the editing record to generate the target video. The generated target video has high definition and is generated quickly, the terminal device does not need to edit videos that occupy large storage space, and the performance requirements on the terminal are reduced.
Referring to FIG. 3, optionally, obtaining a partial video clip of the video to be edited according to the video storage address to be edited includes:
S301: receiving a preview instruction for the video to be edited corresponding to the video storage address to be edited, where the preview instruction includes a video preview start time.
When a user needs to acquire a partial video clip of the video to be edited, the clip can be previewed while being cached, in the manner of watching streaming video. The user can operate the terminal device so that it receives a preview instruction for the video to be edited corresponding to the video storage address to be edited. For example, if the terminal device includes a touch screen on which the playback progress bar of the video to be edited is displayed, the user can generate the preview instruction by tapping a position on the progress bar. The preview instruction generated in this way includes a video preview start time, i.e., the time from which the video should be played. For instance, if the video to be edited is 2 hours long and the user wants to preview a partial video clip starting from the 10th minute, the user taps the 10-minute position on the progress bar; the terminal device then receives a preview instruction for the video to be edited corresponding to the video storage address to be edited, and the preview instruction includes the 10th minute of the video as the start time.
It will be appreciated that the preview instruction may also include a video preview duration, which determines how long a portion of the video to be edited should be previewed. If the video preview duration is 2 minutes, the video preview start time plus the video preview duration gives the video preview end time. If the user still wants to continue watching after the preview ends, the terminal can be operated again so that it receives a new preview instruction.
S302: sending the preview instruction to a second server corresponding to the video storage address to be edited, so that the second server sends a partial video clip of the video to be edited starting from the video preview start time.
After receiving the preview instruction, the second server sends a partial video clip of the video to be edited starting from the video preview start time.
When the preview instruction includes a video preview duration, the second server transmits a partial video clip of the video to be edited that starts from the video preview start time and lasts for the video preview duration.
S303: receiving and caching the partial video clip of the video to be edited, and playing the cached partial video clip, where the definition of the video to be edited is lower than the definition of the original video corresponding to the video storage address to be edited.
Because the definition of the partial video clip received by the terminal device is lower than that of the original video corresponding to the video storage address to be edited, the terminal device consumes less data traffic in obtaining it. When the video to be edited is long, receiving and caching only a partial video clip further saves the terminal device's data traffic and reduces its cost.
Optionally, the video generating method further includes:
and generating an edited preview video result and playing the preview video result in real time according to the video clips and the video editing instructions of the video to be edited.
In the embodiment of the present disclosure, the terminal device edits the partial video clip of the video to be edited locally according to the video editing instructions, so as to generate a preview video result and play it in real time. Since the definition of the partial video clip acquired by the terminal device is lower than that of the original video corresponding to the video storage address to be edited, the acquired clip occupies little storage space, which reduces the performance requirements on the terminal device during editing. By generating and playing the edited preview in real time, the user can preliminarily judge the effect of the editing instructions and modify them in time, so that the target video finally obtained by the first server meets the user's requirements.
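The local preview described above amounts to replaying the received editing instructions, in their execution order, over the cached low-definition clip. A minimal sketch, with frames modeled as a plain list and only the clipping instruction implemented (the function names and frame model are assumptions, not part of the disclosure):

```python
def apply_instruction(frames, instruction):
    """Apply one editing instruction to a list of frames (placeholder logic)."""
    if instruction["kind"] == "clip":
        return frames[instruction["start"]:instruction["end"]]
    # Other kinds (insert, render, add_effect) would transform frames here.
    return frames

def preview(frames, instructions):
    """Replay the instructions in their execution order over the cached clip."""
    for instruction in instructions:
        frames = apply_instruction(frames, instruction)
    return frames

clip = list(range(100))  # stand-in for 100 cached low-definition frames
result = preview(clip, [{"kind": "clip", "start": 10, "end": 20}])
```

Because the same instruction list is later sent to the first server as the editing record, the local preview and the server-side result stay consistent.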
Referring to FIG. 4, optionally, the video editing instructions include a special effect adding instruction; when a special effect adding instruction for the partial video clip of the video to be edited is received, the video generation method further includes:
S401: in response to the special effect adding instruction, sending a special effect request to a third server corresponding to the special effect address to be edited, and receiving the special effect to be edited, corresponding to the special effect request, returned by the third server.
The special effect adding instruction corresponds to the special effect address to be edited and is used to add a special effect to the partial video clip of the video to be edited, such as a pendant (an overlay sticker). For example, if the partial video clip includes a person, a "cat ear" effect may be added on the person's head.
When a special effect adding instruction for the partial video clip of the video to be edited is received, a special effect request is sent to the third server corresponding to the special effect address to be edited. When the third server receives the special effect request, it returns the special effect to be edited corresponding to the request. The embodiments of the present disclosure do not limit the specific special effect to be edited.
S402: generating and playing a preview special effect video clip according to the special effect to be edited and the partial video clip of the video to be edited.
After receiving the special effect to be edited returned by the third server in response to the special effect request, the terminal device generates and plays a preview special effect video clip according to the special effect to be edited and the partial video clip of the video to be edited. How to composite a special effect with video is known in the prior art and is not described in the embodiments of the present disclosure. Playing the preview special effect video clip lets the user see whether the special effect is appropriate and decide whether it needs to be adjusted.
Optionally, before sending the special effect request to the third server corresponding to the special effect address to be edited, the video generating method further includes:
and generating a corresponding special effect request according to the special effect adding instruction and the related information of the terminal equipment.
The related information of the terminal device may be the device's capabilities, and the special effect request depends on this information: for the same special effect adding instruction, different device information may yield different special effect requests. If the performance of the terminal device is good, the special effect corresponding to the request is a first type of special effect; if its performance is poor, the special effect corresponding to the request is a second type of special effect. First-type special effects are normal special effects, and second-type special effects are simplified special effects. Playing a simplified special effect places lower demands on device performance than playing a normal special effect.
Optionally, generating a corresponding special effect request according to the special effect adding instruction and the related information of the terminal device includes:
if the model of the terminal equipment is in a preset white list, generating a special effect request for indicating a third server to return a first type of special effect matched with the special effect adding instruction;
and if the model of the terminal equipment is located outside the preset white list, generating a special effect request for indicating the third server to return a second type of special effect matched with the special effect adding instruction.
A white list may be preset on the terminal device; the white list includes the models of various terminal devices. Once the model of a terminal device is determined, its performance is also determined, and the models in the white list are those that meet a preset performance requirement. If the model of the terminal device is in the preset white list, indicating that its performance meets the requirement, a special effect request is generated instructing the third server to return a first type of special effect matching the special effect adding instruction; if the model is outside the preset white list, a special effect request is generated instructing the third server to return a second type of special effect matching the special effect adding instruction. First-type special effects are normal special effects, and second-type special effects are simplified special effects. Through this white-list mechanism, the type of special effect returned by the third server can be better determined, so that the terminal device plays special effects suited to its own performance.
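The white-list branching above can be sketched in a few lines. The model names, set membership test, and request fields are illustrative assumptions standing in for however the terminal actually stores its white list.

```python
# Models that meet the preset performance requirement (illustrative values).
WHITELIST = {"model-a", "model-b"}

def effect_type_for(model: str) -> str:
    """Choose which type of special effect the third server should return."""
    return "normal" if model in WHITELIST else "simple"

def build_effect_request(effect_name: str, model: str) -> dict:
    """Generate a special effect request from the adding instruction and device info."""
    return {"effect": effect_name, "type": effect_type_for(model)}
```

A whitelisted device thus requests the normal (first-type) effect, and any other device requests the simplified (second-type) effect.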
Optionally, sending the editing record to the first server, so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video, includes:
and sending the editing record to the first server, so that the first server edits the original video corresponding to the video storage address to be edited and the first special effects corresponding to the special effect address to be edited according to the editing record to generate the target video.
In the embodiment of the present disclosure, the editing instructions in the editing record include a special effect adding instruction. When the first server receives the editing record, it edits, according to the editing record, the original video corresponding to the video storage address to be edited and the first type of special effect corresponding to the special effect address to be edited, to generate the target video. It can be understood that the editing instructions may further include other types of instructions, and the first server edits the original video corresponding to the video storage address to be edited according to the editing record to generate a target video that meets the requirements.
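The server-side step can be sketched as replaying the editing record against the full-definition materials fetched by their addresses. The helper callables below (`fetch_original`, `apply_instruction`) are assumptions standing in for the server's storage and editing back ends; videos are modeled as plain lists for illustration.

```python
def generate_target_video(edit_record, fetch_original, apply_instruction):
    """Replay an edit record against the full-definition original material.

    edit_record: instructions stored in execution order, each carrying
    the material address ("material_url") it applies to.
    """
    cache = {}    # fetch each material address at most once
    video = None
    for instr in edit_record:
        url = instr["material_url"]
        if url not in cache:
            cache[url] = fetch_original(url)
        if video is None:
            video = cache[url]  # first instruction establishes the working video
        video = apply_instruction(video, instr)
    return video

# Toy run: one clipping instruction over a 10-"frame" original.
record = [{"kind": "clip", "material_url": "v", "start": 0, "end": 3}]
target = generate_target_video(
    record,
    fetch_original=lambda url: list(range(10)),
    apply_instruction=lambda v, i: v[i["start"]:i["end"]] if i["kind"] == "clip" else v,
)
```

Because the same instructions were previewed on the low-definition clip, the high-definition target should match the preview the user approved.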
Referring to FIG. 5, an embodiment of the present disclosure provides a video generating apparatus 50, which may implement the video generation method of the above embodiments. The video generating apparatus 50 may include:
The material address obtaining module 501 is configured to obtain a material address to be edited, where the material address to be edited includes a video storage address to be edited;
the video segment obtaining module 502 is configured to obtain a partial video segment of the video to be edited according to the video storage address to be edited;
an editing instruction receiving module 503, configured to receive a plurality of video editing instructions for the partial video clip of the video to be edited, the plurality of video editing instructions having an execution order;
the record generating module 504 is configured to generate an edit record according to the plurality of video editing instructions, an execution sequence of the plurality of video editing instructions, and addresses of materials to be edited corresponding to the plurality of video editing instructions;
and the sending module 505 is configured to send the editing record to the first server when receiving the sending instruction, so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
According to the video generating apparatus provided by the embodiment of the present disclosure, the video storage address to be edited is first obtained in order to acquire a partial video clip of the video to be edited. Because the video to be edited has a lower definition and only a partial clip is acquired, the video does not need to be downloaded in full, which saves the terminal's data traffic; moreover, once the partial clip is obtained, whether the video is usable can be determined. An editing record is then generated according to the plurality of video editing instructions, which have an execution order and are received for the partial video clip, together with the corresponding material addresses to be edited, and the editing record is sent to the first server. The first server edits the original video corresponding to the material addresses to be edited according to the editing record to generate the target video. The generated target video has high definition and is generated quickly, the terminal device does not need to edit videos that occupy large storage space, and the performance requirements on the terminal are reduced.
The video clip obtaining module 502 may include:
the preview instruction acquisition unit is used for receiving a preview instruction of the video to be edited corresponding to the video storage address to be edited, wherein the preview instruction comprises video preview starting time;
the instruction sending unit is used for sending the preview instruction to a second server corresponding to the video storage address to be edited, so that the second server sends part of video clips of the video to be edited from the video preview starting time;
the video acquisition unit is used for receiving and caching part of video clips of the video to be edited and playing the cached part of video clips of the video to be edited, wherein the definition of the video to be edited is lower than that of the original video corresponding to the storage address of the video to be edited.
The video generating apparatus 50 may further include:
and the editing module is used for generating an edited preview video result and playing the preview video result in real time according to the partial video clips of the video to be edited and the video editing instruction.
The video editing instructions include a special effect adding instruction; when a special effect adding instruction for the partial video clip of the video to be edited is received, the video generating apparatus 50 may further include:
The special effect request module is used for responding to the special effect adding instruction, sending a special effect request to a third server corresponding to the special effect address to be edited, and receiving a special effect to be edited, which is returned by the third server and corresponds to the special effect request;
and the special effect editing module is used for generating and playing a preview special effect video fragment according to the special effect to be edited and the partial video fragment of the video to be edited.
The video generating apparatus 50 may further include:
and the request generation module is used for generating a corresponding special effect request according to the special effect adding instruction and the related information of the terminal equipment.
Wherein the request generation module may further include:
the first request generation unit is used for generating a special effect request for indicating the third server to return a first type of special effect matched with the special effect adding instruction if the model of the terminal equipment is located in a preset white list;
and the second request generating unit is used for generating a special effect request for indicating the third server to return a second type of special effect matched with the special effect adding instruction if the model of the terminal equipment is located outside a preset white list.
The sending module is specifically configured to:
and sending the editing record to the first server, so that the first server edits the original video corresponding to the video storage address to be edited and the first special effects corresponding to the special effect address to be edited according to the editing record to generate the target video.
Referring to FIG. 6, a schematic diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
An electronic device includes a memory and a processor. The processor may be referred to below as the processing device 601, and the memory may include at least one of a read-only memory (ROM) 602, a random access memory (RAM) 603, and a storage device 608, as described below:
as shown, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various suitable actions and processes according to programs stored in a Read Only Memory (ROM) 602 or programs loaded from a storage 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While an electronic device 600 having various devices is shown, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; alternatively, it may exist separately without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a material address to be edited, where the material address to be edited includes a video storage address to be edited; acquire a partial video clip of the video to be edited according to the video storage address to be edited; receive a plurality of video editing instructions for the partial video clip of the video to be edited, the plurality of video editing instructions having an execution order; generate an editing record according to the plurality of video editing instructions, the execution order of the plurality of video editing instructions, and the material addresses to be edited corresponding to the plurality of video editing instructions; and, when receiving a sending instruction, send the editing record to the first server so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. In some cases, the name of a module or unit does not constitute a limitation on the unit itself; for example, a receiving module may also be described as "a unit that obtains at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a video generation method including:
acquiring a material address to be edited, wherein the material address to be edited comprises a video storage address to be edited;
acquiring a partial video clip of the video to be edited according to the video storage address to be edited;
receiving a plurality of video editing instructions for the partial video clip of the video to be edited, the plurality of video editing instructions having an execution order;
generating an editing record according to the plurality of video editing instructions, the execution order of the plurality of video editing instructions, and the material addresses to be edited corresponding to the plurality of video editing instructions;
and when receiving a sending instruction, sending the editing record to the first server so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
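The client-side flow enumerated above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the class names, field names (`kind`, `params`, `material_address`), and the callable standing in for the first server are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditInstruction:
    """One video editing instruction, e.g. a trim or a special effect addition."""
    kind: str    # hypothetical instruction type, e.g. "trim" or "add_effect"
    params: dict

@dataclass
class EditingRecord:
    """Editing record: the instructions, their execution order (list order),
    and the material address the instructions apply to."""
    material_address: str
    instructions: List[EditInstruction] = field(default_factory=list)

def build_editing_record(material_address: str,
                         ordered_instructions: List[EditInstruction]) -> EditingRecord:
    """Assemble the editing record; the list order encodes the execution order."""
    return EditingRecord(material_address, list(ordered_instructions))

def send_editing_record(record: EditingRecord, first_server) -> dict:
    """On a sending instruction, hand the record to the first server, which
    edits the original video to generate the target video (simulated here)."""
    return first_server(record)

# Demo with a stand-in first server that echoes back a summary.
record = build_editing_record(
    "https://example.com/videos/raw.mp4",   # hypothetical material address
    [EditInstruction("trim", {"start": 0.0, "end": 5.0}),
     EditInstruction("add_effect", {"effect_address": "fx://glow"})],
)
target = send_editing_record(
    record,
    lambda r: {"target_video": r.material_address, "steps": len(r.instructions)},
)
print(target["steps"])  # 2
```

Note that the record deliberately carries only instructions and addresses, not video data, which is why the original full-resolution video never has to leave the first server.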
According to one or more embodiments of the present disclosure, acquiring a partial video clip of a video to be edited according to a video storage address to be edited includes:
receiving a preview instruction for the video to be edited corresponding to the video storage address to be edited, where the preview instruction includes a video preview start time;
sending the preview instruction to a second server corresponding to the video storage address to be edited, so that the second server sends the partial video clip of the video to be edited starting from the video preview start time;
and receiving and caching the partial video clip of the video to be edited, and playing the cached partial video clip of the video to be edited, where the definition of the video to be edited is lower than that of the original video corresponding to the video storage address to be edited.
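A minimal sketch of the preview path described above, under stated assumptions: a plain callable stands in for the second server, and simple dicts stand in for the low-definition clips it streams back.

```python
from collections import deque

def fetch_preview_clips(second_server, start_time: float, count: int = 3):
    """Ask the storage (second) server for low-definition preview clips,
    beginning at the requested video preview start time."""
    # One request per clip timestamp; a real client would stream instead.
    return [second_server(start_time + i) for i in range(count)]

def play_buffered(clips):
    """Cache the received clips in a FIFO buffer, then play them in order."""
    buffer = deque(clips)            # receiving and caching
    while buffer:
        yield buffer.popleft()       # playing from the cache

# Stand-in second server: returns one low-definition clip per timestamp.
fake_server = lambda t: {"start": t, "quality": "low"}
played = list(play_buffered(fetch_preview_clips(fake_server, start_time=10.0)))
print([c["start"] for c in played])  # [10.0, 11.0, 12.0]
```

Serving a lower-definition copy for preview keeps the editing client responsive while the full-resolution original stays on the server.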
In accordance with one or more embodiments of the present disclosure, the video generation method further includes:
and generating an edited preview video result according to the partial video clip of the video to be edited and the video editing instructions, and playing the preview video result in real time.
According to one or more embodiments of the present disclosure, the material addresses to be edited include a special effect address to be edited, and the video editing instructions include a special effect adding instruction; upon receiving a special effect adding instruction for the partial video clip of the video to be edited, the video generation method further includes:
in response to the special effect adding instruction, sending a special effect request to a third server corresponding to the special effect address to be edited, and receiving the special effect to be edited that is returned by the third server and corresponds to the special effect request;
and generating and playing a preview special effect video clip according to the special effect to be edited and the partial video clip of the video to be edited.
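The request-and-preview step above can be illustrated with the following sketch; the third server is simulated by a callable, and string tagging stands in for real effect rendering, both of which are assumptions for the example only.

```python
def request_effect(third_server, effect_request: dict):
    """Send the special effect request to the effects (third) server and
    return the effect to be edited that it sends back."""
    return third_server(effect_request)

def composite_preview(clip_frames, effect):
    """Overlay the returned effect on the preview clip frames so the result
    can be played immediately; string tagging stands in for rendering."""
    return [f"{frame}+{effect}" for frame in clip_frames]

# Stand-in third server that echoes the requested effect identifier.
fake_third_server = lambda req: req["effect_id"]
effect = request_effect(fake_third_server, {"effect_id": "glow"})
preview = composite_preview(["f0", "f1"], effect)
print(preview)  # ['f0+glow', 'f1+glow']
```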
According to one or more embodiments of the present disclosure, before sending the special effect request to the third server corresponding to the special effect address to be edited, the video generation method further includes:
generating a corresponding special effect request according to the special effect adding instruction and related information of the terminal device.
According to one or more embodiments of the present disclosure, generating a corresponding special effect request according to a special effect adding instruction and related information of a terminal device includes:
if the model of the terminal device is in a preset whitelist, generating a special effect request instructing the third server to return a first type of special effect matching the special effect adding instruction;
and if the model of the terminal device is outside the preset whitelist, generating a special effect request instructing the third server to return a second type of special effect matching the special effect adding instruction.
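The whitelist branching above can be sketched as below. The whitelist contents, the tier labels, and the request field names are hypothetical; only the branch structure mirrors the described method.

```python
# Hypothetical preset whitelist of device models, e.g. models known to
# handle the richer first-type effects.
DEVICE_WHITELIST = {"PhoneX", "TabletPro"}

def build_effect_request(effect_id: str, device_model: str) -> dict:
    """Pick the effect tier from the device model: whitelisted models get
    the first type of special effect, all other models the second type."""
    tier = "first" if device_model in DEVICE_WHITELIST else "second"
    return {"effect_id": effect_id, "effect_type": tier}

print(build_effect_request("glow", "PhoneX")["effect_type"])    # first
print(build_effect_request("glow", "OldPhone")["effect_type"])  # second
```

Keying the request on the device model lets the third server degrade gracefully for less capable terminals without any change on the server side.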
According to one or more embodiments of the present disclosure, sending the editing record to the first server so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video includes:
sending the editing record to the first server, so that the first server edits, according to the editing record, the original video corresponding to the video storage address to be edited and the first special effect corresponding to the special effect address to be edited, to generate the target video.
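On the first server's side, replaying the editing record could look roughly like this sketch. Frames are represented as strings, and the instruction schema (`kind`, `start`, `end`, `effect_address`) is an assumption made for illustration, not the disclosed format.

```python
def apply_editing_record(original_frames, effects_store, record):
    """Replay the editing record on the original full-resolution video:
    each instruction is applied in the recorded execution order."""
    frames = list(original_frames)
    for ins in record:                      # record: ordered list of dicts
        if ins["kind"] == "trim":
            frames = frames[ins["start"]:ins["end"]]
        elif ins["kind"] == "add_effect":
            # Look up the first special effect by its effect address.
            effect = effects_store[ins["effect_address"]]
            frames = [f"{frame}+{effect}" for frame in frames]
    return frames

target = apply_editing_record(
    ["f0", "f1", "f2", "f3"],               # original video frames
    {"fx://glow": "glow"},                  # effects keyed by address
    [{"kind": "trim", "start": 1, "end": 3},
     {"kind": "add_effect", "effect_address": "fx://glow"}],
)
print(target)  # ['f1+glow', 'f2+glow']
```

Because the server replays the same ordered instructions the client recorded, the full-resolution target video matches the low-definition preview the user saw while editing.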
According to one or more embodiments of the present disclosure, there is provided a video generating apparatus including:
The material address acquisition module is used for acquiring a material address to be edited, wherein the material address to be edited comprises a video storage address to be edited;
the video segment acquisition module is used for acquiring partial video segments of the video to be edited according to the video storage address to be edited;
the editing instruction receiving module is used for receiving a plurality of video editing instructions for the partial video clip of the video to be edited, the plurality of video editing instructions having an execution order;
the record generation module is used for generating an editing record according to the plurality of video editing instructions, the execution order of the plurality of video editing instructions, and the material addresses to be edited corresponding to the plurality of video editing instructions;
and the sending module is used for sending the editing record to the first server when receiving the sending instruction, so that the first server edits the original video corresponding to the material address to be edited according to the editing record to generate the target video.
In accordance with one or more embodiments of the present disclosure, a video clip acquisition module may include:
the preview instruction acquisition unit is used for receiving a preview instruction for the video to be edited corresponding to the video storage address to be edited, where the preview instruction includes a video preview start time;
the instruction sending unit is used for sending the preview instruction to a second server corresponding to the video storage address to be edited, so that the second server sends the partial video clip of the video to be edited starting from the video preview start time;
and the video acquisition unit is used for receiving and caching the partial video clip of the video to be edited and playing the cached partial video clip of the video to be edited, where the definition of the video to be edited is lower than that of the original video corresponding to the video storage address to be edited.
According to one or more embodiments of the present disclosure, the video generating apparatus may further include:
and the editing module is used for generating an edited preview video result according to the partial video clip of the video to be edited and the video editing instructions, and playing the preview video result in real time.
According to one or more embodiments of the present disclosure, the material addresses to be edited include a special effect address to be edited, and the video editing instructions include a special effect adding instruction; upon receiving a special effect adding instruction for the partial video clip of the video to be edited, the video generating apparatus may further include:
the special effect request module is used for, in response to the special effect adding instruction, sending a special effect request to a third server corresponding to the special effect address to be edited, and receiving the special effect to be edited that is returned by the third server and corresponds to the special effect request;
and the special effect editing module is used for generating and playing a preview special effect video clip according to the special effect to be edited and the partial video clip of the video to be edited.
According to one or more embodiments of the present disclosure, the video generating apparatus may further include:
and the request generation module is used for generating a corresponding special effect request according to the special effect adding instruction and the related information of the terminal equipment.
In accordance with one or more embodiments of the present disclosure, the request generation module may further include:
the first request generation unit is used for generating a special effect request instructing the third server to return a first type of special effect matching the special effect adding instruction if the model of the terminal device is in a preset whitelist;
and the second request generation unit is used for generating a special effect request instructing the third server to return a second type of special effect matching the special effect adding instruction if the model of the terminal device is outside the preset whitelist.
The sending module is specifically configured to:
send the editing record to the first server, so that the first server edits, according to the editing record, the original video corresponding to the video storage address to be edited and the first special effect corresponding to the special effect address to be edited, to generate the target video.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the video generation method according to any of the above embodiments.
According to one or more embodiments of the present disclosure, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the video generation method of any of the above embodiments.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the features described above with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.