Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are used only to distinguish different devices, modules, or units; they are not intended to limit those devices, modules, or units to being different ones, nor to limit the order or interdependence of the functions they perform.
It is noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged among a plurality of devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
In one embodiment, a method for processing video is provided. As shown in fig. 1, the method includes:
Step S101, receiving an upload instruction for a target video;
The embodiment of the disclosure can be applied to a terminal. An application client having functions of synthesizing videos and uploading the synthesized videos can be installed in the terminal; a user can edit a target video in the application client, for example by cutting it or adding special effects, and then synthesize the edited video to obtain the synthesized video. The target video is a video to be synthesized, which may be a video already stored in the terminal, a video shot by the user through the application client, or another video; this is not limited by the disclosure.
Step S102, synthesizing the target video based on the upload instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
In the disclosed embodiment, two SDKs (Software Development Kits) may be provided: an editing SDK and an uploading SDK. The editing SDK and the uploading SDK each perform data interaction with the application client in the form of buffers and are scheduled by the application client: the editing SDK produces data, that is, it synthesizes the target video, while the uploading SDK consumes data, that is, it acquires the synthesized video data in real time and uploads it.
Step S103, repeatedly executing the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server, until the synthesized video data of the entire target video has been acquired and uploaded to the preset server.
Since both the acquisition of the synthesized video data and its synchronous uploading to the server happen in real time, these two steps need to be repeated until the target video has been completely synthesized, yielding the complete synthesized video data, and that complete data has been uploaded to the server.
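For illustration only, the produce-and-consume loop described above can be sketched as a thread-based pipeline. The function names, chunk size, and the in-memory "server" are hypothetical stand-ins, not the actual SDK interfaces:

```python
import queue
import threading

def editing_sdk(target_video: bytes, chunk_size: int, buffer: "queue.Queue") -> None:
    """Producer: synthesize the target video piece by piece (simulated here
    by slicing the input) and hand each synthesized piece to the buffer."""
    for start in range(0, len(target_video), chunk_size):
        buffer.put(target_video[start:start + chunk_size])
    buffer.put(None)  # sentinel: synthesis finished

def uploading_sdk(buffer: "queue.Queue", server: list) -> None:
    """Consumer: fetch synthesized pieces in real time and upload them
    (simulated by appending to a list standing in for the preset server)."""
    while True:
        chunk = buffer.get()
        if chunk is None:
            break
        server.append(chunk)

def synthesize_and_upload(target_video: bytes, chunk_size: int = 4) -> bytes:
    """Run synthesis and uploading concurrently, as steps S102-S103 describe."""
    buffer: "queue.Queue" = queue.Queue()
    server: list = []
    producer = threading.Thread(target=editing_sdk, args=(target_video, chunk_size, buffer))
    consumer = threading.Thread(target=uploading_sdk, args=(buffer, server))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    return b"".join(server)
```

Because the consumer drains the buffer while the producer is still filling it, uploading overlaps with synthesis instead of waiting for it to finish.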
In the embodiment of the disclosure, a terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads it to a preset server, repeating the acquisition and uploading steps until the synthesized video data of the entire target video has been acquired and uploaded to the preset server. In this way, once the user initiates the upload instruction on the terminal, the terminal executes synthesis and uploading simultaneously, so that synthesized data reaches the server in real time while the target video is still being synthesized; this reduces the total time of synthesis plus uploading and improves the user experience.
In another embodiment, a method for processing video is provided. As shown in fig. 2, the method includes:
Step S201, receiving an upload instruction for a target video;
The embodiment of the disclosure can be applied to a terminal. An application client having functions of synthesizing videos and uploading the synthesized videos can be installed in the terminal; a user can edit a target video in the application client, for example by cutting it or adding special effects, and then synthesize the edited video to obtain the synthesized video. The target video is a video to be synthesized, which may be a video already stored in the terminal, a video shot by the user through the application client, or another video; this is not limited by the disclosure.
The terminal can have the following characteristics:
(1) In terms of hardware architecture, the device has a central processing unit, a memory, an input unit, and an output unit; that is, it is often a microcomputer device with a communication function. It can also offer various input modes, such as a keyboard, mouse, touch screen, microphone, and camera, which can be adjusted as needed. Likewise, the device often has multiple output modes, such as a receiver and a display screen, which can also be adjusted as needed;
(2) In terms of software, the device must have an operating system, such as Windows Mobile, Symbian, Palm, Android, or iOS. These operating systems are increasingly open, and the countless personalized applications developed on these open platforms, such as address books, calendars, notepads, calculators, and various games, satisfy personalized user needs to a great extent;
(3) In terms of communication capability, the device has flexible access modes and high-bandwidth communication performance, and can automatically adjust its communication mode according to the selected service and environment, which is convenient for users. The device can support GSM (Global System for Mobile Communications), WCDMA (Wideband Code Division Multiple Access), CDMA2000, TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), Wi-Fi, WiMAX (Worldwide Interoperability for Microwave Access), and so on, thereby adapting to networks of various standards and supporting not only voice services but also various wireless data services;
(4) In terms of function, the device increasingly emphasizes humanization, personalization, and multi-functionality. With the development of computer technology, devices have moved from a device-centered mode to a human-centered mode, integrating embedded computing, control technology, artificial intelligence, and biometric authentication, which fully embodies the human-oriented purpose. Thanks to advances in software, the device can be adjusted and configured to individual needs and is thus more personalized. Meanwhile, the device integrates numerous pieces of software and hardware, and its functions grow ever more powerful.
In practical applications, a publishing page is preset in the application client, and a function button for synthesizing the target video and uploading the synthesized video, such as a "publish" button, can be arranged in the publishing page. When the user selects the target video in the publishing page and then clicks the "publish" button, the application client synthesizes the target video and uploads the synthesized data in real time.
Step S202, synthesizing the target video based on the upload instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
In the disclosed embodiment, two SDKs (Software Development Kits) may be provided: an editing SDK and an uploading SDK. As shown in fig. 3, the editing SDK and the uploading SDK each perform data interaction with the application client in the form of buffers and are scheduled by the application client: the editing SDK produces data, that is, it synthesizes the target video, while the uploading SDK consumes data, that is, it acquires the synthesized video data in real time and uploads it.
In a preferred embodiment of the present disclosure, after synthesizing the target video, the method further includes:
storing the synthesized video data to a preset storage space;
the method comprises the steps of acquiring synthesized video data in real time and synchronously uploading the synthesized video data to a preset server, and comprises the following steps:
acquiring synthesized video data from a storage space;
and uploading the synthesized video data to a preset server.
Specifically, after receiving the upload instruction, the application client may call the editing SDK and the uploading SDK at the same time. While the editing SDK is synthesizing the target video, it transmits the synthesized video data to the application client in real time in the form of buffers, and the application client stores the received video data in a preset storage space, such as a cache. When the uploading SDK is called, it acquires the synthesized video data from the storage space in the form of buffers and uploads the acquired video data to the preset server.
Further, each time the editing SDK sends a piece of synthesized video data to the client, it sends a notification to the uploading SDK; upon receiving the notification, the uploading SDK fetches the not-yet-uploaded video data from the storage space.
In a preferred embodiment of the present disclosure, synthesizing a target video includes:
acquiring target video data of a target video;
synthesizing the target video data to obtain corresponding synthesized data, and marking the data size of the synthesized data and the offset of the synthesized data; the offset is used to characterize the position offset of the synthesized data in the target video.
Specifically, when the editing SDK synthesizes video data, it has a synthesis rate, for example, 2 MB of data synthesized per second; the synthesis rate is affected by factors such as the hardware performance of the terminal and the synthesis algorithm. The editing SDK may obtain target video data from the target video at the synthesis rate, synthesize that data to obtain the corresponding synthesized data, and mark the data size of the synthesized data and its offset.
The offset is used to characterize the position offset of the synthesized data in the video to be uploaded. The position offset may be an address offset of the data. For example, if the synthesis rate of the editing SDK is 2 MB per second, then in the 1st second the editing SDK obtains 2 MB of data from the target video and synthesizes it to obtain the corresponding synthesized data, marking the synthesized data as 2 MB in size and as occupying the 0th to 2nd MB of the video to be uploaded.
Alternatively, the position offset may be a time offset of the data. For example, if the synthesis rate of the editing SDK is 2 MB per second, then in the 1st second the editing SDK obtains 2 MB of data from the target video and synthesizes it to obtain the corresponding synthesized data, marking the synthesized data as 2 MB in size and as covering the 0th to 1st second of the video to be uploaded.
Of course, other types of offsets are also applicable to the embodiment of the present disclosure, and may be set according to actual requirements in practical applications, which is not limited by the embodiment of the present disclosure.
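The marking described above, recording each synthesized piece's size together with both offset variants, can be sketched as follows. The structure and field names are illustrative assumptions, not the disclosure's actual data format:

```python
from dataclasses import dataclass

RATE_BYTES_PER_SEC = 2 * 1024 * 1024  # assumed synthesis rate: 2 MB per second

@dataclass
class SynthesizedChunk:
    data: bytes
    size: int           # data size of the synthesized piece
    byte_offset: int    # position offset as an address offset
    time_offset: float  # position offset as a time offset, in seconds

def mark_chunks(video: bytes, rate: int = RATE_BYTES_PER_SEC) -> list:
    """Split the video into per-second pieces and mark each piece with its
    data size, address offset, and time offset, as the editing SDK does."""
    chunks = []
    for start in range(0, len(video), rate):
        data = video[start:start + rate]
        chunks.append(SynthesizedChunk(data, len(data), start, start / rate))
    return chunks
```

Either offset lets the receiver place a piece within the video to be uploaded without seeing the other pieces first.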
Furthermore, after continuously receiving multiple pieces of synthesized video data, the application client can splice them together, which makes the subsequent fragmentation of the synthesized video data convenient. For example, if the synthesis rate of the editing SDK is 2 MB per second, then after 3 seconds the application client has received three 2 MB pieces of synthesized video data, and it can splice them in order, based on their offsets, into one 6 MB piece of synthesized video data.
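The splicing step amounts to ordering the received pieces by offset and concatenating them; a minimal, hypothetical helper:

```python
def splice(chunks: list) -> bytes:
    """Splice independently received (offset, data) pieces into one
    contiguous byte string, ordered by their position offsets."""
    buffer = bytearray()
    for offset, data in sorted(chunks, key=lambda c: c[0]):
        # Each piece must start exactly where the previous one ended.
        assert offset == len(buffer), "gap or overlap between pieces"
        buffer.extend(data)
    return bytes(buffer)
```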
Still further, the embodiments of the present disclosure may determine whether synthesis is abnormal based on the offsets of the video data already synthesized. Specifically, in practical applications the editing SDK synthesizes in the order of the target video data within the target video, so if synthesis is normal, the offsets of the synthesized video data never exceed the range of the target video. If the offset of synthesized video data received by the application client does exceed the range of the target video, it can be determined that synthesis has failed, and the uploading SDK is notified to terminate uploading.
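The abnormality check, an offset pointing past the end of the target video signals a failed synthesis, could be expressed as a simple range test (names here are illustrative):

```python
def synthesis_ok(chunk_offset: int, chunk_size: int, target_len: int) -> bool:
    """Return False (synthesis failed; uploading should be terminated) when
    a synthesized piece would fall outside the target video's range."""
    return 0 <= chunk_offset and chunk_offset + chunk_size <= target_len
```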
In a preferred embodiment of the present disclosure, before the step of uploading the video data to the preset server, the method further includes:
performing fragmentation on the synthesized video data, and marking the ID and upload status of each fragment; the upload status is either uploaded or not uploaded;
wherein uploading the video data to the preset server includes:
determining the IDs of fragments whose upload status is not uploaded;
acquiring the video data of those not-uploaded fragments according to their IDs;
uploading the video data of the not-uploaded fragments to the server;
and, when uploading succeeds, changing the fragment status from not uploaded to uploaded; when uploading fails, repeatedly executing the step of uploading the fragment's video data to the server until it succeeds, and then changing the fragment status from not uploaded to uploaded.
Because the uploading SDK involves logic such as fragment uploading, fragment retry, and dynamic adjustment of fragment size, it needs to decouple file input, so an on-demand claiming scheme is employed: each fragment includes a fragment ID, video data, the data size of the video data, and an upload status. The application client records the fragment IDs whose data has been handed to the SDK; when a requested fragment ID does not yet exist, the client continues reading video data from the end of the last fragment and records the new fragment; when the requested fragment ID exists, the client reads the video data corresponding to that fragment.
Specifically, the application client fragments the spliced synthesized data and marks the ID and upload status of each fragment. When the uploading SDK acquires fragments, it reads the upload status of each fragment in ID order, acquires the video data of fragments whose status is not uploaded, and uploads that video data to the server. When uploading succeeds, the fragment status is changed from not uploaded to uploaded; when uploading fails, the step of uploading the fragment's video data to the server is repeated until it succeeds, after which the fragment status is changed from not uploaded to uploaded.
For example, suppose that at the current moment the application client has spliced together 6 MB of video data. It fragments the 6 MB into six 1 MB fragments and marks the ID and upload status of each. When the uploading SDK acquires video data, it reads each fragment's upload status by fragment ID, uploads the video data of fragments whose status is not uploaded to the server, and, on success, changes each fragment's status to uploaded.
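A sketch of the fragmentation-with-retry logic described above. The fragment size, state strings, and the `send` callback are illustrative assumptions, not the actual SDK API:

```python
FRAGMENT_SIZE = 1 * 1024 * 1024  # assumed fragment size: 1 MB

def fragment(data: bytes, size: int = FRAGMENT_SIZE) -> list:
    """Slice spliced video data into fragments, marking each with an ID
    and an upload status of either 'uploaded' or 'not uploaded'."""
    return [
        {"id": i, "data": data[off:off + size], "status": "not uploaded"}
        for i, off in enumerate(range(0, len(data), size))
    ]

def upload_fragments(fragments: list, send) -> None:
    """Upload every not-yet-uploaded fragment; retry a failed fragment
    until the server confirms it, then flip its status to 'uploaded'."""
    for frag in fragments:
        if frag["status"] == "uploaded":
            continue
        while not send(frag["id"], frag["data"]):  # retry whole fragment on failure
            pass
        frag["status"] = "uploaded"
```

Because status lives with each fragment, the uploading SDK can resume after a failure without re-sending fragments that already succeeded.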
Further, in practical applications, when the server receives a fragment it returns a transmission-success message, and on receiving that message the terminal can determine that the upload succeeded. If the upload of a fragment fails, the whole fragment can be retransmitted.
Further, when the application client fragments the video data, it can generate verification information from each fragment's video data and upload the verification information together with the fragment. After receiving a fragment, the server generates its own verification information from the fragment's video data and compares it with the verification information uploaded by the uploading SDK. If the two are identical, the fragment is judged correct; otherwise the fragment is judged wrong and must be retransmitted, and the terminal is notified to retransmit it.
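The disclosure does not specify what the verification information is; as one plausible realization, a digest over the fragment's bytes (MD5 here is an assumption, any checksum would do) fits the compare-and-retransmit description:

```python
import hashlib

def checksum(fragment_data: bytes) -> str:
    """Verification information generated from a fragment's video data."""
    return hashlib.md5(fragment_data).hexdigest()

def server_accepts(fragment_data: bytes, uploaded_checksum: str) -> bool:
    """Server side: regenerate the checksum from the received bytes and
    compare; a mismatch means the fragment is wrong and must be resent."""
    return checksum(fragment_data) == uploaded_checksum
```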
In practical applications, fragments may be uploaded serially, for example in order of fragment ID, or in parallel, for example simultaneously through multiple threads.
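The serial and parallel upload modes might be contrasted as follows; the thread count and the `send` callback are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_serial(fragments: list, send) -> None:
    """Upload fragments one by one, in order of fragment ID."""
    for frag_id, data in sorted(fragments):
        send(frag_id, data)

def upload_parallel(fragments: list, send, workers: int = 4) -> None:
    """Upload several fragments simultaneously through a thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(send, frag_id, data) for frag_id, data in fragments]
        for f in futures:
            f.result()  # propagate any upload error
```

Because each fragment carries its own offset and ID, the server can reassemble them regardless of arrival order, which is what makes the parallel mode safe.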
Step S203, repeatedly executing the steps of acquiring the synthesized video data in real time and synchronously uploading it to the preset server, until the synthesized video data of the entire target video has been acquired and uploaded to the preset server;
Since both the acquisition of the synthesized video data and its synchronous uploading to the server happen in real time, these two steps need to be repeated until the target video has been completely synthesized, yielding the complete synthesized video data, and that complete data has been uploaded to the server.
Step S204, when the synthesized video data of the entire target video has been acquired, generating file header data of the target video;
Step S205, uploading the file header data to the preset server.
In practical applications, a complete video file includes two parts: a file header and a file body. The data in the file body is the complete video data, and the data in the file header is information about the video file, including an index of offsets. Specifically, only after the editing SDK has finished synthesizing the entire target video are the offsets of all the video data known; therefore, after the complete synthesized video data of the target video is obtained, the header data of the target video is generated and sent to the application client. When the application client receives the header data, if there are fragments not yet uploaded, it continues uploading them until all fragments are uploaded, and then uploads the header data; if all fragments are already uploaded, it uploads the header data directly. After receiving the header data, the server splices the header data with the fragments to obtain the synthesized video. In this way, the total time for synthesis and uploading is approximately the longer of the two, not their sum. For example, if synthesizing the target video takes 8 seconds and uploading the synthesized video takes 12 seconds, then in the embodiment of the present disclosure synthesis and uploading take 12 to 13 seconds in total instead of 20; likewise, if synthesis takes 20 seconds and uploading takes 10 seconds, synthesis and uploading take 20 to 21 seconds in total instead of 30.
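The timing claim above, total time roughly equal to the slower stage rather than the sum, can be checked with a toy estimate. The one-second header tail is an assumption matching the disclosure's own "12 to 13" and "20 to 21" slack:

```python
def sequential_total(synthesis_s: float, upload_s: float) -> float:
    """Synthesize first, then upload: the stage times add up."""
    return synthesis_s + upload_s

def pipelined_total(synthesis_s: float, upload_s: float,
                    header_tail_s: float = 1.0) -> float:
    """Synthesize and upload concurrently: total is roughly the slower
    stage plus a small tail for uploading the file header at the end."""
    return max(synthesis_s, upload_s) + header_tail_s
```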
Further, in practical applications the editing SDK synthesizes in the order of the target video data within the target video, so if synthesis is normal, the offsets of the synthesized video data never exceed the range of the target video. If the synthesized video data received by the application client extends beyond the target video (data other than the header data), it can likewise be determined that synthesis has failed, and the uploading SDK is notified to terminate uploading.
In the embodiment of the disclosure, a terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads it to a preset server, repeating the acquisition and uploading steps until the synthesized video data of the entire target video has been acquired and uploaded to the preset server. In this way, once the user initiates the upload instruction on the terminal, the terminal executes synthesis and uploading simultaneously, so that synthesized data reaches the server in real time while the target video is still being synthesized; this reduces the total time of synthesis plus uploading and improves the user experience.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to still another embodiment of the present disclosure, and as shown in fig. 4, the apparatus of this embodiment may include:
a receiving module 401, configured to receive an upload instruction for a target video;
the processing module 402 is configured to synthesize a target video based on an upload instruction, acquire synthesized video data in real time, and upload the synthesized video data to a preset server in synchronization;
and the processing module is repeatedly called until the synthesized video data of the entire target video has been acquired and uploaded to the preset server.
In a preferred embodiment of the present disclosure, the apparatus further includes:
the generating module is used for generating file header data of the target video when the synthesized video data of all the target videos are obtained;
and the processing module is also used for uploading the file header data to a preset server.
In a preferred embodiment of the present disclosure, the apparatus further includes:
the storage module is used for storing the synthesized video data into a preset storage space after synthesizing the target video;
the processing module comprises:
a first obtaining sub-module, configured to obtain synthesized video data from a storage space;
and the uploading sub-module is used for uploading the synthesized video data to a preset server.
In a preferred embodiment of the present disclosure, the processing module includes:
the second obtaining submodule is used for obtaining target video data of the target video;
the synthesis submodule is used for carrying out synthesis processing on the target video data to obtain corresponding synthesized data;
a marking submodule for marking the data size of the synthesized data and the offset of the synthesized data; the offset is used for representing the position offset of the synthesized data in the video to be uploaded.
In a preferred embodiment of the present disclosure, the apparatus further includes:
a fragmentation module, configured to, before the step of uploading the video data to the preset server, fragment the synthesized video data and mark the ID and upload status of each fragment; the upload status is either uploaded or not uploaded;
the processing module comprises:
a determining submodule, configured to determine the IDs of fragments whose upload status is not uploaded;
a third acquiring submodule, configured to acquire the video data of the not-uploaded fragments according to their IDs;
a sending submodule, configured to upload the video data of the not-uploaded fragments to the server;
and an updating submodule, configured to change the fragment status from not uploaded to uploaded when uploading succeeds; when uploading fails, the sending submodule is repeatedly called until uploading succeeds, and the fragment status is then changed from not uploaded to uploaded.
The video processing apparatus of this embodiment can execute the video processing methods shown in the first and second embodiments of the present disclosure, and the implementation principles thereof are similar, and are not described herein again.
In the embodiment of the disclosure, a terminal receives an upload instruction for a target video, synthesizes the target video based on the upload instruction, acquires the synthesized video data in real time, and synchronously uploads it to a preset server, repeating the acquisition and uploading steps until the synthesized video data of the entire target video has been acquired and uploaded to the preset server. In this way, once the user initiates the upload instruction on the terminal, the terminal executes synthesis and uploading simultaneously, so that synthesized data reaches the server in real time while the target video is still being synthesized; this reduces the total time of synthesis plus uploading and improves the user experience.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes a memory and a processor, where the processor may be referred to as the processing device 501 below, and the memory may include at least one of a read-only memory (ROM) 502, a random access memory (RAM) 503, and a storage device 508 below, as follows:
as shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive an uploading instruction for a target video; synthesize the target video based on the uploading instruction, acquire the synthesized video data in real time, and synchronously upload the synthesized video data to a preset server; and repeat the steps of synthesizing the target video, acquiring the synthesized video data in real time, and synchronously uploading the synthesized video data to the preset server until the complete synthesized video data of the target video has been acquired and uploaded to the preset server.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a video processing method, comprising:
receiving an uploading instruction for a target video;
synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time, and synchronously uploading the synthesized video data to a preset server;
and repeating the steps of acquiring the synthesized video data in real time and synchronously uploading the synthesized video data to the preset server until all synthesized video data of the target video has been acquired and uploaded to the preset server.
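As an illustrative, non-limiting sketch of the basic method above, the synthesize-and-upload loop might look as follows; all class and function names here are hypothetical and not part of the disclosure:

```python
# Minimal sketch of the synthesize-and-upload loop: each segment of the
# target video is synthesized in turn and uploaded to the (preset) server
# as soon as it is ready, instead of waiting for the whole file.
# All names below are hypothetical.

class FakeServer:
    """Stand-in for the preset server: simply records uploaded data."""
    def __init__(self):
        self.received = []

    def upload(self, data):
        self.received.append(data)


def synthesize_and_upload(raw_segments, server):
    """Synthesize each raw segment and synchronously upload the result,
    repeating until the complete video has been uploaded."""
    for raw in raw_segments:
        synthesized = raw.upper()   # placeholder for real video synthesis
        server.upload(synthesized)  # upload as soon as the data is ready
    return len(server.received)


server = FakeServer()
count = synthesize_and_upload(["seg1", "seg2", "seg3"], server)
```

The point of the loop is that upload latency overlaps with synthesis: the server already holds earlier segments while later ones are still being synthesized.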
Preferably, the method further comprises the following steps:
when all synthesized video data of the target video has been acquired, generating file header data of the target video;
and uploading the file header data to a preset server.
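A plausible reason the file header is generated only at the end, presumably, is that it summarizes the entire file (e.g., a count of chunks or their total size), which is unknown until all synthesized data has been acquired. A hypothetical sketch:

```python
# Hypothetical sketch: the file header is generated only after every
# synthesized chunk is known, because it summarizes the whole file, and
# it is then uploaded to the server last.

def build_header(chunks):
    """Summarize all synthesized chunks; here just a count and total size."""
    return {"chunk_count": len(chunks),
            "total_size": sum(len(c) for c in chunks)}


chunks = [b"abc", b"defgh"]      # all synthesized data, fully acquired
header = build_header(chunks)    # can be generated only once chunks are complete
```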
Preferably, after the synthesizing the target video, the method further includes:
storing the synthesized video data to a preset storage space;
The step of acquiring synthesized video data in real time and synchronously uploading the synthesized video data to a preset server comprises:
acquiring the synthesized video data from the storage space;
and uploading the synthesized video data to the preset server.
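The buffered flow above (store synthesized data in a preset storage space, then acquire and upload it) can be sketched with a simple in-memory queue standing in for the storage space; all names are hypothetical:

```python
from queue import Queue

# Hypothetical sketch: synthesized video data is first stored in a preset
# storage space (modeled here as an in-memory queue); the upload side then
# acquires data from the storage space and uploads it to the server.

storage = Queue()     # stand-in for the preset storage space
uploaded = []         # stand-in for the preset server


def store_synthesized(data):
    storage.put(data)                   # synthesis side: store the data


def acquire_and_upload():
    while not storage.empty():
        uploaded.append(storage.get())  # acquire from storage, then upload


store_synthesized(b"part1")
store_synthesized(b"part2")
acquire_and_upload()
```

Decoupling synthesis from upload through a storage space lets the two sides run at different speeds without losing data.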
Preferably, the synthesizing of the target video comprises:
acquiring target video data of a target video;
synthesizing the target video data to obtain corresponding synthesized data, and marking the data size of the synthesized data and the offset of the synthesized data, wherein the offset represents the position offset of the synthesized data in the video to be uploaded.
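The marking step can be pictured as tagging each piece of synthesized data with its size and its byte offset within the final video; a hypothetical sketch:

```python
# Hypothetical sketch of marking synthesized data: each chunk is tagged
# with its data size and its offset, i.e. its byte position within the
# video to be uploaded.

def mark_chunks(chunks):
    """Return offset/size/data records for the synthesized chunks."""
    records = []
    offset = 0
    for data in chunks:
        records.append({"offset": offset, "size": len(data), "data": data})
        offset += len(data)   # the next chunk starts where this one ends
    return records


records = mark_chunks([b"abc", b"defgh", b"ij"])
```

With size and offset recorded, the server can reassemble the chunks into the final video regardless of the order in which they arrive.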
Preferably, before the step of uploading the video data to the preset server, the method further comprises:
performing fragment processing on the synthesized video data, and marking an ID and an upload status of each fragment, wherein the upload status is either uploaded or not uploaded;
The step of uploading the synthesized video data to the preset server comprises:
determining the ID of a fragment whose upload status is not uploaded;
acquiring the video data of the not-uploaded fragment according to the ID of the not-uploaded fragment;
uploading the video data of the not-uploaded fragment to the server;
and when the uploading succeeds, changing the upload status of the fragment from not uploaded to uploaded; when the uploading fails, repeatedly executing the step of uploading the video data of the not-uploaded fragment to the server until the uploading succeeds, and then changing the upload status of the fragment from not uploaded to uploaded.
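The fragment-upload flow above (mark each fragment with an ID and an upload status, upload the not-uploaded fragments, retry on failure, and flip the status on success) can be sketched as follows; the flaky uploader and all names are hypothetical:

```python
# Hypothetical sketch of fragment upload with status tracking and retry.
NOT_UPLOADED, UPLOADED = "not uploaded", "uploaded"


def upload_fragments(fragments, try_upload):
    """fragments: list of dicts with 'id', 'data' and 'status'.
    try_upload(fragment) returns True on success, False on failure."""
    for frag in fragments:
        if frag["status"] == UPLOADED:
            continue                   # skip fragments already uploaded
        while not try_upload(frag):    # on failure, retry until success
            pass
        frag["status"] = UPLOADED      # flip status only after success


attempts = {}


def flaky_upload(frag):
    """Fails on the first attempt for each fragment, then succeeds."""
    attempts[frag["id"]] = attempts.get(frag["id"], 0) + 1
    return attempts[frag["id"]] > 1


frags = [{"id": i, "data": f"d{i}", "status": NOT_UPLOADED} for i in range(3)]
upload_fragments(frags, flaky_upload)
```

Tracking per-fragment status means a failed or interrupted upload only redoes the fragments still marked not uploaded, rather than the whole video.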
According to one or more embodiments of the present disclosure, [ example two ] there is provided a video processing apparatus for performing the method of example one, the apparatus comprising:
the receiving module is used for receiving an uploading instruction for a target video;
the processing module is used for synthesizing the target video based on the uploading instruction, acquiring synthesized video data in real time and synchronously uploading the synthesized video data to a preset server;
wherein the processing module is repeatedly called until all synthesized video data of the target video has been obtained and uploaded to the preset server.
Preferably, the apparatus further comprises:
the generating module is used for generating the file header data of the target video when all synthesized video data of the target video has been obtained;
and the processing module is also used for uploading the file header data to a preset server.
Preferably, the apparatus further comprises:
the storage module is used for storing the synthesized video data into a preset storage space after synthesizing the target video;
the processing module comprises:
a first obtaining sub-module, configured to obtain synthesized video data from a storage space;
and the uploading sub-module is used for uploading the synthesized video data to a preset server.
Preferably, the processing module comprises:
the second obtaining submodule is used for obtaining target video data of the target video;
the synthesis submodule is used for carrying out synthesis processing on the target video data to obtain corresponding synthesized data;
a marking submodule for marking the data size of the synthesized data and the offset of the synthesized data, wherein the offset represents the position offset of the synthesized data in the video to be uploaded.
Preferably, the apparatus further comprises:
the fragment module is used for performing fragment processing on the synthesized video data and marking an ID and an upload status of each fragment before the video data is uploaded to the preset server, wherein the upload status is either uploaded or not uploaded;
the processing module comprises:
the determining submodule is used for determining the ID of a fragment whose upload status is not uploaded;
the third obtaining submodule is used for obtaining the video data of the not-uploaded fragment according to the ID of the not-uploaded fragment;
the sending submodule is used for uploading the video data of the not-uploaded fragment to the server;
the updating submodule is used for changing the upload status of the fragment from not uploaded to uploaded when the uploading succeeds; and when the uploading fails, the sending submodule is repeatedly called until the uploading succeeds, after which the upload status of the fragment is changed from not uploaded to uploaded.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the features described above may be interchanged with (but not limited to) features having similar functions disclosed in this disclosure to form new technical solutions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.