Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant disclosure and are not limiting of the disclosure. It should be noted that, for the convenience of description, only the parts relevant to the related disclosure are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the video processing method or video processing apparatus of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like. Various communication client applications, such as a web browser application, a live-streaming application, a search application, an instant messaging tool, an email client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a communication function. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices described above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No particular limitation is imposed herein.
The server 105 may be a server providing various services, such as a background server providing support for information streams displayed on the terminal devices 101, 102, 103. The background server can analyze and otherwise process received data, such as a file to be uploaded, and feed the processing result back to the terminal devices or send it to other terminal devices.
It should be noted that the video processing method provided in the embodiment of the present application is generally executed by the terminal devices 101, 102, and 103, and accordingly, the video processing apparatus is generally disposed in the terminal devices 101, 102, and 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. No particular limitation is imposed herein.
It should be understood that the number of terminal devices, networks, and servers in Fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of a video processing method according to the present disclosure is shown. The video processing method comprises the following steps:
Step 201, in response to detecting a first preset operation related to video modification, determining modified content corresponding to the first preset operation.
In this embodiment, an execution body of the video processing method (e.g., the terminal device shown in Fig. 1) may determine, in response to detecting the first preset operation, the modified content corresponding to the first preset operation. The first preset operation corresponds to the modified content and may indicate what kind of modification the execution body is to make to the target video. The first preset operation may be any of various operations performed on the execution body, such as clicking, sliding, and the like, and may be an operation on a physical button of the execution body or on displayed preset content. The execution body may pre-store a correspondence between first preset operations and modified contents; after detecting a first preset operation, the execution body may then determine the modified content corresponding to that operation. The modified contents corresponding to different first preset operations may differ.
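The pre-stored correspondence between preset operations and modified contents described above can be sketched as a simple lookup table. This is a minimal illustration; the operation and modification names are hypothetical and not taken from the disclosure.

```python
# Hypothetical pre-stored correspondence between detected preset operations
# and the modified content each one indicates.
MODIFICATION_MAP = {
    "click_rotate_option": "rotate_picture",
    "drag_track_handle": "clip_track",
    "click_combine_option": "combine_pictures",
}

def resolve_modification(preset_operation):
    """Return the modified content recorded for a detected preset operation,
    or None when the operation has no corresponding modification."""
    return MODIFICATION_MAP.get(preset_operation)
```

Different first preset operations thus resolve to different modified contents, and an unrecognized operation resolves to nothing.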
In some optional implementations of this embodiment, the modified content includes at least one of: clipping of a video track, splicing of video tracks, and cropping, rotating, or combining of video pictures.
In these optional implementations, the execution body may perform at least one of the modifications to the video track (videoTrack) and the modifications to the video picture (videoFrame). The execution body may clip a video track, for example, by selecting a portion of the track from the entire duration of the video. Different video segments can also be connected to splice video tracks. In addition, a local part of the picture can be selected by cropping the video picture. Rotation rotates a picture in the video. Combining merges at least two pictures, for example, combining an image of a sheep with a video picture showing green grass.
These implementations record the various modified contents so that, when video transcoding is subsequently performed, an accurate video meeting the user's requirements can be obtained.
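Recording the modifications rather than applying them immediately can be sketched as an ordered edit list that a later, single transcode consumes. All class and method names here are illustrative assumptions, not part of the disclosure.

```python
# Illustrative recorder for the track and picture modifications named above:
# each call appends an operation record instead of transcoding right away.
class EditRecorder:
    def __init__(self):
        self.modifications = []

    def clip_track(self, start_s, end_s):
        # keep only the [start_s, end_s] portion of the video track
        self.modifications.append(("clip_track", start_s, end_s))

    def splice_tracks(self, other_track_id):
        # connect another video segment onto this track
        self.modifications.append(("splice_tracks", other_track_id))

    def crop_picture(self, x, y, w, h):
        # select a local part of the video picture
        self.modifications.append(("crop_picture", x, y, w, h))

    def rotate_picture(self, degrees):
        # rotate the picture; normalize to [0, 360)
        self.modifications.append(("rotate_picture", degrees % 360))

    def combine_pictures(self, overlay_id):
        # merge another picture into the video picture
        self.modifications.append(("combine_pictures", overlay_id))

recorder = EditRecorder()
recorder.clip_track(0.0, 12.5)   # keep the first 12.5 seconds
recorder.rotate_picture(450)     # recorded as a 90-degree rotation
```

The recorded list is what step 203 later feeds into a single transcoding pass.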
Step 202, in response to detecting a second preset operation related to video editing, determining editing content corresponding to the second preset operation.
In this embodiment, the execution body may determine, when the second preset operation is detected, the editing content corresponding to the second preset operation. The editing content is the content of the editing performed on the video, such as repeating video content, adding a cover to the video, and/or adding sound effects. It should be noted that the modified content and the editing content may partially overlap; for example, both may include clipping of the video track.
Specifically, the second preset operation here corresponds to the editing content and may indicate what kind of editing the execution body is to perform. The second preset operation may be any of various operations performed on the execution body, such as clicking, sliding, and the like, and may be an operation on a physical button of the execution body or on displayed preset content. The execution body may pre-store a correspondence between second preset operations and editing contents; after detecting a second preset operation, the execution body may then determine the editing content corresponding to that operation. The editing contents corresponding to different second preset operations may differ.
In some optional implementations of this embodiment, the editing content includes added editing content.
In these optional implementations, the added editing content is the content of an editing operation that adds an effect to the video. The added content may include sound effects, animated effects, and/or face effects, among others. For example, a face effect may enlarge the eyes or add a beard.
These implementations record the added editing content so that, when the video is subsequently transcoded, an accurate video meeting the user's requirements can be obtained.
Step 203, performing video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded.
In this embodiment, the execution body may perform video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded to the server. Video transcoding here refers to converting one compressed video code stream into another so as to modify and edit the video, meeting the user's editing requirements while also facilitating uploading to and use by the server. Specifically, the execution body may modify the target video according to the modified content and edit it according to the edited content.
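Step 203 can be sketched as one function that gathers the recorded modified content and edited content and performs a single transcoding pass. The function, field names, and the "mp4" container are illustrative assumptions; real transcoding would invoke a codec pipeline rather than build a dictionary.

```python
def transcode_for_upload(target_video, modified_content, edited_content):
    """Apply all recorded operations in one pass, then transcode the result
    into the file to be uploaded (stand-in for a real codec pipeline)."""
    operations = list(modified_content) + list(edited_content)
    # In a real implementation, each operation would be applied to the
    # decoded frames/tracks here, and the result re-encoded once.
    return {
        "container": "mp4",            # assumed upload container
        "source": target_video["name"],
        "applied": len(operations),
    }

upload_file = transcode_for_upload(
    {"name": "target.mov"},
    modified_content=[("rotate_picture", 90)],
    edited_content=[("add_cover", "cover.png")],
)
```

The point of the sketch is that both kinds of recorded content enter a single transcode, rather than each triggering its own.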
In some optional implementations of this embodiment, step 203 may include:
In response to detecting a video publishing operation or an editing confirmation operation, performing video transcoding on the target video in the background based on the modified content and the edited content.
In these optional implementations, the execution body may perform video transcoding in its background when a video publishing operation or an editing confirmation operation is detected. Specifically, the video publishing operation and the editing confirmation operation may be any of various operations performed on the execution body, such as clicking, sliding, and the like, and may be operations on a physical button of the execution body or on displayed preset content. For example, the video publishing operation may be an operation performed on an area indicating video publishing in a video publishing page. The editing confirmation operation indicates that the user has finished determining what editing to perform on the video; for example, it may be an operation of clicking a "Next" control displayed on the editing interface.
These implementations perform transcoding in the background, which saves the user's waiting time and avoids forcing the user to wait for transcoding to complete.
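The background transcoding described above can be sketched with a worker thread: the publish handler returns immediately while the transcode runs off the interaction path. The handler and transcode function names are hypothetical; real code would use the platform's own background-task facility.

```python
import threading

def transcode(video, operations, result):
    # Placeholder for the actual transcoding work performed in the background.
    result["file_to_upload"] = f"{video}.transcoded"

def on_publish_clicked(video, operations):
    """Detecting the publishing operation starts transcoding on a background
    thread and returns at once, so the user is not kept waiting."""
    result = {}
    worker = threading.Thread(target=transcode, args=(video, operations, result))
    worker.start()
    return worker, result

worker, result = on_publish_clicked("target.mov", [("rotate_picture", 90)])
worker.join()  # the upload step would wait on (or be chained to) completion
```

Only the eventual upload needs the finished file; the user interface thread never blocks on the transcode itself.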
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the video processing method according to the present embodiment. In the application scenario of Fig. 3, the execution body 301 may determine, in response to detecting a first preset operation 302 related to video modification (an operation of the user clicking a modification option displayed on the screen of an import page), the modified content 304 corresponding to the first preset operation: rotating the video picture. In response to detecting a second preset operation 303 related to video editing (an operation of the user clicking an editing option displayed on the screen of an editing page), it determines the editing content 305 corresponding to the second preset operation: adding a cover. It then performs video transcoding on the target video based on the modified content 304 and the edited content 305 to obtain a file to be uploaded 306.
The method provided by the above embodiment of the present disclosure performs video transcoding after all edits have been made to the video, rather than transcoding once after modification and again after editing. This shortens the transcoding time; meanwhile, the execution body can also perform video transcoding and uploading in a local background, saving the user's waiting time during transcoding.
With further reference to Fig. 4, a flow 400 of yet another embodiment of the video processing method is shown. The flow 400 of the video processing method comprises the following steps:
Step 401, in response to detecting a first preset operation related to video modification, determining modified content corresponding to the first preset operation.
In this embodiment, an execution body of the video processing method (e.g., the terminal device shown in Fig. 1) may determine, in response to detecting the first preset operation, the modified content corresponding to the first preset operation. The first preset operation corresponds to the modified content and may indicate what kind of modification the execution body is to make to the target video. The first preset operation may be any of various operations performed on the execution body, such as clicking, sliding, and the like, and may be an operation on a physical button of the execution body or on displayed preset content.
Step 402, in response to detecting a second preset operation related to video editing, determining editing content corresponding to the second preset operation.
In this embodiment, the execution body may determine, when the second preset operation is detected, the editing content corresponding to the second preset operation. The editing content is the content of the editing performed on the video, such as repeating video content, adding a cover to the video, and/or adding sound effects.
Step 403, generating and displaying a preview video based on the modified content, the added editing content, and the target video.
In this embodiment, the execution body may generate and display a preview video based on the modified content, the added editing content, and the target video. In this way, a preview picture can be displayed to the user, so that the user can visually see the result of the editing done and better perform further editing of the video.
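Generating the preview from the recorded contents, without transcoding, can be sketched as follows. The `render_preview` function and its field names are hypothetical; the only point carried over from the text is that the preview composes the recorded operations with the target video and involves no transcode.

```python
def render_preview(target_video, modified_content, added_editing_content):
    """Compose a lightweight preview description from the recorded operations
    and the target video; no transcoding is performed at this stage."""
    operations = list(modified_content) + list(added_editing_content)
    return {
        "video": target_video,
        "preview_ops": operations,  # what the preview renderer would apply
        "transcoded": False,        # transcoding is deferred to step 404
    }

preview = render_preview(
    "target.mov",
    [("rotate_picture", 90)],
    [("add_cover", "cover.png")],
)
```

Because the preview only replays the recorded operations, the user can inspect the cumulative result immediately and continue editing before the single transcode of step 404.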
Step 404, performing video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded.
In this embodiment, the execution body may perform video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded to the server. Video transcoding here refers to converting one compressed video code stream into another so as to modify and edit the video, meeting the user's editing requirements while also facilitating uploading to and use by the server.
This embodiment can display a preview picture to the user, so that the user can visually see the editing result and better perform further editing of the video.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a video processing apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in Fig. 5, the video processing apparatus 500 of the present embodiment includes: a first determining unit 501, a second determining unit 502, and a transcoding unit 503. The first determining unit 501 is configured to determine, in response to detecting a first preset operation related to video modification, modified content corresponding to the first preset operation; the second determining unit 502 is configured to determine, in response to detecting a second preset operation related to video editing, editing content corresponding to the second preset operation; and the transcoding unit 503 is configured to perform video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded.
In some embodiments, the first determining unit 501 of the video processing apparatus 500 may determine, in response to detecting the first preset operation, the modified content corresponding to the first preset operation. The first preset operation corresponds to the modified content and may indicate what kind of modification is to be made to the target video.
In some embodiments, the second determining unit 502 may determine, when the second preset operation is detected, the editing content corresponding to the second preset operation.
In some embodiments, the transcoding unit 503 may perform video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded to the server. Video transcoding here refers to converting one compressed video code stream into another so as to modify and edit the video, meeting the user's editing requirements while also facilitating uploading to and use by the server.
In some optional implementations of this embodiment, the modified content includes at least one of: clipping of a video track, splicing of video tracks, and cropping, rotating, or combining of video pictures.
In some optional implementations of this embodiment, the editing content includes added editing content.
In some optional implementations of this embodiment, the transcoding unit is further configured to: in response to detecting a video publishing operation or an editing confirmation operation, perform video transcoding on the target video in the background based on the modified content and the edited content.
In some optional implementations of this embodiment, the apparatus further includes: a display unit configured to generate and display a preview video based on the modified content, the added editing content, and the target video.
Referring now to Fig. 6, a schematic diagram of an electronic device (e.g., the terminal device of Fig. 1) 600 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (e.g., a car navigation terminal), and stationary terminals such as a digital TV and a desktop computer. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While Fig. 6 illustrates an electronic device 600 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to detecting a first preset operation related to video modification, determining modified content corresponding to the first preset operation; in response to detecting a second preset operation related to video editing, determining editing content corresponding to the second preset operation; and performing video transcoding on the target video based on the modified content and the edited content to obtain a file to be uploaded.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the transcoding unit may also be described as "a unit for performing video transcoding on a target video based on modified content and edited content to obtain a file to be uploaded."
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the present disclosure.