WO2023083064A1 - Video processing method, apparatus, electronic device and readable storage medium - Google Patents

Video processing method, apparatus, electronic device and readable storage medium

Info

Publication number
WO2023083064A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video frame
target
processing mode
target video
Prior art date
Application number
PCT/CN2022/129165
Other languages
English (en)
French (fr)
Inventor
何思羽
任士博
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023083064A1 publication Critical patent/WO2023083064A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/426: Internal components of the client; Characteristics thereof
    • H04N 21/42653: Internal components of the client; Characteristics thereof for processing graphics
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the technical field of the Internet, and in particular to a video processing method, device, electronic equipment and readable storage medium.
  • Application programs (apps) that provide video editing functions are especially popular among users.
  • Application programs that provide video editing functions can implement many video editing operations, for example smart image cutout (matting), and such operations need to perform corresponding processing on each video frame of the video.
  • a stream processing framework is generally used for such video editing operations; for example, when performing smart matting, decoding, matting processing, rendering, and display are performed frame by frame.
  • the present disclosure provides a video processing method, device, electronic equipment and readable storage medium.
  • the present disclosure provides a video processing method, including:
  • a first thread, starting from a specified video frame position, executes the target video editing operation on the video frame at the specified video frame position in the video to be processed and pre-executes the target video editing operation on each video frame after the specified video frame position, obtaining each target video frame produced by executing the target video editing operation;
  • in response to a playback instruction, a second thread renders each of the target video frames so as to display each of the target video frames; wherein the position of the video frame on which the target video editing operation is being performed is ahead of the position, in the video to be processed, of the video frame corresponding to the target video frame being displayed.
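  • The two-thread arrangement described above can be pictured as a producer/consumer pair: an editing (algorithm) thread fills a store of target video frames ahead of playback, while a render thread only displays frames that already exist, so editing always stays ahead of display. The following Kotlin sketch is illustrative only; the `editFrame` and `render` helpers are hypothetical placeholders, not the implementation claimed here.

```kotlin
import java.util.concurrent.ConcurrentHashMap
import kotlin.concurrent.thread

// Hedged sketch of the claimed two-thread scheme: a first (algorithm) thread
// pre-executes the editing operation ahead of the second (render) thread.
fun main() {
    val totalFrames = 100
    val startPosition = 0                               // "specified video frame position"
    val targetFrames = ConcurrentHashMap<Int, String>() // frame index -> edited ("target") frame

    // First thread: perform the target video editing operation from the specified
    // position onward, storing each resulting target video frame.
    val editingThread = thread(name = "algorithm-thread") {
        for (i in startPosition until totalFrames) {
            targetFrames[i] = editFrame(i)              // e.g. smart cutout on frame i
        }
    }

    // Second thread: in response to a play instruction, render only frames whose
    // target video frame already exists, so editing stays ahead of display.
    val renderThread = thread(name = "render-thread") {
        for (i in startPosition until totalFrames) {
            while (!targetFrames.containsKey(i)) Thread.sleep(1) // wait until edited
            render(targetFrames.getValue(i))
        }
    }

    editingThread.join()
    renderThread.join()
}

// Placeholders standing in for the real editing and rendering steps.
fun editFrame(index: Int): String = "edited-frame-$index"
fun render(frame: String) = println("displaying $frame")
```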
  • the specified video frame position is determined according to the video frame position positioned when the video editing instruction is acquired; wherein the video editing instruction is used to instruct execution of the target video editing operation on the video to be processed, and the video frame position positioned when the video editing instruction is acquired is a preset video frame position or a video frame position specified by a first jump instruction.
  • the specified video frame position is determined according to the video frame position specified by a second jump instruction, the second jump instruction being an instruction received after the video editing instruction used to execute the target video editing operation.
  • the method also includes: determining, according to the specified video frame position, the video editing processing mode corresponding to the first thread, where the video editing processing mode is the first processing mode or the second processing mode;
  • the first processing mode is a mode for executing the target video editing operation on key frames of the video to be processed;
  • the second processing mode is a mode for executing the target video editing operation frame by frame.
  • the determining the video editing processing mode corresponding to the first thread according to the specified video frame position includes:
  • if the specified video frame position is the preset video frame position, determine that the video editing processing mode corresponding to the first thread is the first processing mode;
  • if the specified video frame position is the video frame position specified by the first jump instruction, determine that the video editing processing mode corresponding to the first thread is the first processing mode or the second processing mode.
  • the determining the video editing processing mode corresponding to the first thread according to the specified video frame position includes:
  • the method also includes:
  • performing the target video editing operation and acquiring each target video frame obtained by executing the target video editing operation comprises:
  • performing, by the first thread according to the determined video editing processing mode and starting from the specified video frame position, the target video editing operation on the video frames of the video to be processed.
  • the method also includes:
  • the video editing processing mode corresponding to the first thread is the second processing mode.
  • the method also includes:
  • the first thread is not interrupted, so that the first thread continues to perform the target video editing operation on the video to be processed.
  • the method also includes:
  • in response to the playback instruction, switching the video editing processing mode to a third processing mode, and performing, by the first thread, the target video editing operation on the video frames of the video to be processed according to the third processing mode; wherein the third processing mode is a mode of determining, according to the playback speed, the video frame position at which the target video editing operation is performed.
  • the switching of the video editing processing mode to a third processing mode in response to the playback instruction, and the performing, by the first thread, of the target video editing operation on the video frames of the video to be processed according to the third processing mode, include:
  • the first thread starts from the position of the next target video frame to be played and performs the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • the switching of the video editing processing mode to a third processing mode in response to the playback instruction, and the performing, by the first thread, of the target video editing operation on the video frames of the video to be processed according to the third processing mode, include:
  • the target video editing operation is performed on the video frame of the video to be processed according to the third processing mode.
  • the method also includes:
  • after the first thread performs the target video editing operation on the last video frame of the video to be processed, the first thread starts from the position of the initial video frame of the video to be processed and performs the target video editing operation on the video frames of the video to be processed according to the fourth processing mode;
  • the fourth processing mode is a mode in which the target video editing operation is sequentially performed on video frames that do not have corresponding target video frames in the video to be processed.
  • the present disclosure provides a video processing device, including:
  • a first processing module, configured to, through a first thread and starting from the specified video frame position, perform the target video editing operation on the video frame at the specified video frame position in the video to be processed, pre-perform the target video editing operation on each video frame after the specified video frame position, and acquire each target video frame obtained by performing the target video editing operation;
  • a second processing module, configured to render each of the target video frames through a second thread in response to the playback instruction, so as to display each of the target video frames; wherein the position of the video frame on which the target video editing operation is being executed is ahead of the video frame position corresponding, in the video to be processed, to the target video frame being displayed.
  • the present disclosure provides an electronic device, including: a memory and a processor;
  • the memory is configured to store computer program instructions
  • the processor is configured to execute the computer program instructions, so that the electronic device implements the video processing method according to any one of the first aspect.
  • the present disclosure provides a readable storage medium, including: computer program instructions; when the computer program instructions are executed by at least one processor of an electronic device, the electronic device implements the video processing method according to any one of the first aspect.
  • the present disclosure provides a computer program product.
  • when the computer program product is executed by a computer, the computer implements the video processing method according to any one of the first aspect.
  • the present disclosure provides a video processing method, device, electronic equipment, and readable storage medium, wherein the method includes: starting from a specified video frame position, performing, through a first thread, the target video editing operation on the video to be processed and obtaining each target video frame obtained by the target video editing operation; and, in response to a play instruction, rendering each target video frame through a second thread to display each target video frame.
  • FIG. 1 is a flowchart of a video processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 3 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 4 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 6 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 7 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 8 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 9 is a flowchart of a video processing method provided by another embodiment of the present disclosure.
  • FIG. 10 is a schematic framework diagram of a video processing device provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of the life cycle of each module in the embodiment shown in FIG. 10 provided by an embodiment of the present disclosure
  • FIG. 12 is a schematic structural diagram of a video processing device provided by an embodiment of the present disclosure.
  • Fig. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the video processing method provided in the present disclosure may be executed by a video processing device, wherein the video processing device may be implemented by any software and/or hardware.
  • the video processing device may be a tablet computer, a mobile phone (such as a folding-screen mobile phone or a large-screen mobile phone), a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart TV, a smart screen, a high-definition TV, a 4K TV, a smart speaker, a smart projector, or another Internet of Things (IoT) device.
  • Video editing operations are time-consuming, but better preview and playback effects need to be provided.
  • the video editing operation needs to follow and align with the operation instructions input by the user in real time, for example, a jump instruction (also called a seek instruction), an update instruction (also called a refresh instruction), a play instruction, a variable-speed playback instruction, a pause playback instruction, and so on.
  • the video editing processing method provided by the present disclosure can be used to ensure the preview effect of the user.
  • the video editing application program (hereinafter referred to as the application program) installed in the electronic device is taken as an example to describe the video processing method provided by the present disclosure in detail.
  • FIG. 1 is a flowchart of a video processing method provided by an embodiment of the present disclosure. Referring to Figure 1, the method provided in this embodiment includes:
  • the video to be processed is the material that needs to perform the target video editing operation.
  • the disclosure does not limit the duration, storage format, resolution, video content, acquisition method, etc. of the video to be processed.
  • the first thread is a thread for performing target video editing operations on the video to be processed, and the first thread may also be called an algorithm thread.
  • the algorithm thread may call an algorithm instance corresponding to the target video editing operation, so as to implement the target video editing operation on the video to be processed.
  • the present disclosure does not limit the manner of determining the first thread.
  • the present disclosure does not limit the type of target video editing operations.
  • the target video editing operations may include: smart cutout, adding stickers, filters, and the like.
  • the specified video frame position may be the position of any video frame in the video to be processed.
  • the specified video frame position may be the video frame position positioned when the video editing instruction is acquired, or may also be the video frame position specified by the user through a trigger operation.
  • the video editing instruction is used to instruct to perform a target video editing operation for each video frame of the video to be processed.
  • the present disclosure does not limit the implementation manner of acquiring the video editing instruction.
  • the user may input the video editing instruction to the application program by operating a corresponding control provided by the application program.
  • the user can select a video to be processed through the material selection page displayed by the application program, and import it into the application program for video editing.
  • the application program may display a video editing page on the user interface, wherein the video editing page may include controls corresponding to various video editing operations, including: a target control corresponding to a target video editing operation (such as smart cutout);
  • when the application program receives the user's trigger operation (such as a click operation) on the target control, it generates a video editing instruction for instructing that smart cutout be performed on each video frame of the video to be processed, and, in response to the video editing instruction, the application program starts to perform the target video editing operation on the video to be processed from the positioned video frame position.
  • when importing a video to be processed, the application can locate to the preset video frame position by default. If the user subsequently does not input any trigger operation, the positioned video frame position will not change, so the video frame position positioned when the video editing instruction is acquired is still the above-mentioned preset video frame position; that is, the specified video frame position is the preset video frame position.
  • the preset video frame position may be any video frame of the video to be processed, for example, the preset video frame position is the position of the initial video frame of the video to be processed.
  • the user can select a video to be processed through the material selection page displayed by the application program, and import it into the application program for video editing.
  • the application program may display a video editing page on the user interface, wherein the video editing page may include controls corresponding to various video editing operations, including: a target control corresponding to a target video editing operation (such as smart cutout);
  • when the application program receives the user's trigger operation (such as a click operation) on the target control, it generates a video editing instruction for instructing that smart cutout be performed on each video frame of the video to be processed, and, in response to the video editing instruction, the application program starts to perform the target video editing operation on the video to be processed from the positioned video frame position.
  • the user may also input a first jump command for jumping the positioned position from a preset video frame position to a video frame position indicated by the first jump command.
  • the specified video frame position is the video frame position indicated by the first jump instruction. It should be noted that, the user may input one or more first jump commands, and usually the video frame position indicated by the last jump command is used as the designated video frame position.
  • assume the first thread has started to execute the target video editing operation from a preset video frame position. During execution of the target video editing operation, the user may input a second jump instruction; the application program responds to the second jump instruction, and the first thread needs to execute the target video editing operation from the video frame position indicated by the second jump instruction.
  • the specified video frame position is the video frame position indicated by the second jump instruction.
  • the user may input multiple second jump commands, and the first thread needs to respond to each second jump command respectively, and execute the target video editing operation from the video frame position indicated by the second jump command.
  • the application program supports the user to preview each target video frame obtained by performing the target video editing operation, that is, the application program supports the user to preview and play each target video frame.
  • when the application program receives a play instruction, in response to the play instruction, the second thread renders each target video frame according to the play speed, so as to play each target video frame.
  • when the first thread executes the target video editing operation on the video to be processed, the target video editing operation is first performed on the video frame at the specified video frame position to obtain the target video frame corresponding to the specified video frame position; the second thread can then automatically render that target video frame so as to display it. In this way, the user can preview the processing effect corresponding to the specified video frame position before the application program receives the playback instruction.
  • the second thread can automatically start rendering each target video frame from the specified video frame position, that is, the user does not need to trigger a playback instruction.
  • the method provided in this embodiment ensures that the position of the video frame on which the target video editing operation is currently being executed is always ahead of the position, in the video to be processed, of the target video frame currently being displayed, ensuring that a target video frame on which the target video editing operation has already been executed is available for preview, which solves the problem of the user's preview stuttering.
  • as an example of the position of the video frame currently undergoing the target video editing operation being ahead of the position of the currently displayed target video frame in the video to be processed: suppose the video to be processed has 100 frames, in sequence from frame 1 to frame 100, the target video frame corresponding to the 20th frame is currently being displayed, and the video frame on which the target video editing operation is being executed is the 30th frame; the position of the video frame undergoing the target video editing operation is therefore ahead of the position of the currently displayed target video frame in the video to be processed.
  • the first thread starts from the specified video frame position, executes the target video editing operation on the video to be processed, and obtains each target video frame produced by performing the target video editing operation; in response to the playback instruction, each target video frame is rendered through the second thread to display each target video frame.
  • this disclosure pre-executes the target video editing operation on each video frame after the specified video frame position, so as to ensure that the video frame position on which the target video editing operation is being performed is always ahead of the position, in the video to be processed, of the target video frame being displayed; this solves the problem of preview freezes while the target video editing operation is performed on the video to be processed and meets the user's preview needs.
  • In addition, because the time consumed by the video editing operation does not match the time available to the rendering thread, the rendering thread is easily blocked. Therefore, the present disclosure decouples video editing processing from rendering and executes them on different threads, which solves the problem of the rendering thread being blocked by target video editing operations executed within a streaming framework and is beneficial to improving the preview effect.
  • the video processing method provided by the present disclosure can provide at least the following video editing processing modes.
  • when the application program receives a video editing instruction, it can determine, according to a preset strategy, the video editing processing mode corresponding to the first thread.
  • the preset strategy may consider, but is not limited to, one or more factors such as the current video frame position of the video to be processed, the overall progress of the target video editing operation on the video to be processed, the playback status (paused or playing), and the user's trigger operation.
  • the application program supports at least the following four video editing processing modes, namely: the first processing mode, the second processing mode, the third processing mode and the fourth processing mode.
  • the first processing mode is a mode for performing the target video editing operation on the key frames of the video to be processed, and the first processing mode may also be called preprocessing mode, anchor mode, or other names.
  • the second processing mode is a mode for performing the target video editing operation frame by frame, and the second processing mode may also be called a jump processing mode, seek mode or other names.
  • the third processing mode is a mode for determining the video frame position for executing the target video editing operation based on the playback progress, and the third processing mode may also be called a playback processing mode, playback mode or other names.
  • the fourth processing mode is a mode in which target video editing operations are sequentially performed on video frames that do not have corresponding target video frames in the video to be processed. That is, the fourth processing mode is a mode in which the target video editing operation is sequentially performed on video frames for which no target video editing operation has been performed in a direction from the first video frame to the last video frame of the video to be processed.
  • the fourth processing mode may also be called self-spreading processing mode, adaptive mode and other names.
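  • For reference, the four modes described above could be represented as a simple enumeration; the names below are illustrative paraphrases of the modes in this disclosure, not identifiers used by it.

```kotlin
// Illustrative naming of the four video editing processing modes described above.
enum class EditingMode {
    KEYFRAME_ONLY,   // first processing mode: operate on key frames (I frames) only
    FRAME_BY_FRAME,  // second processing mode: operate on every frame in order (seek mode)
    PLAYBACK_DRIVEN, // third processing mode: pick frame positions from the playback speed
    SELF_SPREADING   // fourth processing mode: sweep frames that still lack a target frame
}
```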
  • the application program may determine the video editing processing mode corresponding to the target video editing operation performed by the first thread in different scenarios based on the aforementioned preset policy.
  • Fig. 2 is a flowchart of a video processing method provided by an embodiment of the present disclosure. Referring to Figure 2, the method provided in this embodiment includes:
  • S201 Determine a video editing processing mode corresponding to a first thread according to a specified video frame position, where the video editing processing mode is a first processing mode or a second processing mode.
  • the specified video frame position can be a preset video frame position, the video frame position indicated by the first jump instruction, or the video frame position indicated by the second jump instruction. In different situations, the video editing processing mode corresponding to the first thread will differ.
  • the first processing mode is a mode for performing target video editing operations on key frames (ie, I frames).
  • an I frame, also known as an intra picture, is an important frame type in inter-frame compression coding. During encoding, some video frames are compressed into I frames, some into P frames, and some into B frames. When decoding, a complete image can be reconstructed from the data of an I frame alone, without referring to the data of other video frames.
  • the second processing mode is a mode in which target video editing operations are performed frame by frame on the video to be processed.
  • the second processing mode is a video editing processing mode provided in the present disclosure for responding to a seek command input by a user.
  • in a possible implementation manner, if the specified video frame position is the preset video frame position (such as the position of the initial video frame of the video to be processed), it is determined that the video editing processing mode corresponding to the first thread is the first processing mode.
  • by performing the target video editing operation on the key frames of the video to be processed, the complete image can be reconstructed from the target video frame corresponding to a key frame when the user previews the special-effect processing result, so the user's preview effect can be better guaranteed.
  • the video editing processing mode corresponding to the first thread is the first processing mode or the second processing mode.
  • the video editing processing mode corresponding to the first thread can be the first processing mode or the second processing mode; if the specified video frame position is the video frame position indicated by the first jump instruction, then the video editing processing mode corresponding to the first thread may be the second processing mode.
  • in the first processing mode, the application program starts from the specified video frame position, executes the target video editing operation on each key frame of the video to be processed in turn, and stores the target video frame obtained by performing the target video editing operation on each key frame.
  • in the second processing mode, the application starts from the specified video frame position, executes the target video editing operation frame by frame on the video to be processed, and stores the target video frame obtained by executing the target video editing operation on each video frame.
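  • As a rough sketch of the two loops just described: the first processing mode visits only key frames from the specified position onward, while the second visits every frame; both store the resulting target video frames. The `isKeyFrame` and `editFrame` helpers below are hypothetical placeholders.

```kotlin
// Hypothetical helpers: whether a frame index is an I frame, and the editing step itself.
fun isKeyFrame(index: Int): Boolean = index % 30 == 0   // assumption: one key frame every 30 frames
fun editFrame(index: Int): String = "edited-frame-$index"

// First processing mode: perform the target video editing operation on key frames only.
fun processKeyFrames(start: Int, total: Int, store: MutableMap<Int, String>) {
    for (i in start until total) {
        if (isKeyFrame(i)) store[i] = editFrame(i)      // skip non-key frames
    }
}

// Second processing mode: perform the target video editing operation frame by frame.
fun processFrameByFrame(start: Int, total: Int, store: MutableMap<Int, String>) {
    for (i in start until total) {
        store[i] = editFrame(i)                         // every frame gets a target frame
    }
}
```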
  • Step S203 in this embodiment is similar to step S102 in the embodiment shown in FIG. 1 , and reference may be made to the detailed description of the embodiment shown in FIG. 1 . For the sake of brevity, details are not repeated here.
  • in the method provided in this embodiment, the video editing processing mode corresponding to the first thread is determined according to the specified video frame position, and the target video editing operation is performed on the corresponding video frames of the video to be processed according to the determined video editing processing mode, ensuring that the video frame position on which the target video editing operation is being performed is always ahead of the position, in the video to be processed, of the target video frame being displayed, thereby solving the problem of preview freezes while the target video editing operation is performed on the video to be processed.
  • In addition, the present disclosure decouples video editing processing from rendering and executes them on different threads, which solves the problem of the rendering thread being blocked by target video editing operations executed within a streaming framework and is beneficial to improving the preview effect.
  • Fig. 3 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 3, the method of this embodiment includes:
  • S201 in the embodiment shown in FIG. 2 can be implemented through S301 to S306 in this embodiment.
  • S304 and S305 are executed. If the first thread is currently executing the target video editing operation on the video to be processed, S306 is executed.
  • the method provided by this embodiment analyzes whether there is a corresponding target video frame at the specified video frame position and whether there is currently a process performing the target video editing operation on the video to be processed (that is, whether the first thread is currently performing the target video editing operation on the video to be processed) in order to determine whether to switch the video editing processing mode, so as to reduce the waste of computing resources caused by frequently updating the decoder and to improve the processing efficiency of the target video editing operation.
  • whether there is a corresponding target video frame at the specified video frame position is analyzed because, in some cases, the target video editing operation may have been performed on the video to be processed before, so some or all video frames in the video to be processed already have corresponding target video frames.
  • taking the target video editing operation being smart cutout as an example, assume that after the user imports the video to be processed, a video editing instruction is generated by operating the control corresponding to smart cutout; the application performs smart cutout on the video to be processed according to the video editing instruction; before the matting is completely finished, the user inputs a cancel command to cancel the smart matting of the video to be processed. Since the smart matting has not been completed for the whole video when it is canceled, some video frames already have corresponding target video frames (that is, matting results).
  • the first thread starts from the specified video frame position according to the determined video editing processing mode, and performs the target video editing operation on the video to be processed.
  • the implementation manner of determining the video editing processing mode corresponding to the first thread according to the specified video frame position can refer to the foregoing, and will not be repeated here.
  • the first thread adopts the first processing mode or the second processing mode, which can be determined according to the specified video frame position being the preset video frame position or the video frame position indicated by the first jump instruction.
  • the process can be continued, so there is no need to update the decoder, which reduces the resource consumption that updating the decoder due to processing-mode switching would otherwise cause.
  • if the above-mentioned specified video frame position is the video frame position indicated by the second jump instruction, the first thread is performing the target video editing operation on the video to be processed according to some video editing processing mode, and there is a corresponding target video frame at the video frame position indicated by the second jump instruction, then there is no need to switch the video editing processing mode, that is, the first thread will not be interrupted.
  • determining the video editing processing mode can be realized by referring to the method described above; if the specified video frame position is the video frame position indicated by the second jump instruction, the video editing processing mode can be determined to be the second processing mode.
  • step S307 in this embodiment is similar to step S203 in the embodiment shown in FIG. 2 , and reference may be made to the detailed description of the embodiment shown in FIG. 2 . For the sake of brevity, details are not repeated here.
  • the method provided in this embodiment determines whether to switch the video editing processing mode by analyzing whether there is a corresponding target video frame at the specified video frame position and whether there is currently a process performing the target video editing operation on the video to be processed; this not only meets the user's preview needs and solves the preview freeze problem, but also reduces the waste of resources caused by frequently updating the decoder and improves the processing efficiency of the target video editing operation.
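  • The decision just summarized depends on two checks: whether a target video frame already exists at the specified position and whether an editing process is already running. A minimal sketch of that check, with illustrative names only:

```kotlin
// Illustrative decision helper for the logic summarized above. Returns true when the
// current editing process can simply continue (no mode switch, no decoder update),
// and false when the video editing processing mode must be re-determined.
fun canContinueWithoutSwitch(
    specifiedPosition: Int,
    targetFrames: Map<Int, String>,   // frame index -> already-edited target video frame
    editingInProgress: Boolean        // is the first thread currently editing this video?
): Boolean {
    val targetAlreadyExists = targetFrames.containsKey(specifiedPosition)
    // If the jumped-to position already has a target frame and the first thread is still
    // working, the first thread is not interrupted; otherwise the mode is re-determined.
    return targetAlreadyExists && editingInProgress
}
```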
  • Fig. 4 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 4, the method provided in this embodiment includes:
  • Steps S401 and S402 in this embodiment are respectively similar to those in the embodiment S101 and S102 shown in FIG. 1 , and reference may be made to the detailed description of the embodiment shown in FIG. 1 . For the sake of brevity, details are not repeated here.
  • the third processing mode is a mode of determining the video frame position for executing the target special effect according to the playback speed of the preview.
  • assume that, in steps S401 to S402, based on the determined video editing processing mode, the target video editing operation has been performed on the 1st to 10th video frames to obtain the corresponding target video frames.
  • when the application program receives the playback instruction, the second thread starts to render the target video frames corresponding to the 1st to 10th video frames in sequence, so as to play them at the set playback speed; assume playback currently reaches the 10th frame and it is detected that the next video frame to be played (that is, the 11th frame) has no corresponding target video frame.
  • in this case, the video editing processing mode is switched to the third processing mode, so that the first thread determines, according to the playback speed, the video frame position at which the target video editing operation is performed, ensuring that the video frame position on which the target video editing operation is being performed stays ahead of the video frame being displayed.
  • when the video editing processing mode is the third processing mode, the first thread dynamically determines, based on one or more factors such as the preview playback speed, the preset processing duration for executing the target video editing operation on a single video frame (that is, the average time the target video editing operation consumes), the position of the video frame on which the target video editing operation is currently being performed, and the video frames on which the target video editing operation has already been performed, which video frames ahead of the playback progress the target video editing operation should be performed on.
  • for example, if playback has reached the 15th frame, the next frame in video-frame order, i.e. the 16th frame, can be selected for the target video editing operation (of course, it is not limited to the 16th frame; it can also be another subsequent video frame at a certain interval from the 15th frame, such as the 18th frame). If it is determined that the 16th video frame has no corresponding target video frame, the target video editing operation is performed on the 16th video frame; afterwards, the video frame position at which to perform the target video editing operation continues to be determined flexibly according to the third processing mode.
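  • Under the third processing mode, the frame to edit next is chosen ahead of the frame currently being played, skipping frames that already have target video frames. A hedged sketch, where the look-ahead interval and helper names are assumptions for illustration only:

```kotlin
// Illustrative frame selection for the third (playback-driven) processing mode.
// Given the frame currently being played, pick the next frame to edit so that editing
// stays ahead of playback, skipping frames that already have a target video frame.
fun nextFrameToEdit(
    currentlyPlaying: Int,
    totalFrames: Int,
    targetFrames: Map<Int, String>,   // frames that already have an edited result
    lookAhead: Int = 1                // assumed interval ahead of playback (e.g. 1, 3, ...)
): Int? {
    var candidate = currentlyPlaying + lookAhead
    while (candidate < totalFrames) {
        if (!targetFrames.containsKey(candidate)) return candidate  // edit this frame next
        candidate++                                                 // already edited, move on
    }
    return null                                                     // nothing left to edit
}
```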
  • the third processing mode is a video editing processing mode that responds to playback instructions.
  • the third processing mode can ensure that the video frame position on which the target video editing operation is being executed is always ahead of the video frame position, in the video to be processed, corresponding to the target video frame being played, that is, ahead of the playback progress.
  • the switching timing may be determined by one or more of the following factors.
  • 1. The randomness of user operations; for example, the user may input a play command and a pause command in quick succession. 2. Whether there is currently a process performing the target video editing operation on the video to be processed. 3. The target video frames already obtained by executing the target video editing operation.
  • assume that in steps S401 to S402 the target video editing operation has been performed on the 1st to 10th video frames based on the determined video editing processing mode, and the target video frames corresponding to the 1st to 10th video frames have been obtained.
  • Situation (1): assume the application program receives the playback instruction, updates the decoder, and immediately switches the video editing processing mode to the third processing mode, starting to play the target video frames from the position of the first frame. Then the application program receives a pause instruction. Since the interval between the playback instruction and the pause instruction is short, the second thread may not even have rendered to the 10th frame. Interrupting the process of the current video editing processing mode, that is, interrupting the first thread, re-updating the decoder, and switching the video editing processing mode would consume a large amount of computing resources and reduce video processing efficiency.
  • Situation (2): during playback, detect in real time whether there is a corresponding target video frame at the position of the next video frame to be played, and switch the video editing processing mode only when it is determined that there is none. In that case, during playback of the 1st to 10th frames, the first thread can continue to perform the target video editing operation on the video to be processed according to the original video editing processing mode, that is, without interrupting the currently executing process, which improves the utilization of computing resources.
  • it is also possible that when the application program receives the pause instruction, the playback progress has not yet reached the progress of the target video editing operation. For example, when playback reaches the 8th frame the application program receives a pause instruction, and during playback of the 1st to 8th frames the application program has also executed the target video editing operation on the 11th and 12th frames under the original video editing processing mode.
  • the application program may determine the switching timing of the video editing processing mode based on the above one or more factors in response to the playback instruction.
  • in the method provided in this embodiment, a playback instruction is received, and in response to the playback instruction the video editing processing mode can be switched to the third processing mode, so that the target video editing operation is performed on the video frames of the video to be processed according to the third processing mode.
  • through the third processing mode, it can be ensured that the position of the video frame on which the target video editing operation is being performed is always ahead of the playback progress during playback, which solves the problems of severe freezes and blurred frames when the user previews playback, improves video processing efficiency, and is beneficial to the user experience.
  • the second thread pauses the playback, and the first thread continues to perform the target video editing operation on the video frame of the video to be processed according to the third processing mode.
  • the second thread stops rendering the target video frame, that is, pauses the playback of the target video frame.
  • the process may not be interrupted, that is, the first thread continues to perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • computing resources of the electronic device can be effectively used to perform video editing, and the utilization rate of computing resources is improved, thereby improving video processing efficiency.
  • the following introduces several possible implementations corresponding to step S403 through the embodiments shown in FIG. 5 to FIG. 7.
  • Fig. 5 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 5, the method of this embodiment includes:
  • Steps S501 and S502 in this embodiment are similar to those in the embodiment S401 and S402 shown in FIG. 4 respectively. Refer to the detailed description of the embodiment shown in FIG. 4 . For the sake of brevity, details are not repeated here.
  • when the playback instruction is obtained, the first thread may be performing the target video editing operation on a certain video frame; in this case, the video editing processing mode is switched at the position of that video frame. Alternatively, the application may have just finished executing the target video editing operation on a certain video frame but not yet started on the next video frame; in this case, the position at which the video editing processing mode is switched is the position of the next video frame on which the target video editing operation is to be performed.
  • when the application obtains the playback instruction, it can immediately switch the video editing processing mode corresponding to the first thread to the third processing mode and, starting from the video frame position at which the mode is switched, perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • in the method provided in this embodiment, the playback instruction is acquired; in response to the playback instruction, the first thread, starting from the video frame position on which the target video editing operation is being executed when the playback instruction is acquired, performs the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • the implementation mode provided by this embodiment is logically simple and does not require complex judgments.
  • through the third processing mode, it can be ensured that the video frame position on which the target video editing operation is being performed is always ahead of the playback progress during playback, so as to solve the problems of preview playback freezes and blurred frames, improve video processing efficiency, and benefit the user experience.
  • the second thread pauses the playback, and the first thread continues to perform the target video editing operation on the video frame of the video to be processed according to the third processing mode.
  • the second thread stops rendering the target video frame, that is, pauses the playback of the target video frame.
  • the process may not be interrupted, that is, the first thread continues to perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • computing resources of the electronic device can be effectively used to perform video editing, and the utilization rate of computing resources is improved, thereby improving video processing efficiency.
  • Fig. 6 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 6, the method provided in this embodiment includes:
  • Steps S601 and S602 of this embodiment are similar to those of S401 and S402 in the embodiment shown in FIG. 4 , and reference may be made to the detailed description of the embodiment shown in FIG. 4 . For the sake of brevity, details are not repeated here.
  • in a possible implementation, the maximum duration for which the current video editing processing mode can continue to execute is first determined according to the position of the video frame on which the target video editing operation is currently being performed and the playback speed; then, according to the determined maximum duration and the preset processing duration for executing the target video editing operation on a single video frame, the number S of video frames that can still be processed in the current video editing processing mode within the maximum duration is determined; finally, the switching position of the video editing processing mode is determined from the S consecutive video frames starting from the video frame position on which the target video editing operation is currently being performed.
  • the switching position of the video editing processing mode may be any video frame in the S video frames consecutively starting from the video frame position where the target video editing operation is currently being performed.
  • to avoid switching too late, such that the processing progress of the target video editing operation cannot keep up with the rendering progress, a video frame closer to the video frame on which the target video editing operation is currently being performed can be set as the switching position.
  • the maximum duration may also be calculated according to the frame preceding the video frame on which the target video editing operation is currently being performed, thereby improving the accuracy of the calculation result.
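  • The calculation described above can be written out directly: the lead that the editing position has over the playback position bounds the time the current mode can keep running, and dividing by the per-frame processing time gives the number S of frames it can still handle. The sketch below paraphrases that arithmetic; all names and the choice of returning the latest admissible frame are assumptions.

```kotlin
// Illustrative calculation of the mode-switch position described above.
// currentEditPos and currentPlayPos are frame indices; fps is the playback rate in
// frames per second (derived from the playback speed); perFrameEditSeconds is the
// preset average time to run the target video editing operation on one frame.
fun switchPosition(
    currentEditPos: Int,
    currentPlayPos: Int,
    fps: Double,
    perFrameEditSeconds: Double
): Int {
    // Maximum time the current mode can keep running before playback catches up:
    // the lead (in frames) divided by the playback rate.
    val leadFrames = (currentEditPos - currentPlayPos).coerceAtLeast(0)
    val maxDurationSeconds = leadFrames / fps
    // Number of frames S that can still be processed in the current mode within that time.
    val s = (maxDurationSeconds / perFrameEditSeconds).toInt()
    // The switch position may be any of the S frames starting from the current editing
    // position; returning the last of them switches as late as possible, while a smaller
    // offset switches earlier and is safer if the editing progress risks falling behind.
    return currentEditPos + (s - 1).coerceAtLeast(0)
}
```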
  • in the method provided in this embodiment, the playback instruction is acquired; in response to the playback instruction, the switching position of the video editing processing mode is flexibly determined according to the processing duration required for a single video frame, the playback speed, and the video frame position on which the target video editing operation is currently being performed; when the progress of the target video editing operation reaches the switching position, the video editing processing mode is switched to the third processing mode, which improves the flexibility of switching the video editing processing mode.
  • through the third processing mode, it can be ensured that the video frame position on which the target video editing operation is being performed is always ahead of the playback progress during playback, so as to solve the problems of preview playback freezes and blurred frames, improve video processing efficiency, and benefit the user experience.
  • the second thread pauses the playback, and the first thread continues to perform the target video editing operation on the video frame of the video to be processed according to the third processing mode.
  • the second thread stops rendering the target video frame, that is, pauses the playback of the target video frame.
  • the process may not be interrupted, that is, the first thread continues to perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • computing resources of the electronic device can be effectively used to perform video editing, and the utilization rate of computing resources is improved, thereby improving video processing efficiency.
  • Fig. 7 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 7, the method provided in this embodiment includes:
  • Steps S701 and S702 of this embodiment are respectively similar to those of S401 and S402 in the embodiment shown in FIG. 4 . Reference may be made to the detailed description of the embodiment shown in FIG. 4 . For the sake of brevity, details are not repeated here.
  • it should be noted that performing the target video editing operation according to the third processing mode does not mean that the video frame on which the target video editing operation is performed is the next video frame to be played; the video frame position determined for editing need not correspond to the frame about to be played.
  • for example, if the next video frame to be played is the 10th frame of the video to be processed but the 10th frame has no corresponding target video frame, then, starting from the 10th frame, the first thread executes the target video editing operation according to the third processing mode, and the video frame position determined for the target video editing operation may be the 13th frame, the 15th frame, and so on in the video to be processed, so as to ensure that the video frame position on which the target video editing operation is being performed is ahead of the playback progress.
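  • A minimal sketch of the trigger just described: before playing each frame, the render side checks whether the next frame to play already has a target video frame; if not, the editing side is switched to the third processing mode and pointed at a frame ahead of the playback position. The mode label, the signalling structure, and the look-ahead gap are all illustrative assumptions.

```kotlin
// Illustrative render-side check corresponding to the description above: the video
// editing processing mode is switched only when the next frame to be played has no
// target video frame yet, and editing is aimed at a frame ahead of playback.
data class EditState(var mode: String, var nextEditPos: Int)

fun onBeforePlayingFrame(
    nextPlayPos: Int,
    targetFrames: Map<Int, String>,   // frame index -> existing target video frame
    state: EditState,
    lookAhead: Int = 3                // assumed gap kept between editing and playback
) {
    if (!targetFrames.containsKey(nextPlayPos) && state.mode != "third") {
        state.mode = "third"                        // switch to the playback-driven mode
        state.nextEditPos = nextPlayPos + lookAhead // e.g. aim at frame 13 when frame 10 is next to play
    }
}
```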
  • in the method provided in this embodiment, a playback instruction is received; in response to the playback instruction, the video editing processing mode is switched only when it is detected that the next video frame to be played has no corresponding target video frame, which minimizes the resource consumption caused by frequently switching video editing processing modes due to frequent user operations.
  • through the playback processing mode, it can be ensured that the video frame position on which the target video editing operation is being performed is always ahead of the playback progress during playback, so as to solve the problems of preview playback freezes and blurred frames, improve video processing efficiency, and benefit the user experience.
  • the second thread pauses the playback, and the first thread continues to perform the target video editing operation on the video frame of the video to be processed according to the third processing mode.
  • the second thread stops rendering the target video frame, that is, pauses the playback of the target video frame.
  • Since, when the pause playback instruction is received, there is a process performing the target video editing operation on the video to be processed, this process need not be interrupted; that is, the first thread continues to perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • computing resources of the electronic device can be effectively used to perform video editing, and the utilization rate of computing resources is improved, thereby improving video processing efficiency.
  • Fig. 8 is a flowchart of a video processing method provided by another embodiment.
  • The application program supports jumping between video frame positions; therefore, in the process of the application program playing each target video frame, a jump instruction can also be acquired. That is, on the basis of any one of the embodiments of Figures 1 to 7 above, the method of this embodiment can also be executed.
  • the method of this embodiment includes:
  • In addition, the second thread renders the target video frame corresponding to the video frame position indicated by the third jump instruction, and displays that target video frame.
  • In some cases, during playback, when the user inputs the third jump instruction, the application program may switch the playback state to paused; in other cases, during playback, when the user inputs the third jump instruction, the application program may instead start playing the target video frames from the video frame position indicated by the third jump instruction.
  • Since the video frame position indicated by the third jump instruction has a corresponding target video frame, the application program can display that target video frame, and because the application program can be in the paused preview state, no playback freeze will occur. On this basis, the application program can continue the process that is currently in progress.
  • In another case, the video frame position indicated by the third jump instruction has a corresponding target video frame, but there is currently no process performing the target video editing operation on the video to be processed. Therefore, starting from the frame following the video frame position indicated by the third jump instruction, whether a corresponding target video frame exists can be determined frame by frame, until the first video frame position without a corresponding target video frame is found; the target video editing operation is then performed on the video to be processed from that position according to the second processing mode.
  • the video editing processing mode can be switched to the third processing mode to ensure that the target video frame is played from the video frame position indicated by the third jump instruction.
  • the video frame position where the target video editing operation is being performed is always ahead of the current playback progress, so that the playback does not freeze and meets the user's preview needs.
  • The manner of determining the switching timing of the video editing processing mode can be implemented in the manner shown in any one of the embodiments of FIG. 5 to FIG. 7 above; reference may be made to the foregoing detailed descriptions, and details are not repeated here.
  • The method provided in this embodiment determines whether the video editing processing mode needs to be switched by analyzing whether a corresponding target video frame exists at the video frame position indicated by the third jump instruction and whether a process performing the target video editing operation currently exists.
  • the playback status is also considered to ensure real-time display effect.
  • Fig. 9 is a flowchart of a video processing method provided by another embodiment of the present disclosure. Referring to Figure 9, the method of this embodiment includes:
  • Steps S901 and S902 in this embodiment are respectively similar to those in the embodiment S101 and S102 shown in FIG. 1 , and reference may be made to the detailed description of the embodiment shown in FIG. 1 . For the sake of brevity, details are not repeated here.
  • the fourth processing mode is a mode in which target video editing operations are sequentially performed on video frames that do not have corresponding target video frames in the video to be processed.
  • the fourth processing mode is an adaptive video editing processing mode.
  • When the application program has executed the video editing processing according to any one or more of the first to third processing modes up to the last video frame position of the video to be processed, it can switch to the fourth processing mode to make full use of computing resources, sequentially performing the target video editing operation on the video frames of the video to be processed that do not yet have corresponding target video frames.
  • Next, two cases are introduced separately: the case in which the video to be processed includes one video segment, and the case in which it includes multiple video segments.
  • 1. The video to be processed includes one video segment
  • Assume the video to be processed includes one video segment, denoted video segment A1. When it is detected that the target video editing operation has been performed on the last video frame of video segment A1, the video editing processing mode is switched to the fourth processing mode, and, starting from the starting video frame position of video segment A1 and following the order of the video frames in A1, the target video editing operation is sequentially performed on the video frames on which it has not yet been performed.
  • For example, video segment A1 includes 100 frames; assume the target video editing operation has been performed on the 50th to 100th frames according to the second processing mode. When the target video editing operation on the 100th frame is completed, the mode is switched to the fourth processing mode, and, starting from the 1st frame, the target video editing operation is performed frame by frame on the 1st to 49th frames.
  • As another example, video segment A1 includes 100 frames; assume the target video editing operation has been performed on the 1st, 5th, 10th, 15th, and 20th frames according to the first processing mode; starting from the 30th frame, the mode is switched to the second processing mode and the target video editing operation is performed on the 30th to 100th frames; when it is detected that the target video editing operation has been performed on the 100th frame, the mode is switched to the fourth processing mode, and, starting from the 1st frame, the target video editing operation is sequentially performed on the 2nd to 4th, 6th to 9th, 11th to 14th, and 16th to 19th frames, on which no target video editing operation has been performed.
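  • A minimal sketch of the fourth ("self-spreading" / adaptive) mode described above, assuming edited results are tracked per frame index; the names and data structures are illustrative only.

```python
def adaptive_sweep(clip_frames, target_frames, edit_fn):
    """Fourth processing mode: walk the clip from its first frame and edit
    only the frames that do not yet have a corresponding target frame.

    clip_frames   -- list of decoded frames (any payload)
    target_frames -- dict mapping frame index -> edited result
    edit_fn       -- callable performing the target editing operation
    """
    for idx, frame in enumerate(clip_frames):
        if idx in target_frames:        # already processed earlier
            continue
        target_frames[idx] = edit_fn(frame)
    return target_frames


# Clip of 100 frames where the later half was already processed in the
# second mode; the sweep then fills in the remaining earlier frames.
done = {i: f"edited-{i}" for i in range(49, 100)}
adaptive_sweep(list(range(100)), done, lambda f: f"edited-{f}")
print(len(done))   # -> 100
```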
  • 2. The video to be processed includes multiple video segments
  • Assume the video to be processed includes multiple video segments, denoted video segments B1 to BN.
  • When it is detected that the target video editing operation has been performed on the last video frame of video segment Bn, the video editing processing mode is switched to the fourth processing mode, and, starting from the starting video frame position of video segment Bn and following the order of the video frames in Bn, the target video editing operation is sequentially performed on the video frames on which it has not yet been performed.
  • N is an integer greater than or equal to 2.
  • n is an integer greater than or equal to 1 and less than or equal to N.
  • That is, when the video to be processed includes multiple video segments and it is detected that the target video editing operation has been performed on the last video frame of a video segment, the processing task of the fourth processing mode can be preferentially executed within that video segment.
  • each video segment corresponds to a decoder
  • the target video editing operation task of the fourth processing mode is executed in the video segment, which can reduce the consumption of computing resources caused by updating the decoder due to video segment switching.
  • executing the target video editing operation task of the fourth processing mode in the video segment can make full use of computing resources and improve video processing efficiency.
  • For example, assume the video to be processed includes 3 video segments, namely video segments B1, B2, and B3.
  • Assume the target video editing operation is currently being performed on video segment B2 according to the second processing mode; when it is detected that the target video editing operation has been performed on the last video frame of video segment B2, the mode is switched to the fourth processing mode, and, starting from the starting video frame position of video segment B2, the target video editing operation is sequentially performed on the video frames in video segment B2 that do not have corresponding target video frames.
  • When all the video frames in video segment B2 have corresponding target video frames, video segment B2 can be marked as "completed". If no other operation instruction is received, the target video editing operation can then be performed on video segment B1 and video segment B3.
  • Here, the case where video segment B2 has completed the target video editing operation and the next video segment to be processed is video segment B1 is taken as an example.
  • If the target video editing operation has not been performed on video segment B1 before, none of the video frames in video segment B1 has a corresponding target video frame.
  • In this case, for video segment B1, starting from the starting video frame position of video segment B1, the first processing mode may be adopted first, and the target video editing operation is performed sequentially on the video frames of video segment B1.
  • On this basis, when it is detected that the target video editing operation has been performed on the last video frame of video segment B1, the mode is switched to the fourth processing mode, and the target video editing operation is sequentially performed on the video frames in video segment B1 that do not have corresponding target video frames.
  • Alternatively, the target video editing operation may have been performed on video segment B1 before, in which case some video frames in video segment B1 already have corresponding target video frames.
  • In that case, for video segment B1, starting from the starting video frame position of video segment B1, the fourth processing mode is adopted to sequentially perform the target video editing operation on the video frames in video segment B1 that do not have corresponding target video frames.
  • When the next video segment to be processed is video segment B3, the implementation is similar to that for the case where the next video segment to be processed is video segment B1; reference may be made to the foregoing detailed introduction, and details are not repeated here.
  • In the method provided in this embodiment, when it is detected that the target video editing operation has been performed on the last video frame of the video to be processed, the video editing processing mode is switched to the fourth processing mode, in which the target video editing operation is sequentially performed on the video frames of the video to be processed that do not have corresponding target video frames.
  • Fig. 10 is a schematic framework diagram of a video processing apparatus provided by an embodiment of the present disclosure. Referring to the video processing apparatus shown in FIG. 10, it includes three entity layers, namely: a graph layer, an interaction layer, and an algorithm processing layer.
  • The graph layer includes a decoding thread and a rendering thread; the decoding thread includes a decoding control unit (decoder reader unit), and the rendering thread includes a video preprocessing unit (clip preprocess unit).
  • the decoding control unit is used to update the algorithm processing tasks being executed in the algorithm processing layer according to the operation instructions input by the user, such as jump instructions, play instructions, etc.; and synchronously update the rendering position.
  • The video preprocessing unit is used to judge whether a corresponding target video frame exists at the current video frame position; if it exists, the target video frame is read and its texture is uploaded for rendering and on-screen display; if it does not exist, the data of the target video frame corresponding to the previous video frame on which the target video editing operation was performed is read, and its texture is uploaded for rendering and on-screen display.
  • the interaction layer includes: task scheduling module and cache control module.
  • the task scheduling module is used to generate video editing tasks and deliver video editing tasks to the algorithm processing layer.
  • It should be noted that the task scheduling module can generate video editing tasks based on the operation instructions input by the user, and can also generate video editing tasks based on information returned by the algorithm processing layer indicating that the video editing processing mode is to be switched to the self-spreading processing mode. Therefore, the task scheduling module can be regarded as performing task scheduling through an external trigger strategy and a self-spreading strategy.
  • In the framework of the entire video processing apparatus, the algorithm task is the minimum processing unit. Each time a video editing operation is added for some clip, an algorithm task, that is, an algorithm task (task Param) object, is created and a task identifier (task ID) is returned; an algorithm task object uniquely corresponds to one task identifier.
  • the algorithm task (task Param) object is a structure.
  • the algorithm task records all the parameters needed for algorithm processing on a clip and holds a reference to an algorithm processing instance (task Process wrapper).
  • The algorithm processing instance (task Process wrapper) holds a Lab algorithm model instance in one-to-one correspondence with it and manages the corresponding content.
  • The algorithm processing instance maintains a main file (MANE File), which records the relevant information of all algorithm results (i.e., the target video frames) of the file at that path (such as timestamp information indicating the video frame position, e.g., PTS information), and provides a query interface.
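  • As an illustrative sketch only (the actual MANE file format is not disclosed), a per-instance result index could record, for each produced target video frame, its PTS and where the result is stored, and expose a simple query interface:

```python
import bisect

class ResultIndex:
    """Hypothetical in-memory stand-in for the per-instance "main file":
    it records the PTS of every algorithm result and where the result is
    stored, and answers lookups by PTS."""

    def __init__(self) -> None:
        self._pts = []        # sorted list of PTS values
        self._paths = {}      # PTS -> storage path of the target frame

    def add(self, pts: int, path: str) -> None:
        if pts not in self._paths:
            bisect.insort(self._pts, pts)
        self._paths[pts] = path

    def has(self, pts: int) -> bool:
        return pts in self._paths

    def query(self, pts: int):
        """Return the stored path for this PTS, or None if the frame has
        not been processed yet."""
        return self._paths.get(pts)

index = ResultIndex()
index.add(40, "/cache/clip0/frame_40.rgba")
print(index.has(40), index.query(80))   # -> True None
```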
  • In practical applications, the data volume of a Lab algorithm model instance is relatively large; for example, the Lab algorithm model instance for smart matting is about 100 megabytes. To reduce the occurrence of out-of-memory (OOM) situations, a Lab algorithm model instance is destroyed immediately after use.
  • The cache control module is used to return target video frames to the rendering thread from the target video frames stored in the cache; when no corresponding target video frame exists in the cache, it instead returns to the rendering thread the access path of the target video frame in the external storage space; and if the target video frame does not exist in the external storage space either, it returns NULL to the rendering thread.
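  • A small sketch of the lookup order described for the cache control module (in-memory cache first, then the external storage path, then nothing); the function name, file-naming scheme, and variables are assumptions rather than API names from the text.

```python
import os

def fetch_for_rendering(pts, memory_cache, storage_dir):
    """Return either the cached frame data, or the on-disk access path of
    the target frame, or None when no result exists for this PTS."""
    frame = memory_cache.get(pts)
    if frame is not None:
        return frame                                  # hit in the cache
    path = os.path.join(storage_dir, f"frame_{pts}.bin")
    if os.path.exists(path):
        return path                                   # renderer loads it from disk
    return None                                       # nothing processed yet

# Usage: cache = {40: b"..."}; fetch_for_rendering(40, cache, "/tmp/out")
```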
  • the algorithm layer may also be called an algorithm interaction layer, an intermediate layer, and other names, which are not limited in the present disclosure.
  • the algorithm processing layer includes: an algorithm task management module, which provides external access interfaces.
  • the algorithm task management module provides interfaces for adding, deleting, and querying special effect processing tasks and progress.
  • the algorithm task management module itself also holds algorithm task (task Param) objects, algorithm processing instances and algorithm threads.
  • the essence of the algorithm thread is a thread object, which is a thread selected from the thread pool, and executes the special effect processing task in the corresponding message queue.
  • When an algorithm thread executes a video editing task, the task may include the following steps:
  • Step 1: update the corresponding decoder.
  • Step 2: correct the total number of frames corresponding to the video editing task.
  • Step 3: judge whether the number of processed frames equals the total number of frames, or whether the starting position of the target video editing operation is the last video frame of the video to be processed.
  • Step 4: when neither of the above conditions is satisfied, determine whether the video frame already has a corresponding target video frame; if it does, skip performing the target video editing operation on that video frame; if not, perform Step 5.
  • Step 5: determine, according to the corresponding video editing processing mode, whether the video frame is a video frame that needs to be discarded.
  • the video editing processing mode mentioned here may be any one of playback processing mode, jump processing mode, preprocessing mode and self-spreading processing mode.
  • If the video editing processing mode is the playback processing mode, whether the video frame needs to be discarded is determined according to the playback speed; if it is a video frame that needs to be discarded, the target video editing operation does not need to be performed on it; if it is not a video frame that needs to be discarded, go to Step 6.
  • If the video editing processing mode is the jump processing mode, each video frame is a video frame that does not need to be discarded.
  • If the video editing processing mode is the preprocessing mode, the key frames are video frames that do not need to be discarded and Step 6 is performed on them; the other frames are video frames that need to be discarded.
  • If the video editing processing mode is the self-spreading processing mode, video frames without corresponding target video frames are video frames that do not need to be discarded, while video frames that already have corresponding target video frames are video frames that need to be discarded.
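  • The following hypothetical helper restates the per-mode rules above in code form (playback mode drops frames according to playback speed; jump mode keeps every frame; preprocessing mode keeps only key frames; self-spreading mode keeps only frames without results). The speed-based dropping rule shown is an assumption, since the text does not give an exact formula.

```python
from enum import Enum, auto

class Mode(Enum):
    PLAYBACK = auto()      # third processing mode
    JUMP = auto()          # second processing mode
    PREPROCESS = auto()    # first processing mode
    ADAPTIVE = auto()      # fourth ("self-spreading") processing mode

def should_drop(idx: int, mode: Mode, *, is_key_frame: bool,
                has_result: bool, playback_speed: float = 1.0) -> bool:
    """Return True when the algorithm thread may skip this frame."""
    if mode is Mode.PLAYBACK:
        # Assumed rule: at n-times speed keep roughly every n-th frame.
        step = max(int(round(playback_speed)), 1)
        return idx % step != 0
    if mode is Mode.JUMP:
        return False                       # every frame is processed
    if mode is Mode.PREPROCESS:
        return not is_key_frame            # only key frames are processed
    if mode is Mode.ADAPTIVE:
        return has_result                  # skip frames already processed
    return False

print(should_drop(3, Mode.PLAYBACK, is_key_frame=False,
                  has_result=False, playback_speed=2.0))   # -> True
```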
  • Step 6: perform the target video editing operation on the video frame to obtain the corresponding target video frame.
  • Step 7: store the target video frame corresponding to the video frame.
  • the target video frame corresponding to the video frame can be stored in the external storage space and the cache at the same time.
  • When the rendering thread performs rendering, if a corresponding target video frame exists in the cache, the data can be read directly from the cache; if not, the corresponding target video frame is read from the external storage space. In this manner, the number of data input/output operations can be reduced, and video processing efficiency can be improved.
  • the relevant information of the target video frame can be passed to the callback service, so that the callback service calls the service thread to perform related operations.
  • the callback service can be used to feed back the execution progress of the current target video editing to the service thread.
  • In the framework of this video processing apparatus, the algorithm thread can perform the target video editing operation on the video to be processed based on a variety of video editing processing modes, ensuring that the execution progress of the algorithm thread is always ahead of the progress of the rendering thread, thereby solving the problem of user preview freezes.
  • In addition, the algorithm thread can execute independently without affecting the encoding thread and the rendering thread of the upper layer, realizing parallel processing among the encoding thread, the algorithm thread, and the rendering thread; this solves the problem of the rendering thread being blocked by the time-consuming processing of the algorithm thread, and thus solves the problem of user preview freezes.
  • FIG. 11 is a schematic diagram of the life cycle of each thread in the embodiment shown in FIG. 10 provided by an embodiment of the present disclosure. As shown in Figure 11, it includes: frame drawing thread, rendering thread, business thread and algorithm thread.
  • The frame drawing thread is used to execute synchronous frame drawing; the rendering thread is used to render the target video frames for playback; the business thread is used to control the algorithm thread to execute the target video editing operation on each video frame of the video to be processed according to the determined video editing processing mode, and to control the pausing and starting of playback; the algorithm thread is used to perform the target video editing operation on the corresponding video frames according to the instructions of the business thread.
  • When a video editing instruction to perform the target video editing operation on each video frame of the video to be processed is received, the business thread first sets the video editing processing mode to the preprocessing mode or the jump processing mode, controls the algorithm thread to start, and has it run the algorithm according to the set video editing processing mode to perform the target video editing operation on the video frames of the video to be processed. Referring to FIG. 11, in this process, the corresponding target video frames need to be buffered sequentially.
  • When playback is required, the business thread first suspends the algorithm thread, sets the video editing processing mode to the playback processing mode, and controls the algorithm thread to run the algorithm according to the playback processing mode so as to perform the target video editing operation on the video frames to be processed. Referring to FIG. 11, during this process, the corresponding target video frames still need to be buffered sequentially. Synchronously, the business thread controls the rendering thread to read the corresponding target video frames from the cache for rendering and display, for the user to preview.
  • When a pause playback instruction is received, the business thread controls the rendering thread to pause the playback.
  • When a jump to a video frame position occurs, the business thread first suspends the algorithm thread, sets the video editing processing mode to the jump processing mode, and controls the algorithm thread to run the algorithm according to the jump processing mode, starting from the final jump video frame position, to perform the target video editing operation on the video frames of the video to be processed. Referring to FIG. 11, during this process, buffering of the corresponding target video frames still needs to start synchronously from the final jump video frame position. Synchronously, the business thread controls the rendering thread to display the target video frame at the corresponding video frame position.
  • After the target video editing operation has been performed on the last video frame of the video to be processed, the business thread sets the video editing processing mode to the self-spreading processing mode and controls the algorithm thread to start from the position of the first video frame of the video to be processed and sequentially perform the target video editing operation on the video frames on which it has not yet been performed. Referring to FIG. 11, during this process, the corresponding target video frames still need to be buffered sequentially.
  • Finally, the business thread controls the algorithm thread to stop running the algorithm and reclaims the resources occupied by the algorithm thread; afterwards, the rendering thread and the algorithm thread are recycled.
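  • Purely as an illustrative sketch of the control flow shown in FIG. 11 (the real implementation is not disclosed), a business thread could drive the other threads roughly as follows; the stub classes and method names are hypothetical stand-ins.

```python
class AlgoThreadStub:
    """Stand-in for the algorithm thread: it only records control calls."""
    def set_mode(self, mode): print("algo mode ->", mode)
    def start(self, from_frame=0): print("algo start at frame", from_frame)
    def suspend(self): print("algo suspended")
    def stop(self): print("algo stopped, resources reclaimed")

class RendererStub:
    """Stand-in for the rendering thread."""
    def play(self): print("render: play from cache")
    def pause(self): print("render: pause")
    def seek(self, pos): print("render: show frame", pos)
    def stop(self): print("render: stop")

def business_thread_flow(algo, renderer):
    # Video editing instruction: preprocess (or jump) mode, results buffered.
    algo.set_mode("preprocess"); algo.start()
    # Playback: suspend, switch to playback mode, restart, render from cache.
    algo.suspend(); algo.set_mode("playback"); algo.start(); renderer.play()
    # Pause: only rendering pauses; editing keeps running.
    renderer.pause()
    # Jump: switch to jump mode from the final jump position.
    algo.suspend(); algo.set_mode("jump"); algo.start(from_frame=42); renderer.seek(42)
    # Last frame done: switch to the self-spreading mode to fill the gaps.
    algo.set_mode("adaptive"); algo.start(from_frame=0)
    # Teardown: stop the algorithm and reclaim its resources.
    algo.stop(); renderer.stop()

business_thread_flow(AlgoThreadStub(), RendererStub())
```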
  • Fig. 12 is a schematic structural diagram of a video processing device provided by an embodiment of the present disclosure.
  • the video processing device 1200 provided in this embodiment includes:
  • The first processing module 1210 is configured to, through a first thread and starting from a specified video frame position, perform the target video editing operation on the video frame at the specified video frame position in the video to be processed and pre-perform the target video editing operation on each video frame after the specified video frame position, and to acquire each target video frame obtained by performing the target video editing operation.
  • The second processing module 1220 is configured to render each of the target video frames through a second thread in response to a playback instruction, so as to display each of the target video frames; wherein the video frame position at which the target video editing operation is being performed is ahead of the video frame position, in the video to be processed, corresponding to the target video frame being displayed.
  • the video processing apparatus 1200 may further include: a display module 1240 .
  • the display module 1240 is configured to display each target video frame.
  • Optionally, the specified video frame position is determined according to the video frame position located when the video editing instruction is acquired, wherein the video editing instruction is used to instruct that the target video editing operation be performed on the video to be processed; the video frame position located when the video editing instruction is acquired is a preset video frame position or a video frame position specified by a first jump instruction.
  • the video processing apparatus 1200 may further include: an acquisition module 1230 .
  • the acquiring module 1230 is configured to acquire a video editing instruction, and acquire a first jump instruction.
  • Optionally, the specified video frame position is determined according to the video frame position specified by a second jump instruction, and the second jump instruction is an instruction that follows the video editing instruction used to instruct that the target video editing operation be performed on the video to be processed.
  • the acquiring module 1230 is also configured to acquire the second jump instruction.
  • Optionally, before the first processing module 1210, through the first thread and starting from the specified video frame position, performs the target video editing operation on the video frame at the specified video frame position in the video to be processed, pre-performs the target video editing operation on each video frame after the specified video frame position, and acquires each target video frame obtained by performing the target video editing operation, the first processing module 1210 is further configured to determine, according to the specified video frame position, the video editing processing mode corresponding to the first thread; wherein the video editing processing mode is a first processing mode or a second processing mode, the first processing mode being a mode in which the target video editing operation is performed on key frames, and the second processing mode being a mode in which the target video editing operation is performed frame by frame.
  • Optionally, the first processing module 1210 is specifically configured to: if the specified video frame position is the preset video frame position, determine that the video editing processing mode corresponding to the first thread is the first processing mode; and if the specified video frame position is the video frame position specified by the first jump instruction, determine that the video editing processing mode corresponding to the first thread is the first processing mode or the second processing mode.
  • Optionally, the first processing module 1210 is specifically configured to determine whether a corresponding target video frame exists at the specified video frame position; if no corresponding target video frame exists at the specified video frame position, the video editing processing mode corresponding to the first thread is determined according to whether the specified video frame position is the preset video frame position or the video frame position specified by the first jump instruction.
  • Optionally, the first processing module 1210 is specifically configured to determine that the video editing processing mode corresponding to the first thread is the second processing mode if a corresponding target video frame exists at the specified video frame position.
  • Optionally, the second processing module 1220 is specifically configured to determine, frame by frame from the specified video frame position onward, whether a corresponding target video frame exists, until the first video frame position without a corresponding target video frame is determined; and, through the first thread and according to the second processing mode, the target video editing operation is performed on the video frames of the video to be processed starting from that first video frame position without a corresponding target video frame.
  • Optionally, before the first processing module 1210, through the first thread and starting from the specified video frame position, performs the target video editing operation on the video frame at the specified video frame position in the video to be processed, pre-performs the target video editing operation on each subsequent video frame, and acquires each target video frame obtained by performing the target video editing operation, the first processing module 1210 is further configured to determine that the video editing processing mode corresponding to the first thread is the second processing mode when it is determined that no corresponding target video frame exists at the specified video frame position.
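  • A hypothetical decision helper summarising the mode-selection rules described above (preset position leads to the first mode; a jump-specified position leads to the first or second mode; an already-processed position leads to the second mode after scanning forward to the first unprocessed frame); the tuple format and the origin labels are illustrative assumptions.

```python
def choose_initial_mode(specified_pos: int,
                        has_result,                 # callable: frame index -> bool
                        origin: str,                # "preset", "first_jump", "second_jump"
                        total_frames: int):
    """Return (mode, start_frame) for the first thread.

    Modes: 1 = key-frame preprocessing mode, 2 = frame-by-frame (seek) mode.
    """
    if not has_result(specified_pos):
        if origin == "preset":
            return 1, specified_pos
        # Jump-specified position: the text allows mode 1 or 2; 2 is chosen here.
        return 2, specified_pos
    # A result already exists: scan forward to the first unprocessed frame
    # and continue there frame by frame (second mode).
    pos = specified_pos
    while pos < total_frames and has_result(pos):
        pos += 1
    return 2, min(pos, total_frames - 1)


done = set(range(0, 20))
print(choose_initial_mode(10, lambda f: f in done, "second_jump", 100))  # -> (2, 20)
```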
  • Optionally, if a corresponding target video frame exists at the specified video frame position and the first thread is currently performing the target video editing operation on the video to be processed, the first processing module 1210 does not interrupt the first thread, so that the first thread continues to perform the target video editing operation on the video to be processed.
  • Optionally, the first processing module 1210 is further configured to switch the video editing processing mode to a third processing mode in response to the playback instruction and, through the first thread, perform the target video editing operation on the video frames of the video to be processed according to the third processing mode; wherein the third processing mode is a mode in which the video frame position for performing the target video editing operation is determined according to the playback speed.
  • the first processing module 1210 is specifically configured to detect in real time whether there is a next target video frame to be played in response to the playback instruction; when it is detected that there is no next target video frame to be played , using the first thread, starting from the position of the next target video frame to be played, and performing the target video editing operation on the video frame of the video to be processed according to the third processing mode.
  • Optionally, the first processing module 1210 is specifically configured to, in response to the playback instruction and through the first thread, perform the target video editing operation on the video frames of the video to be processed according to the third processing mode, starting from the video frame position at which the target video editing operation is being performed when the playback instruction is acquired.
  • Optionally, the first processing module 1210 is specifically configured to, in response to the playback instruction, determine the switching position corresponding to the third processing mode according to a preset processing duration for performing the target video editing operation on a single video frame, the playback speed, and the video frame position at which the target video editing operation is currently being performed; and, through the first thread and starting from the switching position, perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • Optionally, the first processing module 1210 is further configured to, in response to a pause playback instruction, suspend the rendering of the target video frames by the second thread, while the first thread continues to perform the target video editing operation on the video frames of the video to be processed according to the third processing mode.
  • the acquiring module 1230 is also configured to acquire the pause playback instruction.
  • Optionally, the first processing module 1210 is further configured to, after the target video editing operation has been performed on the last video frame of the video to be processed, perform, through the first thread and starting from the position of the starting video frame of the video to be processed, the target video editing operation on the video frames of the video to be processed according to a fourth processing mode; wherein the fourth processing mode is a mode in which the target video editing operation is sequentially performed on the video frames of the video to be processed that do not have corresponding target video frames.
  • the video processing device provided in this embodiment can be used to implement the technical solutions of any of the foregoing method embodiments, and its implementation principles and technical effects are similar, and reference can be made to the detailed description of the foregoing method embodiments. For the sake of brevity, details are not repeated here.
  • FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • an electronic device 1300 provided in this embodiment includes: a memory 1301 and a processor 1302 .
  • the memory 1301 may be an independent physical unit, and may be connected with the processor 1302 through a bus 1303 .
  • the memory 1301 and the processor 1302 may also be integrated together, implemented by hardware, and the like.
  • the memory 1301 is used to store program instructions, and the processor 1302 invokes the program instructions to execute the operations of any one of the above method embodiments.
  • the foregoing electronic device 1300 may also include only the processor 1302 .
  • the memory 1301 for storing programs is located outside the electronic device 1300, and the processor 1302 is connected to the memory through circuits/wires for reading and executing the programs stored in the memory.
  • the processor 1302 may be a central processing unit (central processing unit, CPU), a network processor (network processor, NP) or a combination of CPU and NP.
  • the processor 1302 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD) or a combination thereof.
  • the aforementioned PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), a general array logic (generic array logic, GAL) or any combination thereof.
  • the memory 1301 may include a volatile memory (volatile memory), such as a random-access memory (random-access memory, RAM); the memory may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory) ), a hard disk (hard disk drive, HDD) or a solid-state drive (solid-state drive, SSD); the memory can also include a combination of the above-mentioned types of memory.
  • the present disclosure also provides a readable storage medium, including: computer program instructions; when the computer program instructions are executed by at least one processor of the electronic device, the video processing method shown in any one of the above method embodiments is implemented.
  • the present disclosure also provides a computer program product.
  • When the computer program product is executed by a computer, the computer is caused to implement the video processing method shown in any one of the above method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video processing method and apparatus, an electronic device, and a readable storage medium. The method includes: starting from a specified video frame position, performing a target video editing operation on a video to be processed through a first thread, and acquiring each target video frame obtained by performing the target video editing operation; and, in response to a playback instruction, rendering each target video frame through a second thread so as to display each target video frame. By pre-performing the target video editing operation on the video frames after the specified video frame position, the video frame position at which the target video editing operation is being performed is kept ahead of the video frame position of the target video frame being displayed, which solves the problem of preview freezing and meets the user's preview needs. In addition, by decoupling the execution of the target video editing operation from rendering and executing them in different threads, the problem in streaming frameworks of the rendering thread being blocked by the target video editing operation is solved, which helps improve the preview effect.

Description

视频处理方法、装置、电子设备及可读存储介质
相关申请的交叉引用
本申请是以申请号为202111347351.6,申请日为2021年11月15日的中国申请为基础,并主张其优先权,该中国申请的公开内容在此作为整体引入本申请中。
技术领域
本公开涉及互联网技术领域,尤其涉及一种视频处理方法、装置、电子设备及可读存储介质。
背景技术
随着互联网技术的不断发展,各种各样的应用程序(application,APP)被开发出来,其中,具有视频编辑功能的应用程序尤其受到人们的喜爱。目前,能够提供视频编辑功能的应用程序能够实现很多的视频编辑操作,例如,智能抠图,这类视频编辑操作需要对视频的每一个视频帧图像进行相应的处理。
相关技术中,针对这类视频编辑操作通常采用流式处理框架。例如,进行智能抠图时,会逐帧进行编码、抠图处理、渲染以及显示。
发明内容
为了解决上述技术问题或者至少部分地解决上述技术问题,本公开提供了一种视频处理方法、装置、电子设备及可读存储介质。
第一方面,本公开提供了一种视频处理方法,包括:
通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧;
响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
作为一种可能的实施方式,所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的;其中,所述视频编辑指令用于指示对所述待处理视频执行所述目标视频编辑操作;获取所述视频编辑指令时定位的视频帧位置为预设的视频帧位置或者第一跳转指令指定的视频帧位置。
作为一种可能的实施方式,所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的,且所述第二跳转指令为用于对所述待处理视频执行所述目标视频编辑操作的视频编辑指令之后的指令。
作为一种可能的实施方式,若所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的;所述通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧之前,所述方法还包括:
根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式;其中,所述视频编辑处理模式为第一处理模式或者第二处理模式,所述第一处理模式为对关键帧执行所述目标视频编辑操作的模式,所述第二处理模式为逐帧执行所述目标视频编辑操作的模式。
作为一种可能的实施方式,所述根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式,包括:
若所述指定的视频帧位置为所述预设的视频帧位置,则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式;
若所述指定的视频帧位置为第一跳转指令指定的视频帧位置,则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式或者所述第二处理模式。
作为一种可能的实施方式,所述根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式,包括:
确定所述指定的视频帧位置是否存在相应目标视频帧;
若所述指定的视频帧位置不存在相应目标视频帧,则根据所述指定的视频帧位置为所述预设的视频帧位置或者所述第一跳转指令指定的视频帧位 置,确定所述第一线程对应的视频编辑处理模式。
作为一种可能的实施方式,所述方法还包括:
若所述指定的视频帧位置存在相应目标视频帧,则确定所述第一线程对应的视频编辑处理模式为所述第二处理模式;
所述通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧,包括:
从所述指定的视频帧位置开始向后逐帧确定是否存在相应目标视频帧,直至确定第一个不存在相应目标视频帧的视频帧位置;
通过所述第一线程,按照所述第二处理模式,从所述第一个不存在相应目标视频帧的视频帧位置开始,对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,若所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的;所述通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧之前,所述方法还包括:
若所述指定的视频帧位置不存在相应目标视频帧,则确定所述第一线程对应的视频编辑处理模式为第二处理模式。
作为一种可能的实施方式,所述方法还包括:
若所述指定的视频帧位置存在相应目标视频帧,且当前所述第一线程正在对所述待处理视频执行所述目标视频编辑操作,则不打断所述第一线程,使得所述第一线程继续对所述待处理视频执行所述目标视频编辑操作。
作为一种可能的实施方式,所述方法还包括:
响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,通过所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;其中,所述第三处理模式为根据播放速度确定执行所述目标视频编辑操作的视频帧位置的模式。
作为一种可能的实施方式,所述响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,由所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作,包括:
响应于所述播放指令,实时检测是否存在要播放的下一个目标视频帧;
当检测到不存在要播放的下一个目标视频帧时,由所述第一线程,从所述要播放的下一个目标视频帧的位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,所述响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,由所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作,包括:
响应于所述播放指令,通过所述第一线程,从获取所述播放指令时正在执行所述目标视频编辑操作的视频帧位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,所述响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,由所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作,包括:
响应于所述播放指令,根据预设的单个视频帧执行所述目标视频编辑操作的处理时长、播放速度以及当前正在执行所述目标视频编辑操作的视频帧位置,确定所述第三处理模式对应的切换位置;
通过所述第一线程,从所述切换位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,所述方法还包括:
获取暂停播放指令;
响应于所述暂停播放指令,暂停所述第二线程对所述目标视频帧进行渲染;且所述第一线程继续按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,所述方法还包括:
对所述待处理视频的最后一个视频帧执行所述目标视频编辑操作后,通过所述第一线程,从所述待处理视频的起始视频帧的位置开始,按照第四处 理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;
其中,所述第四处理模式为依次对所述待处理视频中不存在相应目标视频帧的视频帧执行所述目标视频编辑操作的模式。
第二方面,本公开提供一种视频处理装置,包括:
第一处理模块,用于通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧;
第二处理模块,用于响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
第三方面,本公开提供一种电子设备,包括:存储器和处理器;
所述存储器被配置为存储计算机程序指令;
所述处理器被配置为执行所述计算机程序指令,使得所述电子设备实现如第一方面任一项所述的视频处理方法。
第四方面,本公开提供一种可读存储介质,包括:计算机程序指令;所述计算机程序指令被电子设备的至少一个处理器执行时,使得所述电子设备实现如第一方面任一项所述的视频处理方法。
第五方面,本公开提供一种计算机程序产品,当所述计算机程序产品被计算机执行时,使得所述计算机实现如第一方面任一项所述的视频处理方法。
本公开提供一种视频处理方法、装置、电子设备及可读存储介质,其中,该方法包括:通过第一线程从指定的视频帧位置开始,对待处理视频执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧;响应于播放指令,通过第二线程对各目标视频帧进行渲染,以展示各目标视频帧。。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
为了更清楚地说明本公开实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本公开一实施例提供的视频处理方法的流程图;
图2为本公开另一实施例提供的视频处理方法的流程图;
图3为本公开另一实施例提供的视频处理方法的流程图;
图4为本公开另一实施例提供的视频处理方法的流程图;
图5为本公开另一实施例提供的视频处理方法的流程图;
图6为本公开另一实施例提供的视频处理方法的流程图;
图7为本公开另一实施例提供的视频处理方法的流程图;
图8为本公开另一实施例提供的视频处理方法的流程图;
图9为本公开另一实施例提供的视频处理方法的流程图;
图10为本公开一实施例提供的视频处理装置的框架示意图;
图11为本公开一实施例提供的图10所示实施例中各模块的生命周期示意图;
图12为本公开一实施例提供的视频处理装置的结构示意图;
图13为本公开一实施例提供的电子设备的结构示意图。
具体实施方式
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
相关技术中采用流式处理框架进行智能抠图处理的过程中,若用户进行预览播放,极易出现严重的显示卡顿问题,严重影响用户体验。
本公开提供的视频处理方法可以由视频处理装置来执行,其中,视频处 理装置可以通过任意的软件和/或硬件的方式实现。示例性地,视频处理装置可以是平板电脑、手机(如折叠屏手机、大屏手机等)、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能电视、智慧屏、高清电视、4K电视、智能音箱、智能投影仪等物联网(the internet of things,IOT)设备,本公开对电子设备的具体类型不作任何限制。
需要说明的是,本公开提供的视频处理方法至少可以应用于具有以下特性的场景:
1、需要对待处理视频的每个视频帧进行视频编辑操作,生成并保存相应的目标视频帧(即视频编辑处理结果),用于渲染显示。
2、视频编辑操作较为耗时,但需要提供更好的预览播放效果。
3、视频编辑操作需要实时跟进对齐用户输入的操作指令,例如,用户的输入的跳转指令(也可以称为seek指令)、更新指令(也可以称为refresh指令)、播放指令、变速播放指令、暂停播放指令等等。
针对具有上述特性的场景,均可以采用本公开提供的视频编辑处理方法,以保证用户预览效果。
为了更加清楚地介绍本公开提供的视频处理方法,在下述实施例中,以电子设备中安装了视频编辑应用程序(下文简称为:应用程序)为例,详细说明本公开提供的视频处理方法。
图1为本公开一实施例提供的视频处理方法的流程图。参照图1所示,本实施例提供的方法包括:
S101、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
待处理视频为需要执行目标视频编辑操作的素材。本公开对于待处理视频的时长、存储格式、分辨率、视频内容以及获取方式等等均不做限定。
第一线程为用于对待处理视频执行目标视频编辑操作的线程,第一线程 也可以称为算法线程。算法线程可以调用与目标视频编辑操作对应的算法实例,以实现对待处理视频执行目标视频编辑操作。此外,本公开对于确定第一线程的方式不做限定。
其中,本公开对于目标视频编辑操作的类型不作限定,示例性地,目标视频编辑操作可以但不限于包括:智能抠图、添加贴纸、滤镜等。
指定的视频帧位置可以为待处理视频中任一视频帧的位置。示例性地,指定的视频帧位置可以为获取视频编辑指令时定位的视频帧位置,或者,也可以为用户通过触发操作指定的视频帧位置。
视频编辑指令用于指示对待处理视频的每一个视频帧执行目标视频编辑操作。本公开对于获取视频编辑指令的实现方式不作限定,例如,用户可以通过操作应用程序提供的相应控件向应用程序输入视频编辑指令。
下面以不同场景为例,示例性地介绍本步骤的实现方式:
一种可能的实施方式,用户可以通过应用程序显示的素材选择页面,选择待处理视频,并导入应用程序中进行视频编辑。应用程序可以在用户界面上显示视频编辑页面,其中,视频编辑页面中可以包括多种不同的视频编辑操作对应的控件,其中包括:目标视频编辑操作(如智能抠图)对应的目标控件;当应用程序接收到用户针对目标控件的触发操作(如点击操作)时,生成用于指示对待处理视频的每一个视频帧执行智能抠图的视频编辑指令,应用程序响应于该视频编辑指令,由指定的视频帧位置开始对待处理视频执行目标视频编辑操作。
该场景中,导入待处理视频时,应用程序可以默认定位至预设的视频帧位置,由于后续用户并未输入任何触发操作,因此,定位的视频帧位置也不会发生变化,当获取视频编辑指令时定位的视频帧位置仍为上述预设的视频帧位置,即指定的视频帧位置为预设的视频帧位置。其中,预设的视频帧位置可以为待处理视频的任一视频帧,例如,预设的视频帧位置为待处理视频的起始视频帧的位置。
另一种可能的实施方式,用户可以通过应用程序显示的素材选择页面,选择待处理视频,并导入应用程序中进行视频编辑。应用程序可以在用户界面上显示视频编辑页面,其中,视频编辑页面中可以包括多种不同的视频编 辑操作对应的控件,其中包括:目标视频编辑操作(如智能抠图)对应的目标控件;当应用程序接收到用户针对目标控件的触发操作(如点击操作)时,生成用于指示对待处理视频的每一个视频帧执行智能抠图的视频编辑指令,应用程序响应于该视频编辑指令,由指定的视频帧位置开始对待处理视频执行目标视频编辑操作。其中,用户输入针对目标控件的触发操作之前,还可以输入第一跳转指令,用于将定位的位置从预设的视频帧位置跳转至第一跳转指令所指示的视频帧位置。
该场景中,指定的视频帧位置即为第一跳转指令所指示的视频帧位置。需要说明的是,用户可以输入一个或者多个第一跳转指令,通常以最后一个跳转指令指示的视频帧位置作为指定的视频帧位置。
另一种可能的实施方式,在上述第一种可能的实施方式的基础上,第一线程已开始从预设的视频帧位置开始执行目标视频编辑操作,在执行目标视频编辑操作的过程中,用户可以输入第二跳转指令,则应用程序响应于第二跳转指令,第一线程需要从第二跳转指令指示的视频帧位置开始执行目标视频编辑操作。
该场景中,指定的视频帧位置即为第二跳转指令指示的视频帧位置。其中,用户可以输入多个第二跳转指令,第一线程需要分别响应每个第二跳转指令,从第二跳转指令指示的视频帧位置开始执行目标视频编辑操作。
S102、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
在第一线程对待处理视频执行目标视频编辑操作的过程中,应用程序支持用户预览执行了目标视频编辑操作得到的各目标视频帧,即应用程序支持用户预览播放各目标视频帧。
一种可能的实施方式,应用程序接收到播放指令时,响应于该播放指令,由第二线程按照播放速度对各目标视频帧进行渲染,以播放各目标视频帧。
需要说明的是,在第一线程对待处理视频执行目标视频编辑操作的过程中,当对该指定视频帧位置的视频帧执行目标视频编辑操作得到指定的视频 帧位置对应的目标视频帧时,第二线程可自动对指定的视频帧位置对应的目标视频帧进行渲染,以展示指定的视频帧位置对应的目标视频帧。从而保证在应用程序接收到播放指令之前,用户预览指定的视频帧位置对应的处理效果。
另一种可能的实施方式,在第一线程对待处理视频执行目标视频编辑操作的过程中,当对该指定视频帧位置的视频帧执行目标视频编辑操作得到指定的视频帧位置对应的目标视频帧时,第二线程可自动从指定的视频帧位置开始对各目标视频帧进行渲染,即,无需用户触发播放指令。
无论采用上述何种方式,本实施例提供的方法通过保证当前正在执行目标视频编辑操作的视频帧位置始终领先于当前正在展示的目标视频帧在待处理视频中的位置,保证有已经执行了目标视频编辑操作的目标视频帧供预览,因此,能够解决用户预览卡顿的问题。
为了使本方案更加清楚,这里通过一示例对当前正在执行目标视频编辑操作的视频帧位置领先于当前正在展示的目标视频帧在待处理视频中的位置的含义进行说明:假设,待处理视频有100帧,按照先后顺序依次为第1帧至第100帧,当前正在展示的是第20帧对应的目标视频帧,正在执行目标视频编辑操作的视频帧位置为第30帧,则表示当前正在执行目标视频编辑操作的视频帧位置领先于当前正在展示的目标视频帧在待处理视频中的位置。
本实施例提供的方法,通过第一线程从指定的视频帧位置开始,对待处理视频执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧;响应于播放指令,通过第二线程对各目标视频帧进行渲染,以展示各目标视频帧。本公开通过预先对指定的视频帧位置之后的各视频帧执行目标视频编辑操作,保证正在执行目标视频编辑操作的视频帧位置始终领先于正在展示的目标视频帧在待处理视频帧中的视频帧位置,从而解决了在对待处理视频执行目标视频编辑操作的过程中会发生预览卡顿的问题,保证用户的预览需求;此外,由于执行目标视频编辑操作较为耗时,若算法处理的耗时与渲染线程耗时不匹配,容易出现阻塞渲染线程,因此,本公开通过将执行视频编辑处理与渲染进行解耦,由不同的线程执行,解决了流式框架中由于执行目标视频编辑操作阻塞渲染线程的问题,有利于提高预览效果。
在图1所示实施例的基础上,本公开提供的视频处理方法可以至少提供一下几种视频编辑处理模式,当应用程序接收到视频编辑指令时,可以根据预先设定的策略,确定第一线程对应的视频编辑处理模式。其中,预先设定的策略可以但不限与待处理视频当前的视频帧位置、待处理视频执行目标视频编辑操作的整体进度、播放状态(暂停播放状态或者播放状态)以及用户的触发操作等一个或者多个因素相关。
其中,应用程序至少支持一下四种视频编辑处理模式,分别为:第一处理模式、第二处理模式、第三处理模式以及第四处理模式。
其中,第一处理模式为对待处理视频的关键帧执行目标视频编辑操作的模式,第一处理模式也可以称为预处理模式、锚点模式或者ancher mode等其他名称。
第二处理模式为逐帧执行目标视频编辑操作的模式,第二处理模式也可以称为跳转处理模式、seek mode等其他名称。
第三处理模式为基于播放进度确定执行目标视频编辑操作的视频帧位置的模式,第三处理模式也可以称为播放处理模式、playback mode等其他名称。
第四处理模式为依次对待处理视频中不存在相应目标视频帧的视频帧执行目标视频编辑操作的模式。即,第四处理模式为按照从待处理视频的起始视频帧至最后一个视频帧的方向,依次对未执行目标视频编辑操作的视频帧执行目标视频编辑操作的模式。第四处理模式也可以称为自蔓延处理模式、adaptive mode等其他名称。
应用程序可以基于上述预先设定的策略,确定不同场景下第一线程执行目标视频编辑操作对应的视频编辑处理模式。
下面通过几个具体实施例,详细介绍不同场景中,第一线程以何种视频编辑处理模式对待处理视频执行目标视频编辑操作。
图2为本公开一实施例提供的视频处理方法的流程图。参照图2所示,本实施例提供的方法包括:
S201、根据指定的视频帧位置,确定第一线程对应的视频编辑处理模式,所述视频编辑处理模式为第一处理模式或者第二处理模式。
结合图1所示实施例所述,指定的视频帧位置可以为预设的视频帧位置, 也可以为第一跳转指令指示的视频帧位置,或者,还可以是第二跳转指令指示的视频帧位置。在不同的情况下,第一线程对应的视频编辑处理模式会有所差异。
其中,第一处理模式是对关键帧(即I帧)执行目标视频编辑操作的模式。I帧(I frame)又称为内部画面(intra picture),它是帧间压缩编码里的重要帧。在编码的过程中,部分视频帧序列压缩成为I帧;部分压缩成P帧;还有部分压缩成B帧。解码时,仅根据I帧的数据就可以重构完整图像,不需要参考其他视频帧的数据。
第二处理模式,是逐帧对待处理视频执行目标视频编辑操作的模式。其中,第二处理模式是本公开提供的一种用于响应用户输入的seek指令一种视频编辑处理模式。
一种可能的实施方式,若指定的视频帧位置是预设的视频帧位置(如待处理视频的起始视频帧的位置),则确定第一线程对应的视频编辑处理模式为第一处理模式。
通过对待处理视频的关键帧执行目标视频编辑操作,若用户预览特效处理结果时,根据关键帧对应的目标视频帧便能够重构完整的图像,因此,能够较好地保证用户预览效果。
另一种可能的实施方式,若指定的视频帧位置为跳转指令指示的视频帧位置,则第一线程对应的视频编辑处理模式为第一处理模式或者第二处理模式。示例性地,若指定的视频帧位置为第一跳转指令指示的视频帧位置,则第一线程对应的视频编辑处理模式可以为第一处理模式或者第二处理模式;若指定的视频帧位置为第一跳转指令指示的视频帧位置,则第一线程对应的视频编辑处理模式可以为第二处理模式。
S202、通过第一线程,按照确定的视频编辑处理模式,从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧。
若确定的视频编辑处理模式为第一处理模式,则应用程序从指定的视频帧位置开始,依次对待处理视频中的各关键帧执行目标视频编辑操作,并存 储每个关键帧执行了目标视频编辑操作得到的目标视频帧。
若确定的视频编辑处理模式为第二处理模式,则应用程序从指定的视频帧位置开始,逐帧对待处理视频执行目标视频编辑操作,并存储每个视频帧执行了目标视频编辑操作得到的目标视频帧。
S203、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例中步骤S203与图1所示实施例中步骤S102类似,可参照图1所示实施例的详细描述,简明起见,此处不再赘述。
本实施例提供的方法,在视频编辑场景中,通过获取并分析指定的视频帧位置,从而确定第一线程对应的视频编辑处理模式,并按照确定的视频编辑处理模式对待处理视频中相应的视频帧执行目标视频编辑操作,以保证正在执行目标视频编辑操作的视频帧位置始终领先于正在展示的目标视频帧在待处理视频帧中的视频帧位置,从而解决了在对待处理视频执行目标视频编辑操作的过程中会发生预览卡顿的问题,保证用户的预览需求;此外,由于执行目标视频编辑操作较为耗时,若算法处理的耗时与渲染线程耗时不匹配,容易出现阻塞渲染线程,因此,本公开通过将执行视频编辑处理与渲染进行解耦,由不同的线程执行,解决了流式框架中由于执行目标视频编辑操作阻塞渲染线程的问题,有利于提高预览效果。
图3为本公开另一实施例提供的视频处理方法的流程图。参照图3所示,本实施例的方法包括:
在图2所示实施例的基础上,图2所示实施例中的S201可通过本实施例中的S301至S306实现。
S301、确定指定的视频帧位置是否存在相应目标视频帧。
若指定的视频帧位置不存在相应的目标视频帧,则执行S302;若指定的视频帧位置存在相应的目标视频帧,则执行S303。
S302、根据指定的视频帧位置,确定第一线程对应的视频编辑处理模式。
其中,确定第一线程对应的视频编辑处理模式之后,执行S305。
S303、确定当前第一线程是否对待处理视频执行目标视频编辑操作。
若当前第一线程未对待处理视频执行目标视频编辑操作,则执行S304和S305。若当前第一线程正在对待处理视频执行目标视频编辑操作,则执行S306。
S304、确定视频编辑处理模式为第二处理模式。
S305、通过第一线程,按照确定的视频编辑处理模式,对待处理视频执行目标视频编辑操作,获得执行了目标视频编辑操作的各目标视频帧。
S306、通过第一线程继续对待处理视频执行目标视频编辑操作。
在实际的场景中,第一线程开始按照一视频编辑处理模式执行目标视频编辑操作,或者,切换视频编辑处理模式时,均需要更新解码器,而更新解码器需要耗费较多的计算资源。因此,本实施例提供的方法通过分析指定的视频帧位置是否存在对应的目标视频帧、以及当前是否存在针对待处理视频执行目标视频编辑操作的进程(即当前第一线程是否正在对待处理视频执行目标视频编辑操作),确定是否切换视频编辑处理模式,从而实现减小频繁更新解码器带来的计算资源浪费,进而提高执行目标视频编辑操作的处理效率。
其中,分析指定的视频帧位置是否存在相应的目标视频帧,是由于在一些情况下,之前可能已经对待处理视频执行过目标视频编辑操作,因此,待处理视频中的部分视频帧或者全部视频帧存在相应的目标视频帧。以目标视频编辑操作为智能抠图为例,假设用户导入待处理视频后,通过操作智能抠图对应的控件,生成视频编辑指令;应用程序根据视频编辑指令对待处理视频进行智能抠图;在智能抠图未全部完成的情况下,用户输入撤销指令,取消对待处理视频进行智能抠图。由于撤销智能抠图时,待处理视频未全部完成智能抠图,因此,有部分视频帧存在相对应的目标视频帧(即抠图结果)。
一种可能的情况下,若指定的视频帧位置不存在相对应的目标视频帧,无论当前第一线程是否正在对待处理视频执行目标视频编辑操作,则需要根据指定的视频帧位置,确定第一线程对应的视频编辑处理模式,第一线程按照确定的视频编辑处理模式从指定的视频帧位置开始,对待处理视频执行目标视频编辑操作。
其中,根据指定的视频帧位置确定第一线程对应的视频编辑处理模式的 实现方式可参照前文所述,此处不再赘述。
另一种可能的情况下,若指定的视频帧位置存在相应的目标视频帧,且当前不存在针对待处理视频执行目标特效的进程,由于视频编码是按照视频帧的先后顺序执行的,因此,可以从指定的视频帧位置的下一帧开始,逐帧确定是否存在相应的目标视频帧,直至确定第一个不存在相应的目标视频帧的视频帧位置,并通过第一线程,按照第一处理模式或者第二处理模式,从第一个不存在相应的目标视频帧的视频帧位置开始,对待处理视频执行目标视频编辑操作。
该场景下,第一线程采用第一处理模式或者第二处理模式,可根据指定的视频帧位置为预设的视频帧位置或者第一跳转指令指示的视频帧位置确定。
另一种可能的情况下,若指定的视频帧位置存在相应的目标视频帧,且假设当前存在针对待处理视频执行目标特效的进程,则可以继续执行该进程,这样则无需更新解码器,减小了特效处理模式切换导致更新解码器带来的资源消耗。例如,上述指定的视频帧位置为第二跳转指令指示的视频帧位置时,可能第一线程正在按照一视频编辑处理模式对待处理视频执行目标视频编辑操作,第二跳转指令指示的视频帧位置存在相应目标视频帧,则无需切换视频编辑处理模式,即不打断第一线程。
其中,若指定的视频帧位置为预设的视频帧位置或者第一跳转指令指示的视频帧位置,确定视频编辑处理模式可参照前文所述的方式实现;若指定的视频帧位置为第二跳转指令指示的视频帧位置,则可以确定视频编辑处理模式为第二处理模式。
S307、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
无论是在S305或者S306的基础上,第一线程对待处理视频执行目标视频编辑操作的过程中,应用程序可以响应播放指令,通过第二线程进行播放。具体地,本实施例中步骤S307与图2所示实施例中步骤S203类似,可参照 图2所示实施例的详细描述,简明起见,此处不再赘述。
本实施例提供的方法,在视频编辑场景中,通过分析指定的视频帧位置是否存在对应的目标视频帧、以及当前是否存在针对待处理视频执行目标视频编辑操作的进程,确定是否切换视频编辑处理模式,不仅能够实现满足用户的预览需求,解决用户预览卡顿的问题,且减小了频繁更新解码器带来的资源浪费,提高了执行目标视频编辑的处理效率。
图4为本公开另一实施例提供的视频处理方法的流程图。参照图4所示,本实施例提供的方法包括:
S401、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
S402、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例步骤S401、S402分别与图1所示实施例S101、S102类似,可参照图1所示实施例的详细描述,简明起见,此处在不再赘述。
S403、响应于所述播放指令,将视频编辑处理模式切换为第三处理模式,通过第一线程,按照第三处理模式对所述待处理视频执行目标视频编辑操作。
第三处理模式为根据预览播放速度确定执行目标特效的视频帧位置的模式。
示例性地,假设待处理视频有100帧,在步骤S401至S402中,已基于确定的视频编辑处理模式已对第1至第10视频帧执行了目标视频编辑操作,获得相应的目标视频帧。当应用程序接收到播放指令时,第二线程开始依次对第1至第10视频帧对应的目标视频帧进行渲染,以按照设定的播放速度播放第1至第10视频帧分别对应的目标视频帧,假设当前播放至第10帧,检测到要播放的下一视频帧(即第11帧)不存在相对应的目标视频帧。若执行 目标视频编辑操作耗时大于播放速度,若按照视频帧的先后顺序对第11视频帧执行目标视频编辑操作,则无法满足播放的需求,必然会导致播放卡顿严重、甚至花屏等问题。因此,将视频编辑处理模式切换为第三处理模式,使得第一线程根据播放速度确定执行目标视频编辑操作的视频帧位置,以保证正在执行目标视频编辑操作的视频帧位置始终领先于正在展示的目标视频帧在待处理视频帧中对应的视频帧位置,通过对领先预览播放进度的视频帧进行目标视频编辑操作,从而满足预览播放的需求,解决预览播放卡顿严重、花屏等问题。
需要说明的是,当视频编辑处理模式为第三处理模式,可以结合预览播放速度、预设的单个视频帧执行目标视频编辑操作的处理时长(即目标视频编辑操作的平均耗时)、当前正在执行目标视频编辑操作的视频帧位置、已执行了目标视频编辑操作的视频帧等等一个或者多个因素,动态地确定针对哪些领先于播放进度的视频帧执行目标视频编辑操作。
继续结合前一段的示例,例如,按照第三处理模式确定通过第一线程对第15视频帧执行目标视频编辑操作,但第15帧已执行了目标特效,则可以按照视频帧的先后顺序针对第16视频帧执行目标视频编辑操作(当然,并不限于第16帧,也可以是与第15帧有一定间隔的后续的其他视频帧,如第18帧),确定第16视频帧不存在相应目标视频帧,则对第16视频帧执行目标视频编辑操作,对第16视频帧执行目标视频编辑操作后,继续按照第三处理模式灵活确定要执行目标视频编辑操作的视频帧位置。
由上述关于第三处理模式的介绍以及示例可知,第三处理模式是响应于播放指令的一种视频编辑处理模式,在播放目标视频帧的过程中,通过第三处理模式能够保证正在执行目标视频编辑操作的视频帧位置始终领先于正在播放的目标视频帧在待处理视频中对应的视频帧位置,即领先于播放进度。
本实施例中,将视频编辑处理模式切换为第三处理模式,可以通过以下一个或多个因素确定切换时机。
1、用户操作的随机性。例如,用户可能连续输入播放指令以及暂停播放指令。2、当前是否存在针对待处理视频执行目标视频编辑操作的进程。3、已执行了目标视频编辑操作得到的目标视频帧。
示例性地说明上述因素的影响:
假设待处理视频有100帧,在步骤S401至S402中,已基于确定的视频编辑处理模式已对第1至第10视频帧执行了目标视频编辑操作,获得第1至第10视频帧分别对应的目标视频帧。
情形(1):假设,应用程序接收到播放指令,更新解码器,立即将视频编辑处理模式切换至第三处理模式;且从第1帧的位置开始播放目标视频帧。接着,应用程序接收到暂停播放指令,由于播放指令和暂停播放指令,这两个指令间隔时间较短,第二线程可能并未渲染至第10帧。而打断当前视频编辑处理模式的进程,即打断第一线程,重新更新解码器,切换视频编辑处理模式会带来了较大的计算资源消耗,这会导致视频处理效率下降。
情形(2):在播放的过程中,实时检测要播放的下一个视频帧位置是否存在相应的目标视频帧,若确定要播放的下一个视频帧位置不存在相应的目标视频帧,再切换视频编辑处理模式,则在播放第1帧至第10帧的过程中,还可以继续按照原先的视频编辑处理模式对待处理视频执行目标视频编辑操作,即不打断当前正在执行目标视频编辑操作的进程,能够提高计算资源利用率。
进一步的,在情形(2)的基础上,若应用程序接收到暂停播放指令时,播放进度未达到目标视频编辑操作的进度。例如,播放至第8帧时应用程序接收到暂停播放指令,在播放第1帧至第8帧的过程中,应用程序通过原先的视频编辑处理模式还对第11帧和第12帧执行了目标视频编辑操作。
在上述情形(2)中,虽然应用程序接收了播放指令和暂停播放指令,由于要播放的这部分存在相应目标视频帧,因此,不用切换第一线程对应的视频编辑处理模式,也能够保证播放的流畅度,这样也能够减小切换视频编辑处理模式却带来了较大的计算资源消耗;另一方面,在播放的过程中第一线程还按照原先的视频编辑处理模式对一部分视频帧执行了目标视频编辑操作,因此,提高了视频处理效率。
基于上述描述,可知,本步骤中,应用程序响应播放指令,可以基于上述一个或者多个因素,确定视频编辑处理模式的切换时机。
本实施例提供的方法,通过第一线程对待处理视频执行目标视频编辑操 作的过程中,接收到播放指令,响应该播放指令,可以将视频编辑处理模式切换为第三处理模式,以对待处理视频的视频帧执行目标视频编辑操作,通过第三处理模式,能够保证在播放过程中正在执行目标视频编辑操作的视频帧位置始终领先于播放进度,从而解决用户预览播放卡顿严重、花屏等问题,提高视频处理效率,有利于能够提升用户体验。
可选地,在图4所示实施例的基础上,还包括:
S404、接收暂停播放指令。
S405、响应于暂停播放指令,第二线程暂停播放,且第一线程继续按照第三处理模式对所述待处理视频的视频帧执行目标视频编辑操作。
具体地,当应用程序接收到暂停播放指令时,第二线程停止渲染目标视频帧,即暂停播放目标视频帧。
由于接收到暂停播放指令时,存在针对待处理视频执行目标特效的进程,因此,可以不打断该进程,即第一线程继续对待处理视频执行目标视频编辑操作,即继续按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。这样能够有效利用电子设备的计算资源执行视频编辑,提高了计算资源利用率,从而提高视频处理效率。
下面通过图5至图7所示实施例,步骤S403对应的几种可能的实现方式进行介绍。
图5为本公开另一实施例提供的视频处理方法的流程图。参照图5所示,本实施例的方法包括:
S501、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
S502、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例步骤S501、S502分别与图4所示实施例S401、S402类似,可 参照图4所示实施例的详细描述,简明起见,此处在不再赘述。
S503、响应于播放指令,从获取预览播放指令时正在执行目标视频编辑操作的视频帧位置开始,按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。
本步骤中,获取播放指令时,第一线程可能正在对某一视频帧执行目标视频编辑操作,这样的情况下,切换视频编辑处理模式的视频帧的位置即为当前正在执行目标视频编辑操作的视频帧位置;应用程序也可能刚刚对某一视频帧执行完目标视频编辑操作,但还未开始对下一视频帧执行目标视频编辑操作,这样的情况下,切换视频编辑处理模式的视频帧的位置即为要执行目标视频编辑操作的下一个视频帧的位置。
其中,应用程序获取播放指令,可以立即将第一线程对应的视频编辑处理模式切换为第三处理模式,并从确定切换视频编辑处理模式的视频帧的位置,按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。
需要说明的是S502和S503的执行顺序不分先后。
本实施例提供的方法,在对待处理视频执行目标视频编辑操作的过程中,获取播放指令;响应于播放指令,通过第一线程,从获取播放指令时正在执行目标视频编辑操作的视频帧位置开始,按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。本实施例提供的实现方式逻辑简单,无需进行复杂的判断。另外,通过第三处理模式,能够保证在播放过程中正在执行目标视频编辑操作的视频帧位置始终领先于播放进度,从而解决用户预览播放卡顿严重、花屏等问题,提高视频处理效率,有利于能够提升用户体验。
可选地,在图5所示实施例的基础上,还包括:
S504、获取暂停播放指令。
S505、响应于暂停播放指令,第二线程暂停播放,且第一线程继续按照第三处理模式对所述待处理视频的视频帧执行目标视频编辑操作。
具体地,当应用程序接收到暂停播放指令时,第二线程停止渲染目标视频帧,即暂停播放目标视频帧。
由于接收到暂停播放指令时,存在针对待处理视频执行目标特效的进程,因此,可以不打断该进程,即第一线程继续对待处理视频执行目标视频编辑 操作,即继续按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。这样能够有效利用电子设备的计算资源执行视频编辑,提高了计算资源利用率,从而提高视频处理效率。
图6为本公开另一实施例提供的视频处理方法的流程图。参照图6所示,本实施例提供的方法包括:
S601、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
S602、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例步骤S601、S602分别与图4所示实施例S401、S402类似,可参照图4所示实施例的详细描述,简明起见,此处在不再赘述。
S603、响应于播放指令,根据预设的单个视频帧执行目标视频编辑操作的处理时长、播放速度以及当前正在执行目标视频编辑操作的视频帧位置,确定第三处理模式对应的切换位置。
示例性地,可以根据当前正在执行目标视频编辑操作的视频帧的位置以及播放速度,确定当前的视频编辑处理模式可以继续执行的最大时长;接着,根据确定的最大时长以及预设的单个视频帧执行目标视频编辑操作的处理时长,确定在最大时长内,采用当前视频编辑处理模式能够处理的视频帧数量S;接着,从当前正在执行目标视频编辑操作的视频帧位置起连续的S个视频帧中,确定视频编辑处理模式的切换位置。其中,视频编辑处理模式的切换位置可以为从当前正在执行目标视频编辑操作的视频帧位置起连续的S个视频帧中任一视频帧。
在一些可能的实现方式中,为减小视频编辑处理模式切换出现错误,如切换时机太晚导致目标视频编辑操作处理进度无法满足渲染进度,则可以设置连续的S个视频帧中,距离当前正在执行目标视频编辑操作的视频帧较近 的视频帧为切换位置。
应理解,确定上述最大时长时,由于当前正在执行目标视频编辑操作的视频帧并未完成目标视频编辑操作,因此,可以根据当前正在执行目标视频编辑操作的视频帧的前一帧计算最大时长,从而提高了计算结果的准确性。
需要说明的是S602和S603的执行顺序不分先后。
S604、通过第一线程,从切换位置开始,按照第三处理模式对待处理视频的视频帧执行所述目标视频编辑操作。
其中,关于第三处理模式的详细内容,可参照图4所示实施例中的描述,简明起见,此处不再赘述。
本实施例提供的方法,在对待处理视频执行目标视频编辑操作的过程中,获取播放指令;响应于播放指令,根据单个视频帧执行目标视频编辑操作所需的时长、播放速度以及当前正在执行目标视频编辑操作的视频帧位置,灵活确定视频编辑处理模式的切换位置;当目标视频编辑操作进度到达切换位置时,将视频编辑处理模式切换为第三处理模式,提高了视频编辑处理模式切换的灵活性。另外,通过第三处理模式,能够保证在播放过程中正在执行目标视频编辑操作的视频帧位置始终领先于播放进度,从而解决用户预览播放卡顿严重、花屏等问题,提高视频处理效率,有利于能够提升用户体验。
可选地,在图6所示实施例的基础上,还包括:
S605、获取暂停播放指令。
S606、响应于暂停播放指令,第二线程暂停播放,且第一线程继续按照第三处理模式对所述待处理视频的视频帧执行目标视频编辑操作。
具体地,当应用程序接收到暂停播放指令时,第二线程停止渲染目标视频帧,即暂停播放目标视频帧。
由于接收到暂停播放指令时,存在针对待处理视频执行目标特效的进程,因此,可以不打断该进程,即第一线程继续对待处理视频执行目标视频编辑操作,即继续按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。这样能够有效利用电子设备的计算资源执行视频编辑,提高了计算资源利用率,从而提高视频处理效率。
图7为本公开另一实施例提供的视频处理方法的流程图。参照图7所示,本实施例提供的方法包括:
S701、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
S702、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例步骤S701、S702分别与图4所示实施例S401、S402类似,可参照图4所示实施例的详细描述,简明起见,此处在不再赘述。
S703、响应于播放指令,实时检测要播放的下一视频帧是否存在相应目标视频帧。
当检测到要播放的下一视频帧不存在相应的目标视频帧时,执行S704。
需要说明的是S702和S703的执行顺序不分先后。
S704、通过所述第一线程,从所述要播放的下一个目标视频帧的位置开始,按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。
需要说明的是,从要播放的下一个目标视频帧的位置开始,按照第三处理模式执行目标视频编辑操作,并不意味着下一个要执行目标视频编辑操作的视频帧即为要播放的下一个视频帧位置对应的视频帧。例如,下一个要播放的视频帧为待处理视频帧的第10帧,但第10帧不存在相应的目标视频帧,则从第10帧开始,第一线程按照第三处理模式执行目标视频编辑操作,但是,确定的要执行目标视频编辑操作的视频帧位置可能为待处理视频中的第13帧、第15帧等等,从而保证正在执行目标视频编辑操作的视频帧位置领先于播放进度。
本实施例中,在对待处理视频执行目标视频编辑操作的过程中,接收播放指令;响应于预览播放指令,在检测到要播放的下一个视频帧不存在相应的目标视频帧时,切换视频编辑处理模式,这样能够尽量减小由于用户频繁 操作而导致需要频繁切换视频编辑处理模式带来的资源消耗。另外,通过播放处理模式,能够保证在播放过程中正在执行目标视频编辑操作的视频帧位置始终领先于播放进度,从而解决用户预览播放卡顿严重、花屏等问题,提高视频处理效率,有利于能够提升用户体验。
可选地,在图7所示实施例的基础上,还包括:
S705、获取暂停播放指令。
S706、响应于暂停播放指令,第二线程暂停播放,且第一线程继续按照第三处理模式对所述待处理视频的视频帧执行目标视频编辑操作。
具体地,当应用程序接收到暂停播放指令时,第二线程停止渲染目标视频帧,即暂停播放目标视频帧。
由于接收到暂停播放指令时,存在针对待处理视频执行目标特效的进程,因此,可以不打断该进程,即第一线程继续对待处理视频执行目标视频编辑操作,即继续按照第三处理模式对待处理视频的视频帧执行目标视频编辑操作。这样能够有效利用电子设备的计算资源执行视频编辑,提高了计算资源利用率,从而提高视频处理效率。
图8为本来另一实施例提供的视频处理方法的流程图。
其中,应用程序支持视频帧位置跳转,因此,应用程序播放各目标视频帧的过程中,还可以获取跳转指令,即在上述图1至图7任一实施例的基础上,还可以执行本实施例的方法。
参照图8所示,本实施例的方法包括:
S801、获取第三跳转指令。
S802、响应于所述第三跳转指令,根据所述第三跳转指令指示的视频帧位置是否存在相应目标视频、以及所述第一线程是否正在对待处理视频执行所述目标视频编辑操作,重新确定视频编辑处理模式。
S803、通过所述第一线程,按照重新确定的视频编辑处理模式,对所述待处理视频的视频帧执行目标视频编辑操作。
且通过第二线程,对所述跳转指令指示的视频帧位置对应的目标视频帧进行渲染,展示第三跳转指令指示的视频帧位置对应的目标视频帧。
一些情况下,在播放的过程中,用户输入第三跳转指令,应用程序可以将播放状态切换为暂停播放;另一些情况下,在播放的过程中,用户输入第三跳转指令,应用程序也可以从第三跳转指令指示的视频帧位置开始播放目标视频帧。
本实施例中,若响应第三跳转指令,应用程序将播放状态切换为暂停播放状态,则可能存在以下几种情形:
情形(a)、第三跳转指令指示的视频帧位置存在相应目标视频帧,且当前存在对待处理视频执行目标视频编辑操作的进程,则不打断当前进程,由第一线程按照原先的视频编辑处理模式继续对待处理视频执行目标视频编辑操作。
由于第三跳转指令指示的视频帧位置存在相应目标视频帧,应用程序可以展示第三跳转指令指示的视频帧位置对应的目标视频帧,由于应用程序可以处于暂停预览播放状态,因此,不会发生卡顿现象。在此基础上,应用程序可以继续执行当前正在进行的进程。
情形(b)、第三跳转指令指示的视频帧位置存在相应目标视频帧,但当前不存在对待处理视频执行目标特效的进程,因此,可以从第三跳转指令指示的视频帧位置的下一帧开始逐帧确定是否存在相应的目标视频帧,直至从确定的第一个不存在相应目标视频帧的视频帧位置开始,按照第二处理模式对待处理视频行目标视频编辑操作。
情形(c)、第三跳转指令指示的视频帧位置不存在相应目标视频帧,则按照第二处理模式,从第三跳转指令指示的视频帧位置开始,对待处理视频的视频帧执行目标视频编辑操作。
若响应第三跳转指令,应用程序从第三跳转指令指示的视频帧位置开始播放目标视频帧,则可以将视频编辑处理模式切换为第三处理模式,保证从第三跳转指令指示的视频帧位置开始播放目标视频帧时,正在执行目标视频编辑操作的视频帧位置始终领先于当前的播放进度,从而实现播放不发生卡顿,满足用户的预览需求。
其中,确定视频编辑处理模式的切换时机的实现方式可以采用前述图5至图7任一实施例所示的方式,可参照前述图5至图7所示实施例的详细描 述,简明起见,此处不再赘述。
本实施例提供的方法,通过分析第三跳转指令指示的视频帧位置是否存在相应的目标视频帧以及当前是否存在正在执行目标视频编辑操作的进程,从而确定是否需要进行视频编辑处理模式的切换。此外,在确定是否切换视频编辑处理模式时,还考虑了播放状态,保证实时的显示效果。
图9为本公开另一实施例提供的视频处理方法的流程图。参照图9所示,本实施例的方法包括:
S901、通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行目标视频编辑操作,获取执行目标视频编辑操作得到的各目标视频帧。
S902、响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
本实施例步骤S901、S902分别与图1所示实施例S101、S102类似,可参照图1所示实施例的详细描述,简明起见,此处在不再赘述。
S903、对所述待处理视频的最后一个视频帧执行所述目标视频编辑操作后,通过第一线程,从待处理视频的起始视频帧的位置开始,按照第四处理模式对待处理视频的视频帧执行目标视频编辑操作。
其中,第四处理模式为依次对待处理视频中不存在相应目标视频帧的视频帧执行目标视频编辑操作的模式。第四处理模式是一种自适应的视频编辑处理模式。
当应用程序按照第一处理模式至第三处理模式中任一种或者多种视频编辑处理模式执行至待处理视频的最后一个视频帧位置,则可以切换至第四处理模式,充分利用计算资源,依次对待处理视频中不存在相应目标视频帧的视频帧执行目标视频编辑操作。
接下来,以待处理视频包括一个视频片段和待处理视频包括多个视频片 段两种情况分别进行介绍:
1、待处理视频包括一个视频片段
假设待处理视频包括一个视频片段，该视频片段记为视频片段A1，当检测到对视频片段A1的最后一个视频帧执行了目标视频编辑操作时，则将视频编辑处理模式切换为第四处理模式，从视频片段A1的起始视频帧位置，按照视频片段A1中视频帧的先后顺序，依次对没有执行目标视频编辑操作的视频帧执行目标视频编辑操作。
示例性地，视频片段A1包括100帧，假设按照第二处理模式对第50帧至第100帧执行目标视频编辑操作；当检测到对第100帧执行完目标视频编辑操作后，则切换至第四处理模式，从第1帧开始，逐帧对第1帧至第49帧执行目标视频编辑操作。
又如，视频片段A1包括100帧，假设按照第一处理模式对第1帧、第5帧、第10帧、第15帧、第20帧进行了目标视频编辑操作；从第30帧开始，切换至第二处理模式，对第30帧至第100帧执行了目标视频编辑操作；当检测到对第100帧执行完目标视频编辑操作后，则切换至第四处理模式，从第1帧开始，依次对未执行目标视频编辑操作的第2帧至第4帧、第6帧至第9帧、第11帧至第14帧、第16帧至第19帧、第21帧至第29帧执行目标视频编辑操作。
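对应上述示例，下面用一段Kotlin示意代码说明第四处理模式在单个视频片段内的补帧顺序，函数名与数据组织方式均为说明用的假设：

// 示意：第四处理模式在片段内依次找出不存在相应目标视频帧的帧
fun framesToFillInClip(totalFrames: Int, editedFrames: Set<Int>): List<Int> =
    (1..totalFrames).filter { it !in editedFrames }

fun main() {
    // 对应正文示例：共100帧，第1、5、10、15、20帧及第30帧至第100帧已处理
    val edited = setOf(1, 5, 10, 15, 20) + (30..100)
    println(framesToFillInClip(100, edited).take(10)) // 输出 [2, 3, 4, 6, 7, 8, 9, 11, 12, 13]
}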
2、待处理视频包括多个视频片段
假设待处理视频包括多个视频片段，多个视频片段分别记为视频片段B1至BN。当检测到对视频片段Bn的最后一个视频帧执行了目标视频编辑操作后，则将视频编辑处理模式切换为第四处理模式，从视频片段Bn的起始视频帧位置，按照视频片段Bn中视频帧的先后顺序，依次对没有执行目标视频编辑操作的视频帧执行目标视频编辑操作。其中，N为大于或等于2的整数。n为大于或等于1，且小于或等于N的整数。
也就是说,待处理视频包括多个视频片段的情况下,当检测到对某个视频片段的最后一个视频帧执行了目标视频编辑操作后,则可以优先在视频片段内执行第四处理模式的特效处理任务。
在实际应用中，每个视频片段对应一个解码器，在视频片段内执行第四处理模式的目标视频编辑操作任务，能够减小由于视频片段切换导致更新解码器带来的计算资源消耗。且在视频片段内执行第四处理模式的目标视频编辑操作任务，能够充分利用计算资源，提高视频处理效率。
示例性地，假设待处理视频包括3个视频片段，分别为视频片段B1、B2、B3。当前正在对视频片段B2按照第二处理模式执行目标视频编辑操作，当检测到对视频片段B2的最后一个视频帧执行了目标视频编辑操作后，则切换至第四处理模式，从视频片段B2的起始视频帧位置开始，依次对视频片段B2中不存在相应目标视频帧的视频帧执行目标视频编辑操作。
当视频片段B2中所有视频帧均存在相应的目标视频帧时，可以将视频片段B2标记为“已完成”。在没有接收到其他操作指令的情况下，可以继续对视频片段B1和视频片段B3执行目标视频编辑操作。
这里以视频片段B2完成了目标视频编辑操作，接下来要处理的下一个视频片段是视频片段B1为例进行说明。
若视频片段B1未执行过目标视频编辑操作，则视频片段B1中的任一视频帧都不存在相应的目标视频帧。这样的情况下，针对视频片段B1，从视频片段B1的起始视频帧位置开始，可以首先采用第一处理模式，依次对视频片段B1的各视频帧执行目标视频编辑操作。在此基础上，当检测到对视频片段B1的最后一个视频帧执行了目标视频编辑操作后，再切换至第四处理模式，依次对视频片段B1中不存在相应目标视频帧的视频帧执行目标视频编辑操作。
若视频片段B1执行过目标视频编辑操作，则视频片段B1中可能仅有部分视频帧存在相应的目标视频帧。这样的情况下，针对视频片段B1，从视频片段B1的起始视频帧位置开始，采用第四处理模式，依次对视频片段B1中不存在相应目标视频帧的视频帧执行目标视频编辑操作。
需要说明的是，若视频片段B1中所有视频帧均存在相应的目标视频帧，则视频片段B1会被标记为“已完成”，根据上述标记，可以确定不用再对视频片段B1执行目标视频编辑操作。
下一个要处理的视频片段是视频片段B3时，其实现方式与下一个要处理的视频片段是视频片段B1的实现方式类似，可参照前述的详细介绍，此处不再赘述。
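为便于理解多片段场景下的调度顺序，下面给出一段Kotlin示意代码：优先在当前片段内补帧，补齐后标记为“已完成”，再选择下一个未完成的片段。类名与调度策略均为说明性假设：

// 示意：第四处理模式下的片段调度
data class Clip(val id: String, val totalFrames: Int, val editedFrames: MutableSet<Int>) {
    val isCompleted: Boolean get() = editedFrames.size == totalFrames // 所有帧均已存在目标视频帧
}

fun pickNextClip(clips: List<Clip>, currentIndex: Int): Clip? {
    // 当前片段未完成则留在片段内补帧，避免切换解码器带来的计算资源消耗
    if (!clips[currentIndex].isCompleted) return clips[currentIndex]
    // 当前片段已完成：选择下一个未标记为“已完成”的片段
    return clips.firstOrNull { !it.isCompleted }
}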
本实施例提供的方法,在对待处理视频的视频帧执行目标视频编辑操作的过程中,当检测到对待处理视频的最后一个视频帧执行了目标视频编辑操作,则将视频编辑处理模式切换为第四处理模式,依次对待处理视频中不存在相应目标视频帧的视频帧执行目标视频编辑操作。
图10为本公开一实施例提供的视频处理装置的框架示意图。参照图10所示，视频处理装置包括三个实体层，分别为：图层（graph层）、交互层和算法处理层。
图层包括:解码线程和渲染线程;其中,解码线程包括:解码控制单元(decoder reader unit);渲染线程包括:视频预处理单元(clip preprocess unit)。
解码控制单元,用于根据用户输入的操作指令,例如,跳转指令、播放指令等,更新算法处理层中正在执行的算法处理任务;以及同步更新渲染位置。
视频预处理单元,用于判断当前视频帧位置是否存在相应的目标视频帧;若存在,则读取目标视频帧,并上传纹理,进行渲染、上屏显示;若不存在,则根据前一个执行了目标视频编辑操作的视频帧,读取其对应的目标视频帧的数据,并上传纹理,进行渲染、上屏显示。
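下面用一段Kotlin示意代码说明视频预处理单元的取帧逻辑：当前位置不存在目标视频帧时，回退使用前一个已执行目标视频编辑操作的帧对应的结果。其中的类名、接口与数据结构均为说明用的假设：

import java.util.TreeMap

// 示意：视频预处理单元按帧位置取用于渲染的目标视频帧数据
class FramePreprocessor(private val targetFrames: TreeMap<Int, ByteArray>) {

    // 返回用于上传纹理、渲染上屏的目标视频帧数据；找不到任何可用帧时返回null
    fun frameForRender(position: Int): ByteArray? {
        // 当前视频帧位置存在相应目标视频帧：直接读取
        targetFrames[position]?.let { return it }
        // 不存在：读取前一个已执行目标视频编辑操作的帧对应的目标视频帧数据
        return targetFrames.floorEntry(position)?.value
    }
}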
交互层包括:任务调度模块和缓存控制模块。
其中,任务调度模块,用于生成视频编辑任务,并向算法处理层下发视频编辑任务。
需要说明的是,任务调度模块,可以基于用户输入的操作指令生成视频编辑任务,也可以基于算法处理层返回的指示切换视频编辑处理模式为自蔓延处理模式的信息生成视频编辑任务。因此,任务调度模块可以看作是通过外部触发策略和自蔓延策略进行任务调度。
在整个视频处理装置的框架中，以算法任务为最小处理单元。每次添加某种剪辑（clip）的视频编辑操作，则会创建一个算法任务，即算法任务（task Param）对象，并返回一个任务标识符（task ID），一个算法任务对象唯一对应一个任务标识符。
算法任务（task Param）对象是一个结构体，算法任务中记录着一种剪辑上算法处理需要的一切参数，并持有一个算法处理实例（task Process wrapper）引用。
算法处理实例(task Process wrapper)持有与之一一对应的Lab算法模型实例,并管理与之对应的内容。算法处理实例维护了一个主文件(MANE File),用于记录该路径文件所有算法结果(即目标视频帧)的相关信息(如指示视频帧位置的时间戳信息,如PTS信息),并提供接口查询。
在实际应用中,Lab算法模型实例数据量较大,例如,智能抠图中Lab算法模型实例约为100兆,为了减少出现内存空间不足(out of memory,OOM)的情况,Lab算法模型实例用后立即销毁。
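为帮助理解上述对象之间的关系，下面给出一段极简的Kotlin示意代码。其中的类名、字段以及记录方式均为按正文归纳的说明性假设，并非本公开实际的数据结构：

// 示意：算法任务及其处理实例的一种极简建模
data class TaskParam(
    val taskId: String,                    // 任务标识符，与算法任务对象一一对应
    val clipPath: String,                  // 该剪辑对应的视频文件路径
    val algorithmParams: Map<String, Any>  // 该剪辑上算法处理所需的参数
)

class TaskProcessWrapper(val taskParam: TaskParam) {
    // 对应正文中主文件（MANE File）的查询功能：记录已得到目标视频帧的时间戳（PTS）
    private val processedPts = sortedSetOf<Long>()

    fun markProcessed(pts: Long) { processedPts.add(pts) }
    fun hasResult(pts: Long): Boolean = pts in processedPts

    // 对应正文：Lab算法模型实例数据量较大，用后立即释放以减少OOM
    fun releaseModel() { /* 释放模型实例占用的内存，具体实现此处省略 */ }
}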
缓存控制模块,用于根据缓存中存储的目标视频帧,向渲染线程返回目标视频帧;当缓存区中不存在相应的目标视频帧时,再向渲染线程返回目标视频帧在外部存储空间中的访问路径;若外部存储空间中不存在目标视频帧时,则向渲染线程返回空(return NULL)。
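下面用一段Kotlin示意代码归纳缓存控制模块的三级返回逻辑：优先返回缓存中的目标视频帧，其次返回外部存储中的访问路径，均不存在时返回空。其中文件命名方式与类型划分均为说明用的假设：

import java.io.File

// 示意：缓存控制模块的取帧结果
sealed class FrameLookupResult {
    data class Cached(val data: ByteArray) : FrameLookupResult()  // 缓存中的目标视频帧数据
    data class OnDisk(val path: String) : FrameLookupResult()     // 外部存储空间中的访问路径
    object Missing : FrameLookupResult()                          // 相当于返回空（return NULL）
}

fun lookupTargetFrame(position: Int, cache: Map<Int, ByteArray>, externalDir: File): FrameLookupResult {
    cache[position]?.let { return FrameLookupResult.Cached(it) }           // 缓存命中
    val file = File(externalDir, "frame_$position.dat")                    // 外部存储中的文件（命名为假设）
    if (file.exists()) return FrameLookupResult.OnDisk(file.absolutePath)  // 返回访问路径
    return FrameLookupResult.Missing
}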
算法层也可以称为算法交互层、中间层等其他名称,本公开对此不作限制。
算法处理层包括:算法任务管理模块,算法任务管理模块对外提供访问接口,例如,算法任务管理模块提供增加、删除、查询特效处理任务以及进度的接口。
算法任务管理模块本身还持有算法任务(task Param)对象、算法处理实例以及算法线程。
其中,算法线程的本质是一个线程对象,是一个从线程池中选择的线程,执行相应消息队列中的特效处理任务。
算法线程执行视频编辑任务时,可以包括以下步骤:
步骤1、更新相应的解码器。
步骤2、校正视频编辑任务对应的总帧数。
步骤3、判断已处理帧数是否等于总帧数,或者,判断目标视频编辑操作的起始位置是否为待处理视频的最后一个视频帧。
需要说明的是，已处理帧数是否等于总帧数即对应图10中条件1，目标视频编辑操作的起始位置是否为待处理视频的最后一个视频帧即对应图10中条件2。
步骤4、当上述条件均不满足时,确定该视频帧是否已经存在相应的目标视频帧;若存在,则跳过对该视频帧执行目标视频编辑操作;若不存在,则执行步骤5。
步骤5、根据相应的视频编辑处理模式确定该视频帧是否为需要丢弃的视频帧。
其中,这里所指的视频编辑处理模式可以为播放处理模式、跳转处理模式、预处理模式以及自蔓延处理模式中的任一种。关于上述各种视频编辑处理模式的含义可以参见前文描述。
需要说明的是,若视频编辑处理模式为播放处理模式,则根据播放速度确定该视频帧是否为需要丢弃的视频帧,若是需要丢弃的视频帧,则无需对其执行目标视频编辑操作;若不是需要丢弃的视频帧,则执行步骤6。
若视频编辑处理模式为跳转处理模式,则每一视频帧均为不需要丢弃的视频帧。
若视频编辑处理模式为预处理模式，则关键帧为不需要丢弃的视频帧，对其执行步骤6；其他帧则为需要丢弃的视频帧。
视频编辑处理模式为自蔓延处理模式时，不存在相应的目标视频帧的视频帧均为不需要丢弃的视频帧；存在相应的目标视频帧的视频帧均为需要丢弃的视频帧。
步骤6、对视频帧执行目标视频编辑操作,获取相应的目标视频帧。
步骤7、存储视频帧对应的目标视频帧。
其中,视频帧对应的目标视频帧可以同时在外部存储空间以及缓存中进行存储。在渲染线程进行渲染时,若缓存中有相应的目标视频帧,则可以直接从缓存中读取数据;若缓存中无相应的目标视频帧,则从外部存储空间读取相应的目标视频帧。通过这样的方式,能够减少数据输入/输出次数,提高视频处理效率。
此外，在存储视频帧对应的目标视频帧时，可以将目标视频帧的相关信息传递给回调业务，以供回调业务调用业务线程执行相关操作。例如，在执行目标视频编辑操作的过程中，可以由回调业务反馈给业务线程当前目标视频编辑操作的执行进度。
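结合上述步骤3至步骤7，下面给出一段Kotlin示意代码，归纳算法线程处理单个视频帧的主干流程以及各视频编辑处理模式下的丢帧判断。其中的枚举、丢帧规则细节与存储接口均为按正文整理的说明性假设：

// 示意：四种视频编辑处理模式（预处理/跳转/播放/自蔓延）
enum class EditMode { PREPROCESS, SEEK, PLAYBACK, SELF_SPREAD }

// 步骤5：按当前模式判断该帧是否为需要丢弃的视频帧
fun shouldDropFrame(mode: EditMode, frameIndex: Int, keyFrames: Set<Int>,
                    hasTarget: Boolean, playbackStep: Int): Boolean = when (mode) {
    EditMode.PLAYBACK    -> frameIndex % maxOf(1, playbackStep) != 0 // 播放处理模式：按播放速度丢帧（步长规则为假设）
    EditMode.SEEK        -> false                                    // 跳转处理模式：每帧均不丢弃
    EditMode.PREPROCESS  -> frameIndex !in keyFrames                 // 预处理模式：仅保留关键帧
    EditMode.SELF_SPREAD -> hasTarget                                // 自蔓延处理模式：已有目标帧的帧丢弃
}

fun processOneFrame(
    mode: EditMode, frameIndex: Int, keyFrames: Set<Int>, playbackStep: Int,
    targetFrames: MutableMap<Int, ByteArray>,
    applyEdit: (Int) -> ByteArray,   // 对视频帧执行目标视频编辑操作
    onProgress: (Int) -> Unit        // 通过回调业务向业务线程反馈执行进度
) {
    val hasTarget = frameIndex in targetFrames
    if (hasTarget) return                                                             // 步骤4：已存在目标视频帧则跳过
    if (shouldDropFrame(mode, frameIndex, keyFrames, hasTarget, playbackStep)) return // 步骤5
    val result = applyEdit(frameIndex)                                                // 步骤6：执行目标视频编辑操作
    targetFrames[frameIndex] = result                                                 // 步骤7：存入缓存（外部存储的写入此处省略）
    onProgress(frameIndex)
}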
采用上述图10所示的框架，算法线程可以基于多种视频编辑处理模式对待处理视频执行目标视频编辑操作，保证算法线程的执行进度始终领先于渲染线程的进度，从而解决了用户预览卡顿的问题。此外，算法线程接收到对待处理视频的视频帧执行目标视频编辑操作的任务后可以独立执行，且不会对上层的解码线程与渲染线程产生影响，实现了解码线程、算法线程以及渲染线程之间的并行处理，解决了算法线程由于处理耗时导致阻塞渲染线程的问题，从而解决了用户预览卡顿的问题。
图11为本公开一实施例提供的图10所示实施例中各线程的生命周期示意图。结合图11所示,其中包括:抽帧线程、渲染线程、业务线程以及算法线程。
其中,抽帧线程用于执行同步抽帧;渲染线程,用于渲染目标视频帧,以供播放;业务线程用于控制算法线程对待处理视频的每个视频帧按照确定的视频编辑处理模式执行目标视频编辑操作以及用于控制播放的暂停和开始;算法线程用于根据业务线程的指示对相应的视频帧执行目标视频编辑操作。
在接收到针对待处理视频的每一个视频帧执行目标视频编辑操作的视频编辑指令时,首先,业务线程设置视频编辑处理模式为预处理模式或者跳转处理模式,且控制算法线程开启,并按照设置的视频编辑处理模式运行算法,以对待处理视频的视频帧执行目标视频编辑操作。参照图11所示,在此过程中,需要依次缓存相应的目标视频帧。
当需要播放时，业务线程首先暂停算法线程，并设置视频编辑处理模式为播放处理模式，控制算法线程按照播放处理模式运行算法，以对待处理视频的视频帧执行目标视频编辑操作。参照图11所示，在此过程中，仍需要依次缓存相应的目标视频帧。同步地，业务线程控制渲染线程从缓存中读取相应的目标视频帧进行渲染显示，以供用户预览。
当暂停预览播放时,业务线程控制渲染线程暂停播放。
当跳转视频帧位置时，业务线程首先暂停算法线程，并设置视频编辑处理模式为跳转处理模式，控制算法线程按照跳转处理模式运行算法，并从最终跳转的视频帧位置开始，对待处理视频的视频帧执行目标视频编辑操作。参照图11所示，在此过程中，仍需要同步地从最终跳转的视频帧位置开始，缓存相应的目标视频帧。同步地，业务线程控制渲染线程展示相应的视频帧位置的目标视频帧。
当对待处理视频的最后一个视频帧执行了目标视频编辑操作后,业务线程设置视频编辑处理模式为自蔓延处理模式,控制算法线程从待处理视频的第一个视频帧位置开始,依次对没有执行目标视频编辑操作的视频帧执行目标视频编辑操作。参照图11所示,在此过程中,仍需要依次缓存相应的目标视频帧。
当取消或者停止对待处理视频执行目标视频编辑操作时,业务线程控制算法线程停止运行算法,并回收算法线程占用的资源。之后,回收渲染线程和算法线程。
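结合图11所示的生命周期，下面给出一段Kotlin示意代码，归纳业务线程在不同事件下为算法线程设置的视频编辑处理模式。事件与模式的对应关系按上文描述整理，仅为说明性示意：

// 示意：业务线程根据事件切换算法线程的视频编辑处理模式
enum class AlgoMode { PREPROCESS, SEEK, PLAYBACK, SELF_SPREAD }
enum class LifecycleEvent { EDIT_COMMAND, PLAY, PAUSE, SEEK, LAST_FRAME_DONE, CANCEL }

fun onLifecycleEvent(event: LifecycleEvent, current: AlgoMode?): AlgoMode? = when (event) {
    LifecycleEvent.EDIT_COMMAND    -> AlgoMode.PREPROCESS   // 设置为预处理模式（或跳转处理模式）
    LifecycleEvent.PLAY            -> AlgoMode.PLAYBACK     // 暂停算法线程后切换为播放处理模式
    LifecycleEvent.PAUSE           -> current               // 仅暂停渲染线程，算法线程模式保持不变
    LifecycleEvent.SEEK            -> AlgoMode.SEEK         // 切换为跳转处理模式
    LifecycleEvent.LAST_FRAME_DONE -> AlgoMode.SELF_SPREAD  // 切换为自蔓延处理模式
    LifecycleEvent.CANCEL          -> null                  // 停止运行算法并回收相应资源
}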
结合图10以及图11所示,通过图10所示的框架,结合图11所示的线程之间的交互,可以实现对待处理视频的每一个视频帧执行目标视频编辑操作,且有效解决了用户预览播放卡顿的问题。
图12为本公开一实施例提供的视频处理装置的结构示意图。参照图12所示,本实施例提供的视频处理装置1200包括:
第一处理模块1210,用于通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧。
第二处理模块1220,用于响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
作为一种可能的实施方式,视频处理装置1200还可以包括:显示模块1240。显示模块1240,用于展示各目标视频帧。
作为一种可能的实施方式，所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的；其中，所述视频编辑指令用于指示对所述待处理视频执行所述目标视频编辑操作；获取所述视频编辑指令时定位的视频帧位置为预设的视频帧位置或者第一跳转指令指定的视频帧位置。
相应地,所述视频处理装置1200还可以包括:获取模块1230。
获取模块1230,用于获取视频编辑指令,以及获取第一跳转指令。
作为一种可能的实施方式,所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的,且所述第二跳转指令为用于对所述待处理视频执行所述目标视频编辑操作的视频编辑指令之后的指令。
相应地,获取模块1230,还用于获取第二跳转指令。
作为一种可能的实施方式,若所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的;第一处理模块1210通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧之前,还用于根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式;其中,所述视频编辑处理模式为第一处理模式或者第二处理模式,所述第一处理模式为对关键帧执行所述目标视频编辑操作的模式,所述第二处理模式为逐帧执行所述目标视频编辑操作的模式。
作为一种可能的实施方式，第一处理模块1210，具体用于若所述指定的视频帧位置为所述预设的视频帧位置，则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式；若所述指定的视频帧位置为第一跳转指令指定的视频帧位置，则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式或者所述第二处理模式。
作为一种可能的实施方式,第一处理模块1210,具体用于确定所述指定的视频帧位置是否存在相应目标视频帧;若所述指定的视频帧位置不存在相应目标视频帧,则根据所述指定的视频帧位置为所述预设的视频帧位置或者所述第一跳转指令指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式。
作为一种可能的实施方式，第一处理模块1210，具体用于若所述指定的视频帧位置存在相应目标视频帧，则确定所述第一线程对应的视频编辑处理模式为所述第二处理模式。
相应地,第二处理模块1220,具体用于从所述指定的视频帧位置开始向后逐帧确定是否存在相应目标视频帧,直至确定第一个不存在相应目标视频帧的视频帧位置;通过所述第一线程,按照所述第二处理模式,从所述第一个不存在相应目标视频帧的视频帧位置开始,对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,若所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的;第一处理模块1210,通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧之前,还用于在确定所述指定的视频帧位置不存在相应目标视频帧时,则确定所述第一线程对应的视频编辑处理模式为第二处理模式。
作为一种可能的实施方式,若所述指定的视频帧位置存在相应目标视频帧,且当前所述第一线程正在对所述待处理视频执行所述目标视频编辑操作,则所述第一处理模块1210不打断所述第一线程,使得所述第一线程继续对所述待处理视频执行所述目标视频编辑操作。
作为一种可能的实施方式,第一处理模块1210,还用于响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,通过所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;其中,所述第三处理模式为根据播放速度确定执行所述目标视频编辑操作的视频帧位置的模式。
作为一种可能的实施方式,第一处理模块1210,具体用于响应于所述播放指令,实时检测是否存在要播放的下一个目标视频帧;当检测到不存在要播放的下一个目标视频帧时,通过所述第一线程,从所述要播放的下一个目标视频帧的位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式，第一处理模块1210，具体用于响应于所述播放指令，通过所述第一线程，从获取所述播放指令时正在执行所述目标视频编辑操作的视频帧位置开始，按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,第一处理模块1210,具体用于响应于所述播放指令,根据预设的单个视频帧执行所述目标视频编辑操作的处理时长、播放速度以及当前正在执行所述目标视频编辑操作的视频帧位置,确定所述第三处理模式对应的切换位置;通过所述第一线程,从所述切换位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
作为一种可能的实施方式,第一处理模块1210,还用于响应于所述暂停播放指令,暂停所述第二线程对所述目标视频帧进行渲染;且所述第一线程继续按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
相应地,获取模块1230,还用于获取暂停播放指令。
作为一种可能的实施方式,第一处理模块1210,还用于对所述待处理视频的最后一个视频帧执行所述目标视频编辑操作后,通过所述第一线程,从所述待处理视频的起始视频帧的位置开始,按照第四处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;其中,所述第四处理模式为依次对所述待处理视频中不存在相应目标视频帧的视频帧执行所述目标视频编辑操作的模式。
本实施例提供的视频处理装置可以用于执行前述任一方法实施例的技术方案,其实现原理以及技术效果类似,可参照前述方法实施例的详细描述,简明起见,此处不再赘述。
图13为本公开一实施例提供的电子设备的结构示意图。参照图13所示,本实施例提供的电子设备1300包括:存储器1301和处理器1302。
其中,存储器1301可以是独立的物理单元,与处理器1302可以通过总线1303连接。存储器1301、处理器1302也可以集成在一起,通过硬件实现等。
存储器1301用于存储程序指令，处理器1302调用该程序指令，执行以上任一方法实施例的操作。
可选地,当上述实施例的方法中的部分或全部通过软件实现时,上述电子设备1300也可以只包括处理器1302。用于存储程序的存储器1301位于电子设备1300之外,处理器1302通过电路/电线与存储器连接,用于读取并执行存储器中存储的程序。
处理器1302可以是中央处理器(central processing unit,CPU),网络处理器(network processor,NP)或者CPU和NP的组合。
处理器1302还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
存储器1301可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储器也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD);存储器还可以包括上述种类的存储器的组合。
本公开还提供一种可读存储介质,包括:计算机程序指令;计算机程序指令被电子设备的至少一个处理器执行时,实现上述任一方法实施例所示的视频处理方法。
本公开还提供一种计算机程序产品,所述计算机程序产品被计算机执行时,使得所述计算机实现上述任一方法实施例所示的视频处理方法。
需要说明的是，在本文中，诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文所述的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (19)

  1. 一种视频处理方法,包括:
    通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧;
    响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
  2. 根据权利要求1所述的方法,其中,所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的;其中,所述视频编辑指令用于指示对所述待处理视频执行所述目标视频编辑操作;获取所述视频编辑指令时定位的视频帧位置为预设的视频帧位置或者第一跳转指令指定的视频帧位置。
  3. 根据权利要求1所述的方法,其中,所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的,且所述第二跳转指令为用于对所述待处理视频执行所述目标视频编辑操作的视频编辑指令之后的指令。
  4. 根据权利要求2至3任一项所述的方法,其中,若所述指定的视频帧位置是根据获取视频编辑指令时定位的视频帧位置确定的;所述通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧之前,所述方法还包括:
    根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式;其中,所述视频编辑处理模式为第一处理模式或者第二处理模式,所述第一处理模式为对关键帧执行所述目标视频编辑操作的模式,所述第二处理模式为逐帧执行所述目标视频编辑操作的模式。
  5. 根据权利要求4所述的方法，其中，所述根据所述指定的视频帧位置，确定所述第一线程对应的视频编辑处理模式，包括：
    若所述指定的视频帧位置为所述预设的视频帧位置,则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式;
    若所述指定的视频帧位置为第一跳转指令指定的视频帧位置，则确定所述第一线程对应的视频编辑处理模式为所述第一处理模式或者所述第二处理模式。
  6. 根据权利要求4所述的方法,其中,所述根据所述指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式,包括:
    确定所述指定的视频帧位置是否存在相应目标视频帧;
    若所述指定的视频帧位置不存在相应目标视频帧,则根据所述指定的视频帧位置为所述预设的视频帧位置或者所述第一跳转指令指定的视频帧位置,确定所述第一线程对应的视频编辑处理模式。
  7. 根据权利要求6所述的方法,其中,所述方法还包括:
    若所述指定的视频帧位置存在相应目标视频帧,则确定所述第一线程对应的视频编辑处理模式为所述第二处理模式;
    所述通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧,包括:
    从所述指定的视频帧位置开始向后逐帧确定是否存在相应目标视频帧,直至确定第一个不存在相应目标视频帧的视频帧位置;
    通过所述第一线程,按照所述第二处理模式,从所述第一个不存在相应目标视频帧的视频帧位置开始,对所述待处理视频的视频帧执行所述目标视频编辑操作。
  8. 根据权利要求3至7任一项所述的方法，其中，若所述指定的视频帧位置是根据第二跳转指令指定的视频帧位置确定的；所述通过第一线程从指定的视频帧位置开始，对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作，获取执行所述目标视频编辑操作得到的各目标视频帧之前，所述方法还包括：
    若所述指定的视频帧位置不存在相应目标视频帧,则确定所述第一线程对应的视频编辑处理模式为第二处理模式。
  9. 根据权利要求8所述的方法,其中,所述方法还包括:
    若所述指定的视频帧位置存在相应目标视频帧,且当前所述第一线程正在对所述待处理视频执行所述目标视频编辑操作,则不打断所述第一线程,使得所述第一线程继续对所述待处理视频执行所述目标视频编辑操作。
  10. 根据权利要求1至9任一项所述的方法,其中,所述方法还包括:
    响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,通过所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;其中,所述第三处理模式为根据播放速度确定执行所述目标视频编辑操作的视频帧位置的模式。
  11. 根据权利要求10所述的方法,其中,所述响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,由所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作,包括:
    响应于所述播放指令,实时检测是否存在要播放的下一个目标视频帧;
    当检测到不存在要播放的下一个目标视频帧时,通过所述第一线程,从所述要播放的下一个目标视频帧的位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
  12. 根据权利要求10至11任一项所述的方法,其中,所述响应于所述播放指令,将所述视频编辑处理模式切换为第三处理模式,由所述第一线程,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作,包括:
    响应于所述播放指令,通过所述第一线程,从获取所述播放指令时正在执行所述目标视频编辑操作的视频帧位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
  13. 根据权利要求10至12任一项所述的方法，其中，所述响应于所述播放指令，将所述视频编辑处理模式切换为第三处理模式，由所述第一线程，按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作，包括：
    响应于所述播放指令,根据预设的单个视频帧执行所述目标视频编辑操作的处理时长、播放速度以及当前正在执行所述目标视频编辑操作的视频帧位置,确定所述第三处理模式对应的切换位置;
    通过所述第一线程,从所述切换位置开始,按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
  14. 根据权利要求10至13任一项所述的方法,其中,所述方法还包括:
    获取暂停播放指令;
    响应于所述暂停播放指令,暂停所述第二线程对所述目标视频帧进行渲染;且所述第一线程继续按照所述第三处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作。
  15. 根据权利要求1至14任一项所述的方法,其中,所述方法还包括:
    对所述待处理视频的最后一个视频帧执行所述目标视频编辑操作后,通过所述第一线程,从所述待处理视频的起始视频帧的位置开始,按照第四处理模式对所述待处理视频的视频帧执行所述目标视频编辑操作;
    其中,所述第四处理模式为依次对所述待处理视频中不存在相应目标视频帧的视频帧执行所述目标视频编辑操作的模式。
  16. 一种视频处理装置,其包括:
    第一处理模块,被配置为通过第一线程从指定的视频帧位置开始,对待处理视频中所述指定的视频帧位置的视频帧执行目标视频编辑操作以及对所述指定的视频帧位置之后的各视频帧预先执行所述目标视频编辑操作,获取执行所述目标视频编辑操作得到的各目标视频帧;
    第二处理模块,被配置为响应于播放指令,通过第二线程对各所述目标视频帧进行渲染,以展示各所述目标视频帧;其中,正在执行所述目标视频编辑操作的视频帧位置领先于正在展示的所述目标视频帧在所述待处理视频中对应的视频帧位置。
  17. 一种电子设备,包括:存储器和处理器;
    所述存储器被配置为存储计算机程序指令;
    所述处理器被配置为执行所述计算机程序指令，使得所述电子设备实现如权利要求1至15任一项所述的视频处理方法。
  18. 一种可读存储介质,包括:计算机程序指令;
    所述计算机程序指令被电子设备的至少一个处理器执行时,使得所述电子设备实现如权利要求1至15任一项所述的视频处理方法。
  19. 一种计算机程序产品,当所述计算机程序产品被计算机执行时,使得所述计算机实现如权利要求1至15任一项所述的视频处理方法。
PCT/CN2022/129165 2021-11-15 2022-11-02 视频处理方法、装置、电子设备及可读存储介质 WO2023083064A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111347351.6 2021-11-15
CN202111347351.6A CN116132719A (zh) 2021-11-15 2021-11-15 视频处理方法、装置、电子设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2023083064A1 true WO2023083064A1 (zh) 2023-05-19

Family

ID=86310487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129165 WO2023083064A1 (zh) 2021-11-15 2022-11-02 视频处理方法、装置、电子设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN116132719A (zh)
WO (1) WO2023083064A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104994406A (zh) * 2015-04-17 2015-10-21 新奥特(北京)视频技术有限公司 一种基于Silverlight插件的视频编辑方法和装置
CN105933773A (zh) * 2016-05-12 2016-09-07 青岛海信传媒网络技术有限公司 视频编辑方法及系统
US20170200473A1 (en) * 2016-01-08 2017-07-13 Gopro, Inc. Digital media editing
CN108040265A (zh) * 2017-12-13 2018-05-15 北京奇虎科技有限公司 一种对视频进行处理的方法和装置
CN108062760A (zh) * 2017-12-08 2018-05-22 广州市百果园信息技术有限公司 视频编辑方法、装置及智能移动终端
CN111459591A (zh) * 2020-03-31 2020-07-28 杭州海康威视数字技术股份有限公司 待渲染对象处理方法、装置和终端
CN111641838A (zh) * 2020-05-13 2020-09-08 深圳市商汤科技有限公司 一种浏览器视频播放方法、装置以及计算机存储介质
CN111935504A (zh) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 视频制作方法、装置、设备及存储介质
CN112954459A (zh) * 2021-03-04 2021-06-11 网易(杭州)网络有限公司 一种视频数据的处理方法和装置
CN112995746A (zh) * 2019-12-18 2021-06-18 华为技术有限公司 视频处理方法、装置与终端设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778046A (zh) * 2023-08-28 2023-09-19 乐元素科技(北京)股份有限公司 基于多线程的头发模型处理方法、装置、设备及介质
CN116778046B (zh) * 2023-08-28 2023-10-27 乐元素科技(北京)股份有限公司 基于多线程的头发模型处理方法、装置、设备及介质

Also Published As

Publication number Publication date
CN116132719A (zh) 2023-05-16

Similar Documents

Publication Publication Date Title
US8787726B2 (en) Streaming video navigation systems and methods
US8811797B2 (en) Switching between time order and popularity order sending of video segments
US8036474B2 (en) Information processing apparatus enabling an efficient parallel processing
JP6499324B2 (ja) ビデオを再生するための方法、クライアント及びコンピュータ記憶媒体
WO2014134912A1 (zh) 一种绘图方法、装置及终端
WO2019170073A1 (zh) 媒体播放
US10838691B2 (en) Method and apparatus of audio/video switching
JP7312852B2 (ja) ビデオ処理方法及び装置、端末、及びコンピュータプログラム
CN111163345A (zh) 一种图像渲染方法及装置
CN109840879B (zh) 图像渲染方法、装置、计算机存储介质及终端
CN110418186A (zh) 音视频播放方法、装置、计算机设备和存储介质
KR102147633B1 (ko) 가변 길이 코딩된 파일을 디코딩하는 방법 및 장치
US20100247066A1 (en) Method and apparatus for reverse playback of encoded multimedia content
WO2023083064A1 (zh) 视频处理方法、装置、电子设备及可读存储介质
WO2017202175A1 (zh) 一种视频压缩方法、装置及电子设备
US8391688B2 (en) Smooth rewind media playback
CN113923472B (zh) 视频内容分析方法、装置、电子设备及存储介质
CN111263211B (zh) 一种缓存视频数据的方法及终端设备
WO2018201993A1 (zh) 图像绘制方法、终端及存储介质
CN112911390B (zh) 一种视频数据的播放方法及终端设备
CN117557701A (zh) 一种图像渲染方法和电子设备
WO2022120828A1 (zh) 视频抽帧方法、设备及存储介质
CN115209216A (zh) 视频的播放方法、装置及电子设备
CN111467797B (zh) 游戏数据处理方法、装置、计算机存储介质与电子设备
US20130278775A1 (en) Multiple Stream Processing for Video Analytics and Encoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22891860

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022891860

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022891860

Country of ref document: EP

Effective date: 20240515