CN113709560B - Video editing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN113709560B
Application number: CN202110345929.8A
Authority: CN (China)
Prior art keywords: video, editing, image frame, quality, determining
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113709560A
Inventor: Zhu Chenchen (祝晨晨)
Current Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.; published as CN113709560A, granted as CN113709560B.

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The embodiments of this application disclose a video editing method, apparatus, device, and storage medium, belonging to the technical field of video processing. The method comprises the following steps: extracting a plurality of image frames from a material video; determining a quality score for each image frame; arranging the image frames in the time domain and interpolating the quality scores to obtain a quality score curve of the material video; and clipping a target video from the material video based on the quality score curve and a video clipping strategy. The method automatically captures high-quality video clips from the material video, improving both the efficiency and the quality of video editing.

Description

Video editing method, device, equipment and storage medium
Technical Field
Embodiments of this application relate to the technical field of video processing, and in particular to a video editing method, apparatus, device, and storage medium.
Background
Currently, some video processing tools provide video editing functionality, i.e., the ability to capture video clips from a material video.
In the related art, to cut a higher-quality segment from a material video, the user must first play and view the material video, manually select the start and end timestamps of the higher-quality section based on a subjective quality evaluation of the content, and then cut out the video segment between those timestamps.
Because this editing mode depends entirely on the user's subjective quality evaluation and manual operation, the quality of the captured video clips is uncontrollable and the process is inefficient.
Disclosure of Invention
The embodiments of this application provide a video editing method, apparatus, device, and storage medium, which can efficiently capture high-quality video clips and thereby improve the efficiency and quality of video editing. The technical scheme is as follows:
according to an aspect of an embodiment of the present application, there is provided a video editing method, the method including:
extracting a plurality of image frames from the material video;
determining a quality score for each of the image frames;
arranging the image frames in a time domain, and carrying out interpolation processing on the quality scores to obtain a quality score curve of the material video;
and clipping the target video from the material video based on the quality score curve and the video clipping strategy.
According to an aspect of an embodiment of the present application, there is provided a video editing apparatus, the apparatus including:
the video frame extracting module is used for extracting a plurality of image frames from the material video;
a quality scoring module for determining a quality score for each of the image frames;
The curve construction module is used for arranging the image frames in a time domain, and carrying out interpolation processing on the quality scores to obtain a quality score curve of the material video;
and the video clipping module is used for clipping the target video from the material video based on the quality score curve and the video clipping strategy.
According to an aspect of the embodiments of the present application, there is provided a computer apparatus, including a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the video clipping method described above.
According to an aspect of embodiments of the present application, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the video clipping method described above.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the video clip method described above.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the quality score curve of the material video is generated by carrying out quality scoring on the image frames in the material video, and then the target video is obtained by clipping from the material video based on the quality score curve and a video clipping strategy, so that high-quality video clips are automatically intercepted from the material video, and the efficiency and quality effect of video clipping are improved.
In addition, when generating the quality score curve of the material video, on one hand, only some of the image frames in the material video need to be quality-scored thanks to frame extraction, rather than every frame, which reduces the amount of computation and thus the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and their quality scores are interpolated to obtain the quality score curve of the material video, which accurately depicts how quality varies across all image frames of the material video, improving the quality of the clipped target video.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of this application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment for an embodiment provided herein;
FIG. 2 is a diagram of a related product interface for video clip functionality provided by one embodiment of the present application;
FIG. 3 is a flow chart of a video editing method provided by one embodiment of the present application;
FIG. 4 is a flow chart of a video editing method provided in another embodiment of the present application;
FIG. 5 is a schematic diagram of the quality score curve construction process;
FIG. 6 is a schematic diagram of the water level descent method;
FIG. 7 is a schematic diagram of the sliding window method;
FIG. 8 is a schematic diagram of the overall flow of the solution provided by one embodiment of the present application;
FIG. 9 is a block diagram of a video editing apparatus provided in one embodiment of the present application;
FIG. 10 is a block diagram of a video editing apparatus provided in another embodiment of the present application;
fig. 11 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment of an embodiment of the present application is shown. The solution implementation environment may be implemented as a video processing system, and the solution implementation environment may include: a terminal 10 and a server 20.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a wearable device, or a PC (Personal Computer). A client of a target application runs in the terminal 10; the target application may be any application having video processing functions (including a video editing function). For example, the target application may be a short video application, a social application, or an application dedicated to video processing, which is not limited in this application.
The server 20 may be a background server of the target application program, for providing background services to clients of the target application program. The server 20 may be a server, a server cluster comprising a plurality of servers, or a cloud computing service center. The server 20 may communicate with the terminal 10 through a network.
The video editing method provided in the embodiments of this application may be executed by the terminal 10 (for example, by the client of the target application running in the terminal 10), by the server 20, or by the terminal 10 and the server 20 in interactive cooperation. In the following embodiments, for convenience of explanation, the execution subject of each step is simply described as a computer device unless otherwise specified. The computer device may be any electronic device having data processing and storage capabilities, including but not limited to the terminal 10 and the server 20 described above.
Illustratively, FIG. 2 shows a product interface diagram of the video editing functionality provided herein. The user opens the video clip interface 21 of the target application, in which various editing options are provided, such as a cut option 22 and split, transition, speed change, order, and delete options. The cut option 22 corresponds to the AI-based automatic editing function provided herein, e.g., automatically clipping out the highlight segments of the material video. After the user taps the cut option 22, the client may display two options: a fixed duration of 30 seconds, meaning the clipped target video will be 30 seconds long, and a recommended duration, meaning the clipped target video will have a duration the client determines based on factors such as the duration and quality of the material video. Assume the user selects the recommended duration, triggering the client to clip the video automatically. Comparing the left and right diagrams in FIG. 2, the left diagram shows the material video 23 to be clipped, which is 21 seconds long, and the right diagram shows the clipped target video 24, which is 10 seconds long. The target video is spliced from 3 video clips, which may be, for example, the 3 highest-quality segments of the material video.
Referring to fig. 3, a flowchart of a video editing method according to an embodiment of the present application is shown. The method may include the following steps (310-340):
step 310 extracts a plurality of image frames from the material video.
The material video is the video to be clipped. In one clipping process (i.e., one pass of generating a target video from material video), the number of material videos used may be one or more (i.e., two or above), which is not limited in this application.
In this embodiment, to reduce the amount of computation, it is not necessary to quality-score every image frame in the material video. Instead, frame extraction is used: a plurality of image frames are extracted from the material video, and only the extracted image frames are quality-scored.
Optionally, the plurality of image frames are sampled from the material video at equal time intervals. For example, one image frame is extracted every 125 ms.
At step 320, a quality score for each image frame is determined.
Each extracted image frame is quality-scored to obtain its quality score, which characterizes the image quality of the frame. The influencing factors of the quality score may be preset according to actual requirements, for example, but not limited to, exposure, definition, color, texture, noise, focus, and artifacts.
Optionally, the image frames are quality-scored by pre-trained AI (Artificial Intelligence) models. In one example, the quality score of an image frame is output directly by a single AI model, which may extract image features of the frame in one or more scoring dimensions and determine the quality score based on those features. In another example, several AI models output single scores for the image frame in several scoring dimensions, and the single scores are then summed directly or weighted-summed to obtain the quality score of the image frame. Optionally, each scoring dimension corresponds to one influencing factor.
And 330, arranging the image frames in the time domain, and interpolating the quality scores to obtain a quality score curve of the material video.
After the quality score of each image frame is obtained, a two-dimensional rectangular coordinate system is constructed with time as the horizontal axis and quality score as the vertical axis, and the position point corresponding to each image frame is marked in this coordinate system. Since the quality scores of the image frames are discrete in time, to generate the quality score curve of the material video, interpolation may be performed on the quality scores, and the quality score curve is then fitted based on the interpolated quality scores. For example, if interpolation is performed in units of 0.1 s, the quality score curve achieves a time precision of 0.1 s.
In the embodiments of this application, the quality score curve refers to a function curve that describes the quality of the whole material video, obtained by processing the quality scores at a number of discrete time points in the material video.
And step 340, clipping the target video from the material video based on the quality score curve and the video clipping strategy.
The video clip policy may be user-defined or set by default, which is not limited in this application. The video clip policy defines how the material video is clipped, e.g., it defines video clip parameters such as the number of video clips captured from the material video, the duration of a single video clip, the total duration of all video clips, and the clip mode.
After the quality score curve of the material video is obtained, the material video can be clipped according to a set video clipping strategy, and finally the target video is obtained.
In summary, according to the technical scheme provided by the embodiment of the application, the quality score curve of the material video is generated by performing quality scoring on the image frames in the material video, and then the target video is clipped from the material video based on the quality score curve and the video clipping strategy, so that high-quality video clips are automatically intercepted from the material video, and the efficiency and quality effect of video clipping are improved.
In addition, when generating the quality score curve of the material video, on one hand, only some of the image frames in the material video need to be quality-scored thanks to frame extraction, rather than every frame, which reduces the amount of computation and thus the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and their quality scores are interpolated to obtain the quality score curve of the material video, which accurately depicts how quality varies across all image frames of the material video, improving the quality of the clipped target video.
Referring to fig. 4, a flowchart of a video editing method according to another embodiment of the present application is shown. The method may include the following steps (410-460):
step 410 extracts a plurality of image frames from a material video.
Optionally, for the material video selected by the user, the image frames contained in the material video are arranged in timeline order and one image frame is extracted at equal time intervals, e.g., one frame every 100 ms; each extracted image frame is sent to step 420 for quality scoring.
Optionally, to ensure frame extraction efficiency, techniques such as decoder concurrency, image size compression, and image caching may be used. Decoder concurrency decodes the material video in parallel to obtain image frames; image size compression compresses the decoded frames to speed up subsequent processing; and image caching uses a multi-level caching technique to cache decoded frames. Reducing the time spent on decoding and caching reduces the time needed for frame extraction and thus improves its efficiency.
Optionally, a device performance level is determined based on the device model, and a frame extraction configuration policy is determined according to the device performance level; the frame extraction configuration policy indicates the decoding policy and/or caching policy used in the frame extraction process, and the material video is frame-extracted according to this policy to obtain the plurality of image frames. Since devices of different models have different performance levels, better-performing devices can adopt higher-configuration decoding and/or caching policies, e.g., appropriately increasing decoder concurrency and caching speed, while poorer-performing devices can adopt lower-configuration policies, e.g., appropriately reducing decoder concurrency and caching speed. The performance level of each device model, and the correspondence between performance levels and frame extraction configuration policies, can be preset in lookup tables, so that the appropriate policy for the current device is obtained by table lookup. This improves the compatibility of the scheme across devices, allowing it to run normally on all of them. The number and division of device performance levels are not limited in this application; they may be divided, for example, into 3 levels (high, medium, low), 5 levels, and so on.
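A minimal sketch of the equal-interval frame extraction described above, assuming OpenCV as the decoder; the function name, the 100 ms interval, and the 0.5x downscale are illustrative choices, not details fixed by this application.

```python
import cv2  # OpenCV is an assumed choice; the decoder is not specified here

def extract_frames(video_path, interval_ms=100):
    """Decode the material video and keep one frame every interval_ms."""
    cap = cv2.VideoCapture(video_path)
    timestamps, frames = [], []
    next_ts = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ts = cap.get(cv2.CAP_PROP_POS_MSEC)      # timestamp of the decoded frame
        if ts >= next_ts:
            # Downscale before scoring (the "image size compression" above).
            frames.append(cv2.resize(frame, None, fx=0.5, fy=0.5))
            timestamps.append(ts / 1000.0)       # seconds
            next_ts += interval_ms
    cap.release()
    return timestamps, frames
```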
In step 420, a quality score for each image frame is determined.
Optionally, the image frame is scored in a plurality of different dimensions by a plurality of AI models to obtain a plurality of single scores, the single scores are weighted-summed to obtain an initial quality score of the image frame, and the initial quality score is then adjusted based on a score adjustment strategy to determine the quality score of the image frame. For example, the AI models may score the image frame in dimensions such as highlight, aesthetics, and classification, and the initial quality score is then the weighted sum of the single scores with the weights of the respective dimensions. In one example, the initial quality score is taken directly as the final quality score. In another example, the initial quality score is adjusted based on a score adjustment strategy to determine the quality score. Score adjustment applies special handling to certain special image frames so as to improve the video quality of the target video generated by the subsequent clipping.
For example, if a face region exists in the image frame, the initial quality score of the image frame is adjusted upward to determine its quality score. Image frames containing a face are often the more important frames in a video, so the quality score of such frames can be appropriately raised to increase their probability of being selected. Whether a face region exists in an image frame can be determined by a face recognition model obtained through machine learning.
For another example, if the image frame is a scene-cut image frame, its initial quality score is adjusted downward to determine its quality score. Scene-cut image frames are the frames at the tail of the previous scene and the head of the following scene when the video switches from one scene to another. For example, if frames 1 to 10 are an indoor scene and the video switches to an outdoor scene starting at frame 11, the scene-cut image frames are frames 10 and 11. Introducing too many scene-cut frames into the target video makes its scene switching too frequent, which reads as visual clutter; lowering the quality score of scene-cut frames therefore reduces their probability of being selected. The scene corresponding to an image frame can be determined by a scene recognition model obtained through machine learning.
For another example, if the image frame is a start-period or end-period image frame, its initial quality score is adjusted upward to determine its quality score. A start-period image frame is one within the start period of the material video, e.g., within the first 1 s; an end-period image frame is one within the end period, e.g., within the last 1 s. For frames in relatively sensitive periods such as the beginning and end of the material video, the quality score should be appropriately raised to increase their probability of being selected.
Note that the amounts by which the quality score is adjusted up or down in the above score adjustment strategy may be preset according to actual conditions, which is not limited in this application.
In addition, when the image frame is scored in multiple dimensions by multiple AI models, the AI models may run in parallel to improve scoring efficiency. The AI models can also share the picture data of the image frame and, during their respective processing, re-compress the picture according to their own requirements, balancing scoring efficiency and accuracy.
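The scoring pipeline above can be summarized in a short sketch. The dimension names, weights, and adjustment amounts below are hypothetical placeholders; this application only specifies that several models each produce a single score, the single scores are weighted-summed, and certain frames are adjusted up or down.

```python
# Hypothetical dimension weights; the patent does not fix their values.
DIM_WEIGHTS = {"highlight": 0.5, "aesthetics": 0.3, "classification": 0.2}

def quality_score(frame, t, video_len, scorers, has_face, is_scene_cut,
                  edge_window=1.0, face_bonus=5.0, cut_penalty=5.0, edge_bonus=3.0):
    # Initial quality score: weighted sum of the per-dimension single scores.
    score = sum(DIM_WEIGHTS[d] * scorers[d](frame) for d in DIM_WEIGHTS)
    if has_face(frame):                       # face frames: adjust upward
        score += face_bonus
    if is_scene_cut(frame):                   # scene-cut frames: adjust downward
        score -= cut_penalty
    if t < edge_window or t > video_len - edge_window:
        score += edge_bonus                   # start/end periods: adjust upward
    return score
```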
And 430, arranging the image frames in the time domain, and performing interpolation processing on the quality score to obtain a quality score curve of the material video.
Optionally, a two-dimensional rectangular coordinate system is constructed with time as the horizontal axis and quality score as the vertical axis; based on the timestamp and quality score of each image frame, the position point corresponding to each frame is added to the coordinate system; interpolation is then performed on the quality scores based on these position points to obtain an interpolated position point sequence; finally, the quality score curve of the material video is generated by fitting the interpolated position point sequence.
As shown in FIG. 5, assume that frames 1, 11, 21, ... are extracted from the material video. The black points in the figure are the position points marked in the coordinate system based on the timestamp and quality score of each extracted frame; the white points are interpolation points obtained by interpolating the quality scores. The position points and interpolation points form the interpolated position point sequence, from which the curve 50, i.e., the quality score curve of the material video, is finally fitted.
The embodiments of this application do not limit the choice of interpolation algorithm or curve fitting algorithm.
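A minimal sketch of this step, assuming linear interpolation onto a 0.1 s grid (the interpolation and fitting algorithms are left open by this application):

```python
import numpy as np

def quality_curve(timestamps, scores, step=0.1):
    """Interpolate discrete (timestamp, score) samples onto a uniform grid."""
    grid = np.arange(timestamps[0], timestamps[-1] + step, step)
    return grid, np.interp(grid, timestamps, scores)
```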
Step 440, determining video clip parameters according to the video clip policy.
Optionally, the video clip parameters include at least one of: the number of video clips captured from the material video, the duration of a single video clip, the total duration of all video clips, and the clip mode.
Optionally, the following video clip strategies are provided in the embodiments of this application: an intelligent preference strategy, a fixed duration strategy, a multi-segment fixed duration strategy, and a per-material fixed duration strategy.
The intelligent preference strategy automatically determines the total duration of the clipped target video based on the material video. Optionally, when the video clip strategy is the intelligent preference strategy, the total duration of all video clips captured from the material video is determined according to the duration of the material video, and the clip mode is determined to be the water level descent method. With the intelligent preference strategy, one or more higher-quality video clips can be selected from the material video, and the duration of each clip is not fixed. Optionally, the total duration of all video clips captured from the material video is positively correlated with the duration of the material video.
The fixed duration strategy means the total duration of the clipped target video is a fixed value, which may be user-defined or a system default. Optionally, when the video clip strategy is the fixed duration strategy, the total duration of all video clips captured from the material video is determined according to a preset single fixed duration; if the number of material videos is 1, the clip mode is determined to be the water level descent method; if the number of material videos is more than one, the clip mode is determined to be the sliding window method. For example, if the user sets the fixed duration to 10 s, either one 10 s video clip is captured from the material video as the target video, or several video clips with a total duration of 10 s are captured and then spliced to generate the target video.
The multi-segment fixed duration strategy captures several fixed-duration video clips from the material video and splices them into the target video. The fixed durations of the clips may be equal or different, and may be user-defined or system defaults. Optionally, when the video clip strategy is the multi-segment fixed duration strategy, the number of video clips to capture and the duration of each clip are determined according to several preset fixed durations, and the clip mode is determined to be the sliding window method. For example, given 3 preset fixed durations of 3 s, 2 s, and 3 s, 3 video clips of those durations are captured from the material video and then spliced to generate the target video.
The per-material fixed duration strategy applies when there are multiple material videos and a fixed-duration video clip is captured from each; the fixed durations may be user-defined or system defaults. Optionally, when the video clip strategy is the per-material fixed duration strategy, the duration of the video clip to capture from each material video is determined according to the fixed durations set for the respective material videos, and the clip mode is determined to be the sliding window method. For example, with 2 material videos, a fixed duration of 5 s set for the first and 7 s for the second, a 5 s clip is captured from the first material video, a 7 s clip from the second, and the 2 clips are spliced to obtain the target video.
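An illustrative sketch of how a selected strategy can map to clip parameters, as summarized in Table 1 below. The strategy names, field names, and the 0.5x duration rule are assumptions; this application only states that the intelligent preference total duration is positively correlated with the material duration.

```python
def clip_params(strategy, material_durations, fixed_total=None, fixed_list=None):
    if strategy == "intelligent_preference":
        total = 0.5 * sum(material_durations)   # assumed positive correlation
        return {"total_duration": total, "clip_mode": "water_level_descent"}
    if strategy == "fixed_duration":
        mode = ("water_level_descent" if len(material_durations) == 1
                else "sliding_window")
        return {"total_duration": fixed_total, "clip_mode": mode}
    if strategy == "multi_segment_fixed":        # several clips from one video
        return {"segment_durations": fixed_list, "clip_mode": "sliding_window"}
    if strategy == "per_material_fixed":         # one clip per material video
        return {"segment_durations": fixed_list, "clip_mode": "sliding_window"}
    raise ValueError(f"unknown strategy: {strategy}")
```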
And step 450, clipping and acquiring video clips meeting the video clipping parameter requirements from the material video based on the quality score curve.
For the clip modes used to capture video clips from the material video, this application provides a water level descent method and a sliding window method. The water level descent method captures one or more video clips in a single pass, with the total duration of the captured clips being a fixed value. The sliding window method captures one video clip at a time, with the duration of that single clip being a fixed value.
When a video clip is captured using the water level descent method, a water line is added to the two-dimensional rectangular coordinate system in which the quality score curve lies, where the horizontal axis of the coordinate system is time, the vertical axis is quality score, and the water line is parallel to the horizontal axis; the water line is moved along the vertical axis; when the water line reaches a first target position such that a video clip meeting the video clip parameter requirements exists in the first direction of the water line, that video clip is captured from the material video. Here the quality scores in the first direction of the water line are greater than those in the second direction of the water line.
For example, as shown in FIG. 6, with the X-axis (horizontal axis) as the time axis and the Y-axis (vertical axis) as the quality score axis, the quality score curve 61 may be expressed as y = f(x). Assume the total duration of the material video is T and the target video to be clipped has a fixed duration T0 (T0 < T). By moving the water line 62 from top to bottom (i.e., along the positive direction of the vertical axis in FIG. 6), the curve segments below the water line 62 are retained; let the total duration corresponding to the retained curve segments be t1. Since the quality score curve 61 is continuous, there is one and only one target position during the movement of the water line 62 such that the total duration t1 of the video segments below the water line 62 equals the duration T0 of the target video. When this condition is met, the video clips below the water line 62 are captured. For example, if the water line 62 satisfies t1 = T0 at the position shown in FIG. 6, 2 video clips are captured: one from T1 to T2 and another from T3 to T4.
Note that if the two-dimensional rectangular coordinate system in which the quality score curve is initially constructed is as shown in FIG. 5, with the positive vertical axis pointing up, there are two possible implementations of the water level descent method. One is to flip the coordinate system (including the quality score curve) vertically so that the positive vertical axis points down (as shown in FIG. 6), add a water line to the flipped coordinate system, and lower the water line by moving it from top to bottom (i.e., along the positive vertical axis), selecting the qualifying high-quality video clips below the water line for capture. The other is to leave the coordinate system (and curve) unflipped, with the positive vertical axis pointing up (as shown in FIG. 5), add a water line, and raise the water line by moving it from bottom to top (i.e., along the positive vertical axis), selecting the qualifying high-quality video clips above the water line for capture.
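In the unflipped convention of FIG. 5 (keep what lies above the line), finding the water line amounts to searching for a score threshold at which the retained duration equals T0. A minimal sketch, assuming the curve has been sampled onto a uniform grid as above; the binary search and the interval collection are illustrative, not a prescribed implementation.

```python
import numpy as np

def water_level_clip(grid, curve, t0, step=0.1, iters=40):
    lo, hi = float(curve.min()), float(curve.max())
    for _ in range(iters):                   # binary search on the water line
        mid = (lo + hi) / 2
        kept = (curve >= mid).sum() * step   # total duration above the line
        if kept > t0:
            lo = mid                         # too much kept: raise the line
        else:
            hi = mid                         # too little kept: lower the line
    level = (lo + hi) / 2
    # Collect contiguous [start, end] intervals whose scores exceed the line.
    segments, start = [], None
    for t, q in zip(grid, curve):
        if q >= level and start is None:
            start = t
        elif q < level and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, float(grid[-1])))
    return segments
```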
When a video clip is captured using the sliding window method, the number n of sliding windows to add and the duration of each window are determined according to the video clip parameters, n being a positive integer; the n sliding windows are added to the two-dimensional rectangular coordinate system in which the quality score curve lies, where the horizontal axis is time, the vertical axis is quality score, and the window boundaries of each sliding window are perpendicular to the horizontal axis; the sliding windows are moved along the horizontal axis and the integral of the curve segment within each window is obtained, where, when there are multiple sliding windows, no two windows overlap; when the n sliding windows are at a second target position such that the sum of the integrals takes its maximum value, the video clips within the n sliding windows are captured from the material video.
For example, as shown in FIG. 7, with the X-axis (horizontal axis) as the time axis and the Y-axis (vertical axis) as the quality score axis, the quality score curve 71 may be expressed as y = f(x). Assume the total duration of the material video is T and the target video to be clipped has a fixed duration T0 (T0 < T). The left and right window boundaries of the sliding window are indicated by dotted lines in the figure. By moving the sliding window, when the window boundaries are at positions t1 and t2 such that the integral ∫[t1, t2] f(x) dx of the curve segment within the window takes its maximum value, the video segment from t1 to t2 within the sliding window is captured.
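For a single window on the sampled curve, the integral can be maximized with a sliding sum; a minimal sketch under the same sampling assumptions as above:

```python
import numpy as np

def best_window(grid, curve, duration, step=0.1):
    w = int(round(duration / step))                      # window length in samples
    sums = np.convolve(curve, np.ones(w), mode="valid")  # all window integrals
    i = int(np.argmax(sums))                             # position maximizing it
    return grid[i], grid[i + w - 1]                      # (t1, t2) of best window
```

For n > 1 non-overlapping windows, the window positions would have to be chosen jointly (e.g., greedily or by dynamic programming) so that the sum of the window integrals is maximized; this application does not prescribe a particular search procedure.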
Step 460, obtaining the target video based on the video clip.
If the number of captured video clips is 1, that clip is used directly as the target video. If the number of captured video clips is greater than 1, the clips are spliced to obtain the target video.
The overall technical scheme of this application is summarized with reference to FIG. 8. After the material video is acquired, a frame extraction tool extracts a plurality of image frames from it; an AI scoring component then determines the quality score of each frame; interpolation is performed on the frame quality scores to obtain the quality score curve of the material video; a clipping algorithm then clips the target video from the material video based on the quality score curve and the video clip strategy, and the target video is returned as the result.
In addition, this application provides several video clip strategies and their corresponding clip modes, as shown in Table 1 below:
TABLE 1

| No. | Video clip strategy | Clip mode |
| --- | --- | --- |
| 1 | Intelligent preference strategy | Water level descent method |
| 2 | Fixed duration strategy | Water level descent method or sliding window method |
| 3 | Multi-segment fixed duration strategy | Sliding window method |
| 4 | Per-material fixed duration strategy | Sliding window method |
In summary, the technical scheme provided by the embodiments of this application offers several different video clip strategies, all of which clip the video based on the quality score curve of the material video. A single scoring pass can therefore be reused many times: when the user changes the video clip strategy, the material video does not need to be rescored, and the previously stored quality score curve can be reused. This separation of quality scoring from the clipping algorithm means that, even if the user changes the video clip strategy, target clips meeting the new strategy's requirements can be generated quickly.
In addition, in the video editing scheme provided by this application, the editing effect depends on the scoring capability of the AI models and on the clipping algorithm, so the effect is more stable and controllable than manual editing; better editing effects can be achieved simply by continually updating the AI models and the clipping algorithm during iterative optimization of the scheme.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 9, a block diagram of a video editing apparatus according to an embodiment of this application is shown. The apparatus has functions for implementing the above method examples; the functions may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be a computer device or may be provided in a computer device. The apparatus 900 may include: a frame extraction module 910, a quality scoring module 920, a curve construction module 930, and a video clip module 940.
The frame extraction module 910 is configured to extract a plurality of image frames from the material video.
A quality scoring module 920, configured to determine a quality score of each of the image frames.
And the curve construction module 930 is configured to arrange the image frames in a time domain, and interpolate the quality score to obtain a quality score curve of the material video.
And the video clipping module 940 is used for clipping the target video from the material video based on the quality score curve and the video clipping strategy.
In an exemplary embodiment, as shown in fig. 10, the video clip module 940 includes: a parameter determination unit 942, a fragment interception unit 944, and a result generation unit 946.
A parameter determining unit 942 configured to determine video clip parameters according to the video clip policy, where the video clip parameters include at least one of: the number of video clips obtained from the material video, the duration of a single video clip, the total duration of all video clips and the clip mode.
And the segment intercepting unit 944 is configured to clip and acquire a video segment that meets the video clip parameter requirement from the material video based on the quality score curve.
And a result generating unit 946, configured to obtain the target video based on the video segment.
Optionally, the segment intercepting unit 944 is configured to:
when the video clips are captured using the water level descent method, adding a water line in the two-dimensional rectangular coordinate system where the quality score curve is located; the horizontal axis of the two-dimensional rectangular coordinate system is time, the vertical axis is quality score, and the water line is parallel to the horizontal axis;
moving the water line along the longitudinal axis;
when the water line moves to a first target position, and a video clip meeting the video clip parameter requirements exists in the first direction of the water line, capturing the video clip from the material video;
wherein the quality scores in the first direction of the water line are greater than the quality scores in the second direction of the water line.
Optionally, the segment intercepting unit 944 is configured to:
under the condition that the video clips are intercepted by adopting a sliding window method, determining the number n of sliding windows to be added and the duration of each sliding window according to the video clip parameters, wherein n is a positive integer;
adding the n sliding windows in the two-dimensional rectangular coordinate system where the quality score curve is located; the horizontal axis of the two-dimensional rectangular coordinate system is time, the vertical axis is quality score, and the window boundaries of the sliding windows are perpendicular to the horizontal axis;
moving the sliding window along the transverse axis, and acquiring the integral of a curve segment in the sliding window; wherein, under the condition that the number of the sliding windows is a plurality of, any two sliding windows do not have an overlapping area;
and when the n sliding windows are positioned at the second target position, and the integral sum takes the maximum value, video fragments in the n sliding windows are intercepted from the material video.
Optionally, the parameter determining unit 942 is configured to:
when the video clip strategy is the intelligent preference strategy, determining, according to the duration of the material video, the total duration of all video clips obtained by clipping from the material video;
and determining the clip mode to be the water level descent method.
Optionally, the parameter determining unit 942 is configured to:
under the condition that the video editing strategy is a fixed duration strategy, determining the total duration of all video clips obtained from the material video according to a preset single fixed duration;
if the number of the material videos is 1, determining that the clip mode is the water level descent method; or if the number of the material videos is more than one, determining that the clip mode is the sliding window method.
Optionally, the parameter determining unit 942 is configured to:
under the condition that the video editing strategy is a multi-segment fixed duration strategy, determining the number of video segments obtained by editing from the material video and the duration of a single video segment according to a plurality of preset fixed durations;
and determining the clip mode to be the sliding window method.
Optionally, the parameter determining unit 942 is configured to:
when the video clip strategy is the per-material fixed duration strategy, determining the duration of the video clip to be obtained from each material video according to the fixed durations respectively set for the multiple material videos;
and determining the clip mode to be the sliding window method.
In an exemplary embodiment, as shown in fig. 10, the quality scoring module 920 includes: a single scoring unit 922, a scoring summing unit 924, and a scoring adjustment unit 926.
A single scoring unit 922 configured to score the image frame from a plurality of different dimensions by using a plurality of artificial intelligence AI models, to obtain a plurality of single scores of the image frame.
And the scoring and summing unit 924 is configured to perform weighted summation processing on the multiple single scores of the image frame, so as to obtain an initial quality score of the image frame.
The scoring adjustment unit 926 is configured to adjust an initial quality score of the image frame based on a score adjustment policy, and determine a quality score of the image frame.
Optionally, the scoring adjustment unit 926 is configured to:
if a face region exists in the image frame, adjusting the initial quality score of the image frame upward and determining the quality score of the image frame;
or, if the image frame is a scene-cut image frame, adjusting the initial quality score of the image frame downward and determining the quality score of the image frame;
or, if the image frame is a start-period image frame or an end-period image frame, adjusting the initial quality score of the image frame upward and determining the quality score of the image frame.
In an exemplary embodiment, the curve construction module 930 is configured to:
constructing a two-dimensional rectangular coordinate system with time as the horizontal axis and quality score as the vertical axis;
adding position points corresponding to the image frames in the two-dimensional rectangular coordinate system based on the timestamps and quality scores of the image frames;
performing interpolation processing on the quality scores based on each position point to obtain an interpolated position point sequence;
and fitting and generating a quality score curve of the material video based on the interpolated position point sequence.
In an exemplary embodiment, the frame extraction module 910 is configured to:
determining a device performance level based on the device model;
determining a frame extraction configuration strategy according to the equipment performance level, wherein the frame extraction configuration strategy is used for indicating a decoding strategy and/or a cache strategy in a frame extraction process;
and carrying out frame extraction processing on the material video according to the frame extraction configuration strategy to obtain the plurality of image frames.
In summary, according to the technical scheme provided by the embodiment of the application, the quality score curve of the material video is generated by performing quality scoring on the image frames in the material video, and then the target video is clipped from the material video based on the quality score curve and the video clipping strategy, so that high-quality video clips are automatically intercepted from the material video, and the efficiency and quality effect of video clipping are improved.
In addition, when generating the quality score curve of the material video, on one hand, only some of the image frames in the material video need to be quality-scored thanks to frame extraction, rather than every frame, which reduces the amount of computation and thus the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and their quality scores are interpolated to obtain the quality score curve of the material video, which accurately depicts how quality varies across all image frames of the material video, improving the quality of the clipped target video.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, only the division of the above functional modules is used as an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; the specific implementation of the apparatus is detailed in the method embodiments and is not repeated here.
Referring to FIG. 11, a block diagram of a computer device 1100 according to one embodiment of the present application is shown. The computer device 1100 may be a mobile phone, a tablet computer, a smart tv, a multimedia playing device, a PC, or a server. The computer device 1100 may be used to implement the video clip method described above.
In general, the computer device 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1101 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1101 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store a computer program configured to be executed by one or more processors to implement the video clip method described above.
In some embodiments, the computer device 1100 may further optionally include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102, and peripheral interface 1103 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1103 by buses, signal lines or circuit boards. Specifically, the peripheral device may include: at least one of a display 1104, audio circuitry 1105, a communication interface 1106, and a power supply 1107.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is not limiting as to the computer device 1100, and may include more or fewer components than shown, or may combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium is also provided, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which, when executed by a processor of a computer device, implements the video clipping method described above.
Alternatively, the above-mentioned computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the video clip method described above.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, "A and/or B" can mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. In addition, the step numbers described herein merely illustrate one possible execution order of the steps; in some other embodiments, the steps may be executed out of the numbered order, e.g., two differently numbered steps may be executed simultaneously or in an order opposite to that shown, which is not limited by the embodiments of this application.
The foregoing description of the exemplary embodiments of the present application is not intended to limit the application to the particular embodiments disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (12)

1. A method of video editing, the method comprising:
extracting a plurality of image frames from the material video;
determining a quality score for each of the image frames;
adding position points corresponding to the image frames in a two-dimensional rectangular coordinate system based on the time stamps and the quality scores of the image frames; wherein the horizontal axis of the two-dimensional rectangular coordinate system is time and the vertical axis is the quality score;
performing interpolation processing on the quality scores based on the position points to obtain an interpolated position point sequence;
fitting and generating a quality score curve of the material video based on the interpolated position point sequence;
under the condition that the video clipping strategy is a fixed duration strategy, determining the total duration of all video clips obtained from the material video according to a preset single fixed duration; if the number of the material videos is 1, determining that the segment clipping mode is a water level lowering method; or, if there are a plurality of material videos, determining that the segment clipping mode is a sliding window method;
clipping, from the material video based on the quality score curve, video clips meeting the requirements of video clipping parameters, wherein the video clipping parameters comprise at least one of the following: the number of video clips obtained from the material video, the duration of a single video clip, the total duration of all video clips, and the segment clipping mode;
and obtaining a target video based on the video clips;
wherein the water level lowering method is to move, along the vertical axis, a water level line which is added in the two-dimensional rectangular coordinate system and is parallel to the horizontal axis, and to intercept the video clips from the material video when the water level line moves to a first target position at which video clips meeting the video clipping parameters exist in a first direction of the water level line; the sliding window method is to determine the number and duration of sliding windows in the two-dimensional rectangular coordinate system according to the video clipping parameters, slide the sliding windows along the horizontal axis, and intercept the video clips within the sliding windows when the integral sum of the curve segments within the sliding windows is maximum, wherein the window boundaries of the sliding windows are perpendicular to the horizontal axis, and, in the case that there are a plurality of sliding windows, no two sliding windows overlap.
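For illustration (not part of the claims): a minimal Python sketch of how a quality score curve might be built from sampled frames. It assumes per-frame timestamps and quality scores are already available and uses piecewise-linear interpolation; the function name, the sampling step, and the choice of linear interpolation are assumptions, since the claim does not fix a particular interpolation or fitting method.

```python
import numpy as np

def build_quality_curve(timestamps, scores, step=0.1):
    """Interpolate sparse (timestamp, quality score) points into a dense curve.

    timestamps: sorted frame timestamps in seconds (horizontal axis).
    scores: quality score of each extracted frame (vertical axis).
    Returns (t, q): a dense time grid and the interpolated quality values.
    """
    t = np.arange(timestamps[0], timestamps[-1] + step, step)
    q = np.interp(t, timestamps, scores)  # piecewise-linear interpolation
    return t, q

# e.g. five frames sampled from an 8-second material video
t, q = build_quality_curve([0.0, 2.0, 4.0, 6.0, 8.0], [0.3, 0.8, 0.6, 0.9, 0.4])
```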
2. The method according to claim 1, wherein the clipping, from the material video based on the quality score curve, of video clips meeting the requirements of the video clipping parameters comprises:
under the condition that the video clips are intercepted by the water level lowering method, adding the water level line into the two-dimensional rectangular coordinate system where the quality score curve is located; wherein the water level line is parallel to the horizontal axis;
moving the water level line along the vertical axis;
and when the water level line moves to the first target position such that a video clip meeting the video clipping parameter requirements exists in the first direction of the water level line, intercepting the video clip from the material video;
wherein the quality scores in the first direction of the water level line are greater than the quality scores in the second direction of the water level line.
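For illustration of claim 2's water level lowering method, a hedged Python sketch: lower a horizontal threshold (the water level line) from the curve maximum until some contiguous run of the curve above the line is at least the requested clip duration. The lowering step and the return convention are assumptions.

```python
import numpy as np

def water_level_clip(t, q, min_duration, step=0.01):
    """Lower a horizontal water level line until a long-enough clip lies above it."""
    for level in np.arange(q.max(), q.min() - step, -step):
        above = q >= level                       # the "first direction" of the line
        edges = np.diff(above.astype(int))       # +1 where a run starts, -1 where it ends
        starts = np.flatnonzero(edges == 1) + 1
        ends = np.flatnonzero(edges == -1) + 1
        if above[0]:
            starts = np.r_[0, starts]
        if above[-1]:
            ends = np.r_[ends, len(above)]
        for s, e in zip(starts, ends):           # contiguous runs above the line
            if t[e - 1] - t[s] >= min_duration:  # first target position reached
                return t[s], t[e - 1]
    return None
```

Because the line descends from the top, the first qualifying run is by construction the highest-scoring region of the curve that satisfies the duration requirement.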
3. The method according to claim 1, wherein the clipping, from the material video based on the quality score curve, of video clips meeting the requirements of the video clipping parameters comprises:
under the condition that the video clips are intercepted by the sliding window method, determining the number n of sliding windows to be added and the duration of each sliding window according to the video clipping parameters, wherein n is a positive integer;
adding the n sliding windows in the two-dimensional rectangular coordinate system where the quality score curve is located;
moving the sliding windows along the horizontal axis, and acquiring the integral of the curve segment within each sliding window;
and when the n sliding windows are located at a second target position at which the integral sum takes its maximum value, intercepting the video clips within the n sliding windows from the material video.
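The single-window case of claim 3 can be illustrated with prefix sums: slide a fixed-duration window along the time axis and keep the position where the integral of the curve inside the window is largest. The rectangle-rule integral and the uniform time grid are simplifying assumptions.

```python
import numpy as np

def best_window(t, q, duration):
    """Return the start/end times of the window with the largest curve integral."""
    step = t[1] - t[0]                    # assumes a uniform time grid
    w = int(round(duration / step))       # window length in samples (<= len(q) assumed)
    prefix = np.concatenate(([0.0], np.cumsum(q) * step))
    integrals = prefix[w:] - prefix[:-w]  # rectangle-rule integral of every window
    i = int(np.argmax(integrals))         # the "second target position"
    return t[i], t[i + w - 1]
```

For n > 1 non-overlapping windows, the same prefix sums can feed a simple dynamic program over window start positions, so that the integral sum across all n windows is maximized with no two windows overlapping.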
4. The method according to claim 1, wherein the method further comprises:
under the condition that the video clipping strategy is an intelligent optimization strategy, determining the total duration of all video clips clipped from the material video according to the duration of the material video;
and determining that the segment clipping mode is the water level lowering method.
5. The method according to claim 1, wherein the method further comprises:
under the condition that the video clipping strategy is a multi-segment fixed duration strategy, determining the number of video clips clipped from the material video and the duration of a single video clip according to a plurality of preset fixed durations;
and determining that the segment clipping mode is the sliding window method.
6. The method according to claim 1, wherein the method further comprises:
under the condition that the video clipping strategy is a fixed duration strategy corresponding to a plurality of material videos, determining the duration of the video clip clipped from each material video according to the fixed durations respectively set for the plurality of material videos;
and determining that the segment clipping mode is the sliding window method.
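Taken together, claim 1 and claims 4-6 amount to a small dispatch from the video clipping strategy to a segment clipping mode. A schematic Python sketch follows; the strategy identifiers are paraphrases of the claims, not names used by the patent.

```python
def choose_clip_mode(strategy, num_material_videos):
    """Map a video clipping strategy to a segment clipping mode (claims 1, 4-6)."""
    if strategy == "fixed_duration":                 # claim 1
        return "water_level" if num_material_videos == 1 else "sliding_window"
    if strategy == "smart_optimization":             # claim 4
        return "water_level"
    if strategy == "multi_segment_fixed_duration":   # claim 5
        return "sliding_window"
    if strategy == "per_material_fixed_duration":    # claim 6
        return "sliding_window"
    raise ValueError(f"unknown video clipping strategy: {strategy}")
```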
7. The method of claim 1, wherein said determining a quality score for each of said image frames comprises:
scoring the image frame from a plurality of different dimensions through a plurality of artificial intelligence (AI) models to obtain a plurality of individual scores of the image frame;
weighting and summing the plurality of individual scores of the image frame to obtain an initial quality score of the image frame;
and adjusting the initial quality score of the image frame based on a score adjustment strategy to determine the quality score of the image frame.
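A minimal sketch of claim 7's scoring step, assuming each AI model has already produced a per-dimension score; the dimension names and weights below are illustrative, not taken from the patent.

```python
def initial_quality_score(dimension_scores, weights):
    """Weighted sum of per-dimension model scores for one image frame."""
    return sum(dimension_scores[d] * weights[d] for d in weights)

# e.g. three hypothetical scoring dimensions
score = initial_quality_score(
    {"sharpness": 0.9, "aesthetics": 0.7, "exposure": 0.8},
    {"sharpness": 0.5, "aesthetics": 0.3, "exposure": 0.2},
)
```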
8. The method of claim 7, wherein the adjusting the initial quality score of the image frame based on the score adjustment strategy to determine the quality score of the image frame comprises:
if a face region exists in the image frame, adjusting the initial quality score of the image frame upwards to determine the quality score of the image frame;
or,
if the image frame is a scene-cut image frame, adjusting the initial quality score of the image frame downwards to determine the quality score of the image frame;
or,
if the image frame belongs to an initial period or an ending period of the material video, adjusting the initial quality score of the image frame upwards to determine the quality score of the image frame.
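The score adjustment strategy of claim 8 might look like the following sketch. The claim only fixes the directions of adjustment (face regions up, scene-cut frames down, opening/ending frames up) and presents the branches as alternatives; the flags, the shared delta, and applying every condition that holds are assumptions.

```python
def adjust_score(initial, has_face=False, is_scene_cut=False, is_boundary=False, delta=0.1):
    """Adjust an initial quality score per the directions fixed in claim 8."""
    score = initial
    if has_face:
        score += delta   # face region present: adjust upwards
    if is_scene_cut:
        score -= delta   # scene-cut frame: adjust downwards
    if is_boundary:
        score += delta   # opening- or ending-period frame: adjust upwards
    return score
```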
9. The method of claim 1, wherein extracting a plurality of image frames from the material video comprises:
determining a device performance level based on the device model;
determining a frame extraction configuration strategy according to the device performance level, wherein the frame extraction configuration strategy is used for indicating a decoding strategy and/or a caching strategy in the frame extraction process;
and carrying out frame extraction processing on the material video according to the frame extraction configuration strategy to obtain the plurality of image frames.
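Claim 9's device-aware frame extraction could be configured along these lines; the performance tiers and the concrete decoding/caching choices are invented for illustration.

```python
def frame_extraction_config(device_level):
    """Map a device performance level to decoding and caching strategies."""
    if device_level == "high":
        return {"decode": "hardware", "cache_frames": 64}
    if device_level == "mid":
        return {"decode": "hardware", "cache_frames": 16}
    return {"decode": "software", "cache_frames": 4}  # conservative low-end default
```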
10. A video editing apparatus, the apparatus comprising:
the video frame extracting module is used for extracting a plurality of image frames from the material video;
a quality scoring module for determining a quality score for each of the image frames;
the curve construction module is used for adding position points corresponding to the image frames in a two-dimensional rectangular coordinate system based on the time stamps and the quality scores of the image frames, wherein the horizontal axis of the two-dimensional rectangular coordinate system is time and the vertical axis is the quality score; performing interpolation processing on the quality scores based on the position points to obtain an interpolated position point sequence; and fitting and generating a quality score curve of the material video based on the interpolated position point sequence;
the video editing module is used for determining, under the condition that the video clipping strategy is a fixed duration strategy, the total duration of all video clips obtained from the material video according to a preset single fixed duration; if the number of the material videos is 1, determining that the segment clipping mode is a water level lowering method; or, if there are a plurality of material videos, determining that the segment clipping mode is a sliding window method; clipping, from the material video based on the quality score curve, video clips meeting the requirements of video clipping parameters, wherein the video clipping parameters comprise at least one of the following: the number of video clips obtained from the material video, the duration of a single video clip, the total duration of all video clips, and the segment clipping mode; and obtaining a target video based on the video clips;
wherein the water level lowering method is to move, along the vertical axis, a water level line which is added in the two-dimensional rectangular coordinate system and is parallel to the horizontal axis, and to intercept the video clips from the material video when the water level line moves to a first target position at which video clips meeting the video clipping parameters exist in a first direction of the water level line; the sliding window method is to determine the number and duration of sliding windows in the two-dimensional rectangular coordinate system according to the video clipping parameters, slide the sliding windows along the horizontal axis, and intercept the video clips within the sliding windows when the integral sum of the curve segments within the sliding windows is maximum, wherein the window boundaries of the sliding windows are perpendicular to the horizontal axis, and, in the case that there are a plurality of sliding windows, no two sliding windows overlap.
11. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the video editing method of any of claims 1 to 9.
12. A computer readable storage medium having stored therein at least one program loaded and executed by a processor to implement the video editing method of any of claims 1 to 9.
CN202110345929.8A 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium Active CN113709560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345929.8A CN113709560B (en) 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113709560A (en) 2021-11-26
CN113709560B (en) 2024-01-02

Family

ID=78647896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345929.8A Active CN113709560B (en) 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113709560B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770061A (en) * 2020-12-16 2021-05-07 影石创新科技股份有限公司 Video editing method, system, electronic device and storage medium
CN115379290A (en) * 2022-08-22 2022-11-22 上海商汤智能科技有限公司 Video processing method, device, equipment and storage medium

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE708759A (en) * 1966-12-30 1968-07-01
WO2003001788A2 (en) * 2001-06-25 2003-01-03 Redhawk Vision Inc. Video event capture, storage and processing method and apparatus
CA2438479A1 (en) * 2002-09-13 2004-03-13 Ge Medical Systems Global Technology Company, Llc Computer assisted analysis of tomographic mammography data
CN101685532A (en) * 2008-09-24 2010-03-31 中国科学院自动化研究所 Method for correcting simple linear wide-angle lens
CA2795179A1 (en) * 2010-04-08 2011-10-13 General Electric Company Image quality assessment including comparison of overlapped margins
CN102663387A (en) * 2012-04-16 2012-09-12 南京大学 Cortical bone width automatic calculating method on basis of dental panorama
CN102999901A (en) * 2012-10-17 2013-03-27 中国科学院计算技术研究所 Method and system for processing split online video on the basis of depth sensor
CN103620639A (en) * 2011-04-29 2014-03-05 频率Ip股份有限责任公司 Multiple-carousel selective digital service feeds
CN108259893A (en) * 2018-03-22 2018-07-06 天津大学 Virtual reality method for evaluating video quality based on double-current convolutional neural networks
CN108476289A (en) * 2017-07-31 2018-08-31 深圳市大疆创新科技有限公司 A kind of method for processing video frequency, equipment, aircraft and system
WO2019042341A1 (en) * 2017-09-04 2019-03-07 优酷网络技术(北京)有限公司 Video editing method and device
WO2019057198A1 (en) * 2017-09-25 2019-03-28 北京达佳互联信息技术有限公司 Video recording method and device
CN110087123A (en) * 2019-05-15 2019-08-02 腾讯科技(深圳)有限公司 Video file production method, device, equipment and readable storage medium storing program for executing
CN110087143A (en) * 2019-04-26 2019-08-02 北京谦仁科技有限公司 Method for processing video frequency and device, electronic equipment and computer readable storage medium
CN110572722A (en) * 2019-09-26 2019-12-13 腾讯科技(深圳)有限公司 Video clipping method, device, equipment and readable storage medium
CN110868631A (en) * 2018-08-28 2020-03-06 腾讯科技(深圳)有限公司 Video editing method, device, terminal and storage medium
CN110996169A (en) * 2019-07-12 2020-04-10 北京达佳互联信息技术有限公司 Method, device, electronic equipment and computer-readable storage medium for clipping video
CN111127341A (en) * 2019-12-05 2020-05-08 Oppo广东移动通信有限公司 Image processing method and apparatus, and storage medium
CN111754493A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Method and device for evaluating image noise intensity, electronic equipment and storage medium
CN111866585A (en) * 2020-06-22 2020-10-30 北京美摄网络科技有限公司 Video processing method and device
CN112308786A (en) * 2019-08-01 2021-02-02 司法鉴定科学研究院 Method for resolving target vehicle motion in vehicle-mounted video based on photogrammetry
CN112532897A (en) * 2020-11-25 2021-03-19 腾讯科技(深圳)有限公司 Video clipping method, device, equipment and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050219362A1 (en) * 2004-03-30 2005-10-06 Cernium, Inc. Quality analysis in imaging
US10452713B2 (en) * 2014-09-30 2019-10-22 Apple Inc. Video analysis techniques for improved editing, navigation, and summarization
US20170180746A1 (en) * 2015-12-22 2017-06-22 Le Holdings (Beijing) Co., Ltd. Video transcoding method and electronic apparatus
US10089534B2 (en) * 2016-12-16 2018-10-02 Adobe Systems Incorporated Extracting high quality images from a video


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant