CN113709560A - Video editing method, device, equipment and storage medium - Google Patents

Video editing method, device, equipment and storage medium

Info

Publication number
CN113709560A
Authority
CN
China
Prior art keywords
video
quality
determining
clipping
strategy
Prior art date
Legal status
Granted
Application number
CN202110345929.8A
Other languages
Chinese (zh)
Other versions
CN113709560B (en)
Inventor
Zhu Chenchen (祝晨晨)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority: CN202110345929.8A
Publication of CN113709560A
Application granted
Publication of CN113709560B
Legal status: Active (granted)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The embodiment of the application discloses a video editing method, apparatus, device and storage medium, belonging to the technical field of video processing. The method comprises the following steps: extracting a plurality of image frames from a material video; determining the quality score of each image frame; arranging the image frames in the time domain and performing interpolation processing on the quality scores to obtain a quality score curve of the material video; and clipping a target video from the material video based on the quality score curve and a video clipping strategy. By automatically capturing high-quality video segments from the material video, the method improves both the efficiency and the quality of video editing.

Description

Video editing method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of video processing, in particular to a video clipping method, a video clipping device, video clipping equipment and a storage medium.
Background
Currently, some video processing tools provide the functionality of video clipping, which refers to the cutting of video segments from material video.
In the related art, if a video segment of better quality needs to be captured from a material video, the user must first play and view the material video, manually select the start and end timestamps of the better-quality segment based on a subjective quality evaluation of the video content, and then cut out the video segment between the two timestamps.
Because this editing mode depends entirely on the user's subjective quality evaluation and manual operation, the quality of the captured video segments is uncontrollable and the process is inefficient.
Disclosure of Invention
The embodiment of the application provides a video clipping method, apparatus, device and storage medium, which can efficiently capture high-quality video segments, thereby improving the efficiency and quality of video clipping. The technical scheme is as follows:
according to an aspect of an embodiment of the present application, there is provided a video clipping method, the method including:
extracting a plurality of image frames from a material video;
determining a quality score for each of the image frames;
arranging the image frames in the time domain, and performing interpolation processing on the quality scores to obtain a quality score curve of the material video;
and clipping a target video from the material video based on the quality score curve and a video clipping strategy.
According to an aspect of an embodiment of the present application, there is provided a video clip apparatus including:
the video frame extracting module is used for extracting a plurality of image frames from the material video;
a quality scoring module for determining a quality score for each of the image frames;
the curve construction module is used for arranging the image frames in the time domain and performing interpolation processing on the quality scores to obtain a quality score curve of the material video;
and the video clipping module is used for clipping a target video from the material video based on the quality score curve and a video clipping strategy.
According to an aspect of embodiments of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned video clipping method.
According to an aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the above-mentioned video clipping method.
According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the video clipping method described above.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the quality of the image frames in the material video is scored to generate the quality score curve of the material video, and then the target video is obtained by clipping from the material video based on the quality score curve and the video clipping strategy, so that the high-quality video segments are automatically captured from the material video, and the efficiency and quality effect of video clipping are improved.
In addition, when the quality score curve of the material video is generated, on one hand, only some of the image frames in the material video need to be quality-scored, selected by frame extraction, rather than every image frame, which reduces the amount of calculation and hence the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and the quality scores are interpolated to obtain the quality score curve of the material video, so that the quality variation across all image frames of the material video can be accurately depicted, improving the quality of the clipped target video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of an environment for implementing an embodiment provided by an embodiment of the present application;
FIG. 2 is a product interface diagram of the video clip functionality provided by one embodiment of the present application;
FIG. 3 is a flow diagram of a method of video clipping provided by one embodiment of the present application;
FIG. 4 is a flow diagram of a video clipping method provided by another embodiment of the present application;
FIG. 5 is a schematic diagram illustrating the construction process of a quality score curve;
FIG. 6 is a schematic view exemplarily showing the water level descent method;
FIG. 7 is a schematic diagram exemplarily showing the sliding window method;
FIG. 8 is a schematic diagram of the overall flow of the solution provided by an embodiment of the present application;
FIG. 9 is a block diagram of a video clipping device provided by one embodiment of the present application;
FIG. 10 is a block diagram of a video clipping device according to another embodiment of the present application;
fig. 11 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Refer to fig. 1, which illustrates a schematic diagram of an environment for implementing an embodiment of the present application. The embodiment implementation environment may be implemented as a video processing system, and may include: a terminal 10 and a server 20.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a wearable device, or a PC (Personal Computer). The terminal 10 has installed therein a client running a target application, which may be any application with a video processing function (including a video clip function). For example, the target application may be a short video application, a social application, or an application dedicated to video processing, which is not limited in this application.
The server 20 may be a background server of the target application program, and is used for providing a background service for the client of the target application program. The server 20 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center. The server 20 may communicate with the terminal 10 through a network.
The video clipping method provided by the embodiments of the present application may be executed by the terminal 10 (for example, each step may be executed by the client of the target application installed and running in the terminal 10), by the server 20, or by the terminal 10 and the server 20 in interactive cooperation. In the following embodiments, for convenience of description and unless otherwise specified, the execution body of each step is simply described as a computer device, which may be any electronic device with data processing and storage capabilities, including but not limited to the terminal 10 and the server 20 described above.
Illustratively, FIG. 2 shows a product interface diagram of the video clip functionality provided by the present application. The user opens the video clip interface 21 of the target application; this video clip interface 21 provides a variety of clip-related options, such as a cut option 22, a split option, a transition option, a shift option, a shuffle option, a delete option, etc. The cut option 22 corresponds to the AI-capability-based automatic cutting function provided by the present application, such as automatically cutting out highlight segments from material video. After the user clicks the cut option 22, the client may display two options: a fixed duration of 30 seconds and a recommended duration. The fixed duration of 30 seconds means that the duration of the target video generated by cutting is fixed at 30 seconds; the recommended duration means that the duration of the target video generated by cutting is a recommended duration, which may be determined by the client based on factors such as the duration and quality of the material video. When the user selects the recommended duration, the client is triggered to clip the video automatically. As can be seen from a comparison of the left and right diagrams in FIG. 2, the left diagram shows a material video 23 to be edited, which is 21 seconds long, and the right diagram shows the target video 24 obtained by editing, which is 10 seconds long. The target video is composed of 3 video segments; for example, the 3 video segments may be the 3 highest-quality segments in the material video.
Referring to fig. 3, a flow chart of a video clipping method according to an embodiment of the present application is shown. The method comprises the following steps (310-340):
In step 310, a plurality of image frames are extracted from the material video.
The material video refers to the video to be edited. In a single editing process (i.e., one process of generating a target video by editing material video), the number of material videos used may be 1 or multiple (i.e., 2 or more), which is not limited in this application.
In the embodiment of the present application, in order to reduce the amount of calculation, quality scoring is not performed on every image frame in the material video; instead, a plurality of image frames are extracted from the material video by frame extraction, and only the extracted image frames are quality-scored, while the image frames that are not extracted need no quality scoring.
Optionally, the plurality of image frames are extracted from the material video by sampling at equal time intervals; for example, one image frame is extracted every 125 ms.
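As a non-limiting illustration (the following sketch is editorial and not part of the original disclosure), equal-interval sampling can be implemented with a seekable decoder; OpenCV is assumed here, and the 125 ms interval follows the example above:

```python
import cv2

def sample_frames(video_path, interval_ms=125):
    """Extract one image frame every interval_ms milliseconds of the material video."""
    cap = cv2.VideoCapture(video_path)
    samples, t_ms = [], 0.0
    while True:
        cap.set(cv2.CAP_PROP_POS_MSEC, t_ms)  # seek to the next sampling instant
        ok, frame = cap.read()
        if not ok:  # past the end of the material video
            break
        samples.append((t_ms / 1000.0, frame))  # (timestamp in seconds, image frame)
        t_ms += interval_ms
    cap.release()
    return samples
```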
At step 320, a quality score for each image frame is determined.
For each extracted image frame, quality scoring is performed to obtain its quality score. The quality score is used to characterize the quality of the image frame. It should be noted that the influencing factors of the quality score can be preset in combination with actual requirements, such as, but not limited to, exposure, definition, color, texture, noise, focus, and artifacts.
Optionally, the image frames are quality-scored by a pre-trained AI (Artificial Intelligence) model. In one example, the quality score of the image frame is directly output by an AI model, which may extract image features of the image frame from one or more scoring dimensions and determine the quality score based on those features. In another example, single scores of the image frame are output from a plurality of scoring dimensions by a plurality of AI models respectively, and the single scores are then directly summed or weighted and summed to obtain the quality score of the image frame (see the sketch below). Optionally, each scoring dimension may correspond to one influencing factor.
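A minimal sketch of the second example (editorial; the model and weight names are assumptions, as the application does not prescribe specific dimensions or weights):

```python
def initial_quality_score(frame, models, weights):
    """Weighted sum of per-dimension single scores for one image frame.

    models:  dict mapping a scoring dimension to a callable AI model
    weights: dict mapping the same dimensions to their weights
    """
    return sum(weights[dim] * model(frame) for dim, model in models.items())

# Illustrative usage (all names assumed):
# models  = {"highlight": highlight_model, "aesthetics": aesthetics_model}
# weights = {"highlight": 0.6, "aesthetics": 0.4}
# score = initial_quality_score(frame, models, weights)
```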
In step 330, the image frames are arranged in the time domain and interpolation processing is performed on the quality scores to obtain a quality score curve of the material video.
After the quality scores of the image frames are obtained, a two-dimensional rectangular coordinate system is constructed with time as the horizontal axis and quality score as the vertical axis, and the position point corresponding to each image frame is marked in this coordinate system. Since the quality scores of the image frames are discrete in time, in order to generate the quality score curve of the material video, the quality scores may be subjected to interpolation processing, and the quality score curve of the material video may then be generated by fitting based on the interpolated quality scores. For example, by interpolating in units of 0.1 s, the quality score curve can achieve a time accuracy of 0.1 s.
In the embodiment of the present application, the quality score curve refers to a function curve describing the quality of the whole material video, obtained by further processing after the quality scores at a plurality of discrete time points in the material video have been determined.
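A minimal sketch of this construction (editorial): the sparse (timestamp, quality score) samples are interpolated onto a 0.1 s grid. Piecewise-linear interpolation with NumPy is assumed here; as noted later, the interpolation and fitting algorithms are not limited by this application:

```python
import numpy as np

def quality_score_curve(timestamps_s, scores, step_s=0.1):
    """Interpolate sparse (timestamp, quality score) samples onto a dense time grid."""
    ts = np.asarray(timestamps_s, dtype=float)
    qs = np.asarray(scores, dtype=float)
    grid = np.arange(ts[0], ts[-1] + step_s / 2, step_s)  # 0.1 s time accuracy
    return grid, np.interp(grid, ts, qs)  # dense samples of the quality score curve
```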
In step 340, a target video is clipped from the material video based on the quality score curve and the video clipping strategy.
The video clipping strategy may be custom-set by the user or set by default, which is not limited in this application. The video clipping strategy is used to specify how the material video is clipped, through video clip parameters such as the number of video segments to be clipped from the material video, the duration of a single video segment, the total duration of all video segments, and the segment clipping mode.
After the quality score curve of the material video is obtained, the material video can be clipped according to the set video clipping strategy to finally obtain the target video.
In summary, according to the technical scheme provided by the embodiment of the application, the image frames in the material video are quality-scored to generate a quality score curve of the material video, and a target video is then clipped from the material video based on the quality score curve and a video clipping strategy, so that high-quality video segments are automatically captured from the material video, improving both the efficiency and the quality of video clipping.
In addition, when the quality score curve of the material video is generated, on one hand, only some of the image frames in the material video need to be quality-scored, selected by frame extraction, rather than every image frame, which reduces the amount of calculation and hence the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and the quality scores are interpolated to obtain the quality score curve of the material video, so that the quality variation across all image frames of the material video can be accurately depicted, improving the quality of the clipped target video.
Referring to fig. 4, a flow chart of a video clipping method according to another embodiment of the present application is shown. The method comprises the following steps (410-460):
In step 410, a plurality of image frames are extracted from the material video.
Optionally, for the material video selected by the user, the image frames contained in the material video are arranged in time-axis order and one image frame is extracted at equal time intervals, for example one frame every 100 ms; each extracted image frame is passed to step 420 for quality scoring.
Optionally, to ensure frame extraction efficiency, techniques such as decoder concurrency, image size compression, and image caching may be used. Decoder concurrency means that a plurality of decoders decode the material video in parallel to obtain image frames; image size compression means that the size of the image frames is compressed to increase decoding speed; and image caching means that a multi-level cache is used to store decoded image frames. Because video decoding and image frame caching are prerequisite steps of frame extraction, reducing the time they consume reduces the time required for frame extraction and improves frame extraction efficiency.
Optionally, a device performance level is determined based on the device model, and a frame extraction configuration strategy is determined according to the device performance level; the frame extraction configuration strategy indicates the decoding strategy and/or caching strategy used during frame extraction, and frame extraction is performed on the material video according to this strategy to obtain the plurality of image frames. Considering that devices of different models have different performance levels, devices with better performance can adopt a higher-configuration decoding and/or caching strategy, such as appropriately increasing the decoder concurrency and cache speed, while devices with poorer performance can adopt a lower-configuration strategy, such as appropriately decreasing the decoder concurrency and cache speed. The device performance level of each device model, and the correspondence between performance levels and frame extraction configuration strategies, can be established in advance; the frame extraction configuration strategy suited to the current device can then be obtained by table lookup based on its device model, which improves the compatibility of the present scheme across different devices and allows it to run normally on all of them. The number and division of device performance levels are not limited in this application; there may be, for example, 3 levels (high, medium, and low) or 5 levels.
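A minimal sketch of the table lookup described above (editorial; every device model, level, and configuration value shown is an illustrative assumption):

```python
DEVICE_PERFORMANCE_LEVEL = {  # device model -> performance level (assumed entries)
    "model-x": "high",
    "model-y": "medium",
    "model-z": "low",
}

FRAME_EXTRACTION_CONFIG = {  # performance level -> decoding/caching strategy (assumed values)
    "high":   {"decoder_concurrency": 4, "cache_levels": 3},
    "medium": {"decoder_concurrency": 2, "cache_levels": 2},
    "low":    {"decoder_concurrency": 1, "cache_levels": 1},
}

def frame_extraction_config(device_model):
    level = DEVICE_PERFORMANCE_LEVEL.get(device_model, "low")  # unknown devices fall back to the lowest configuration
    return FRAME_EXTRACTION_CONFIG[level]
```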
At step 420, a quality score for each image frame is determined.
Optionally, the image frame is scored from multiple different dimensions by multiple AI models to obtain multiple single scores of the image frame; the single scores are weighted and summed to obtain an initial quality score of the image frame; and the initial quality score is then adjusted based on a score adjustment strategy to determine the quality score of the image frame. For example, the AI models may score the image frame from dimensions such as highlight, aesthetics, and classification to obtain the single scores, which are then weighted and summed with per-dimension weights into an initial quality score. In one example, the initial quality score may be used directly as the final quality score. In another example, the initial quality score of the image frame is adjusted based on a score adjustment strategy to determine the quality score of the image frame. The purpose of score adjustment is to treat certain special image frames specially, so as to improve the video quality of the target video generated by the subsequent editing.
For example, if a face region exists in the image frame, the initial quality score of the image frame is adjusted upward to determine its quality score. Since image frames containing faces are often the more important frames in a video, their quality scores can be appropriately raised to increase their probability of being selected. Whether a face region exists in an image frame can be determined using a face recognition model obtained through machine learning.
For another example, if the image frame belongs to the scene segmentation image frames, the initial quality score of the image frame is adjusted downward to determine its quality score. Scene segmentation image frames are the image frame at the tail of the previous scene and the image frame at the head of the following scene when one scene switches to another. For example, if the 1st to 10th frames are an indoor scene and the scene switches to an outdoor scene from the 11th frame, the scene segmentation image frames include the 10th frame and the 11th frame. Introducing too many scene segmentation image frames into the target video makes its scene switching too frequent and the picture content look disordered. Therefore, the quality score of scene segmentation image frames is lowered to reduce their probability of being selected as much as possible. The scene corresponding to an image frame can be identified using a scene recognition model obtained through machine learning.
For another example, if the image frame belongs to the start-period image frames or the end-period image frames, the initial quality score of the image frame is adjusted upward to determine its quality score. A start-period image frame is an image frame in the start period of the material video, such as within its first 1 s; an end-period image frame is an image frame in the end period of the material video, such as within its last 1 s. For image frames in sensitive periods such as the beginning and end of the material video, the quality score should be raised appropriately to increase their probability of being selected.
It should be noted that the magnitude of each upward/downward adjustment in the score adjustment strategy may be preset according to the actual situation, which is not limited in the present application.
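A minimal sketch of such a score adjustment strategy (editorial; the adjustment amounts of +0.1 and -0.2 on a 0 to 1 scale are illustrative assumptions only):

```python
def adjust_quality_score(initial_score, has_face, is_scene_cut, in_start_or_end_period):
    score = initial_score
    if has_face:
        score += 0.1  # face frames are usually the important ones: adjust up
    if is_scene_cut:
        score -= 0.2  # avoid over-frequent scene switching: adjust down
    if in_start_or_end_period:
        score += 0.1  # favor the opening/closing period of the material video
    return score
```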
In addition, when the image frames are scored from multiple different dimensions by multiple AI models, the AI models can run in parallel, improving scoring efficiency. Furthermore, the AI models can share the image data of the image frame and, during their respective processing, compress the image as their own requirements dictate, balancing scoring efficiency and accuracy.
In step 430, the image frames are arranged in the time domain and interpolation processing is performed on the quality scores to obtain a quality score curve of the material video.
Optionally, a two-dimensional rectangular coordinate system with time as the horizontal axis and quality score as the vertical axis is constructed; based on the timestamp and quality score of each image frame, the position point corresponding to that image frame is added to the coordinate system; interpolation processing is then performed on the quality scores based on these position points to obtain an interpolated position point sequence; and finally the quality score curve of the material video is generated by fitting based on the interpolated position point sequence.
As shown in FIG. 5, suppose the 1st frame, the 11th frame, the 21st frame, and so on are extracted from the material video. The black points in the figure are the position points marked in the two-dimensional rectangular coordinate system based on the timestamp and quality score of each extracted image frame; the white points are interpolation points obtained by interpolating the quality scores; the position points and interpolation points form the interpolated position point sequence; and the curve 50 finally fitted is the quality score curve of the material video.
In the embodiment of the present application, the interpolation algorithm and the curve fitting algorithm adopted are not limited.
At step 440, video clip parameters are determined according to the video clipping strategy.
Optionally, the video clip parameters comprise at least one of: the number of video segments clipped from the material video, the duration of a single video segment, the total duration of all video segments, and the segment clipping mode.
Optionally, in the embodiment of the present application, several video clipping strategies are provided as follows: an intelligent preference strategy, a fixed duration strategy, a multi-segment fixed duration strategy, and a multi-material corresponding fixed duration strategy.
The intelligent preference strategy means that the total duration of the target video generated by clipping is determined automatically based on the material video. Optionally, when the video clipping strategy is the intelligent preference strategy, the total duration of all video segments clipped from the material video is determined according to the duration of the material video, and the segment clipping mode is determined to be the water level descent method. With the intelligent preference strategy, one or more video segments of better quality can be selected from the material video, and the duration of each video segment is not fixed. Optionally, the total duration of all video segments clipped from the material video is positively correlated with the duration of the material video.
The fixed duration strategy means that the total duration of the target video generated by clipping is a fixed value, which may be custom-set by the user or set by default. Optionally, when the video clipping strategy is the fixed duration strategy, the total duration of all video segments clipped from the material video is determined according to a preset single fixed duration; if the number of material videos is 1, the segment clipping mode is determined to be the water level descent method; if there are multiple material videos, the segment clipping mode is determined to be the sliding window method. That is, the preset single fixed duration is taken as the total duration of all video segments clipped from the material video, i.e., the total duration of the target video. For example, if the user sets the fixed duration to 10 s, either one video segment of duration 10 s is clipped from the material video as the target video, or several video segments whose total duration is 10 s are clipped from the material video and then spliced to generate the target video.
The multi-segment fixed duration strategy means that a plurality of video segments, each of a fixed duration, are captured from the material video and then spliced to generate the target video. The fixed durations of the video segments may be the same or different, and may be custom-set by the user or set by default. Optionally, when the video clipping strategy is the multi-segment fixed duration strategy, the number of video segments to be clipped from the material video and the duration of each segment are determined according to a plurality of preset fixed durations, and the segment clipping mode is determined to be the sliding window method. For example, if 3 fixed durations of 3 s, 2 s, and 3 s are preset, then 3 video segments with durations of 3 s, 2 s, and 3 s are cut from the material video and spliced to generate the target video.
The multi-material corresponding fixed duration strategy means that there are multiple material videos and a video segment of fixed duration is captured from each of them; the fixed durations may be custom-set by the user or set by default by the system. Optionally, when the video clipping strategy is the multi-material corresponding fixed duration strategy, the duration of the video segment clipped from each material video is determined according to the fixed duration set for that material video, and the segment clipping mode is determined to be the sliding window method. For example, given 2 material videos with fixed durations of 5 s for the 1st and 7 s for the 2nd, a 5 s video segment is cut from the 1st material video, a 7 s video segment is cut from the 2nd material video, and the 2 video segments are spliced to obtain the target video.
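A minimal sketch mapping each strategy to clip parameters and a segment clipping mode, consistent with Table 1 below (editorial; the field names and the 0.5 scaling factor for the intelligent preference strategy are assumptions, since the application only states a positive correlation):

```python
def determine_clip_parameters(strategy, material_durations, fixed_durations=None):
    if strategy == "intelligent_preference":
        # total duration positively correlated with material duration (factor assumed)
        return {"total_duration": 0.5 * sum(material_durations), "mode": "water_level_descent"}
    if strategy == "fixed_duration":
        mode = "water_level_descent" if len(material_durations) == 1 else "sliding_window"
        return {"total_duration": fixed_durations[0], "mode": mode}
    if strategy == "multi_segment_fixed":
        return {"segment_durations": fixed_durations, "mode": "sliding_window"}
    if strategy == "multi_material_fixed":  # one fixed duration per material video
        return {"segment_durations": fixed_durations, "mode": "sliding_window"}
    raise ValueError("unknown strategy: " + strategy)
```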
In step 450, video segments meeting the video clip parameter requirements are clipped from the material video based on the quality score curve.
As segment clipping modes for cutting video segments from material video, the present application provides the water level descent method and the sliding window method. The water level descent method is suitable for capturing one or more video segments in one pass, where the total duration of the captured segments is a fixed value. The sliding window method is suitable for capturing one video segment in one pass, where the duration of that single segment is a fixed value.
When a video segment is captured by the water level descent method, a water level line is added to the two-dimensional rectangular coordinate system in which the quality score curve lies; the horizontal axis of the coordinate system is time, the vertical axis is quality score, and the water level line is parallel to the horizontal axis. The water level line is moved along the vertical axis, and when it reaches a first target position such that video segments meeting the video clip parameter requirements exist in the first direction of the water level line, those video segments are cut from the material video, where quality scores in the first direction of the water level line are greater than quality scores in the second direction.
For example, as shown in FIG. 6, the quality score curve 61 may be represented as Y = f(X), with the X axis (horizontal axis) as the time axis and the Y axis (vertical axis) as the quality score axis. Assume the total duration of the material video is T and the target video to be clipped has a fixed duration T0 (T0 < T). By moving the water level line 62 from top to bottom (i.e., along the positive direction of the vertical axis), the curve segments below the water level line 62 are retained; let the total duration corresponding to the retained curve segments be T1. Since the quality score curve 61 is continuous, there is one and only one target position during the movement of the water level line 62 at which the total duration T1 of the video segments below the line equals the duration T0 of the target video. When this condition is satisfied, the video segments below the water level line 62 can be cut out. For example, if T1 = T0 is satisfied when the water level line 62 is at the position shown in FIG. 6, then 2 video segments are cut: one from t1 to t2, and the other from t3 to t4.
It should be noted that if the two-dimensional rectangular coordinate system in which the initially constructed quality score curve lies is as shown in FIG. 5, with the positive direction of the vertical axis pointing up, then there are two possible implementations of the water level descent method. One is to flip the coordinate system (including the quality score curve) vertically, so that the positive direction of the vertical axis points down (as shown in FIG. 6), add the water level line in the flipped coordinate system, and move the line from top to bottom (i.e., along the positive direction of the vertical axis, so that the water level descends) to select and capture the qualifying high-quality video segments below the line. The other is to leave the coordinate system unflipped (or, equivalently, flip the quality scores), keeping the positive direction of the vertical axis pointing up (as shown in FIG. 5), add the water level line, and move it from bottom to top (i.e., along the positive direction of the vertical axis, so that the water level rises) to select and capture the qualifying high-quality video segments above the line.
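A minimal sketch of the water level descent method on a densely sampled quality score curve (editorial): because the retained duration grows monotonically as the water level line descends, the first target position can be found by binary search on the line's height. The sample spacing step_s (e.g. 0.1 s) and the iteration count are assumptions; on a discrete curve the retained duration matches the target only approximately:

```python
def water_level_descent(scores, step_s, target_duration_s, iters=50):
    """Return (start_s, end_s) segments whose total duration is ~target_duration_s."""
    lo, hi = min(scores), max(scores)
    for _ in range(iters):  # binary search on the height of the water level line
        mid = (lo + hi) / 2.0
        kept = sum(step_s for s in scores if s >= mid)
        if kept > target_duration_s:
            lo = mid  # too much retained: raise the line
        else:
            hi = mid  # too little retained: lower the line
    level = (lo + hi) / 2.0
    segments, start = [], None
    for i, s in enumerate(scores):  # collect contiguous runs above the final line
        if s >= level and start is None:
            start = i
        elif s < level and start is not None:
            segments.append((start * step_s, i * step_s))
            start = None
    if start is not None:
        segments.append((start * step_s, len(scores) * step_s))
    return segments
```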
When video segments are captured by the sliding window method, the number n of sliding windows to be added and the duration of each sliding window are determined according to the video clip parameters, where n is a positive integer; the n sliding windows are added to the two-dimensional rectangular coordinate system in which the quality score curve lies, where the horizontal axis is time, the vertical axis is quality score, and the window boundaries of each sliding window are perpendicular to the horizontal axis; each sliding window is moved along the horizontal axis and the integral of the curve segment within it is obtained, with no overlap allowed between any two sliding windows when there are several; and when the n sliding windows are at a second target position such that the sum of the integrals takes its maximum value, the video segments within the n sliding windows are cut from the material video.
For example, as shown in FIG. 7, the quality score curve 71 may be represented as Y = f(X), with the X axis (horizontal axis) as the time axis and the Y axis (vertical axis) as the quality score axis. Assume the total duration of the material video is T and the target video to be clipped has a fixed duration T0 (T0 < T); the left and right window boundaries of the sliding window are indicated by the broken lines in the figure. The sliding window is moved until, with its boundaries at positions t1 and t2, the integral of the curve segment within the window, ∫ f(X) dX over [t1, t2], takes its maximum value; the video segment from t1 to t2 within the sliding window is then cut out.
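A minimal sketch of the sliding window method for a single window, n = 1 (editorial): on a densely sampled curve the integral becomes a windowed sum, which can be updated in O(1) per shift. The multi-window case additionally requires the n windows not to overlap and is not shown here:

```python
def sliding_window_clip(scores, step_s, window_duration_s):
    """Return (t1, t2) maximizing the windowed sum (discrete integral) of the curve."""
    w = int(round(window_duration_s / step_s))  # window length in samples; assumed <= len(scores)
    window_sum = sum(scores[:w])
    best_sum, best_start = window_sum, 0
    for start in range(1, len(scores) - w + 1):
        window_sum += scores[start + w - 1] - scores[start - 1]  # slide right by one sample
        if window_sum > best_sum:
            best_sum, best_start = window_sum, start
    return best_start * step_s, (best_start + w) * step_s
```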
In step 460, the target video is obtained based on the video segments.
If the number of captured video segments is 1, the video segment is directly used as the target video. If the number of captured video segments is greater than 1, the video segments are spliced to obtain the target video.
The technical solution of the present application is summarized below with reference to FIG. 8. After a material video is obtained, a plurality of image frames are extracted from it by a frame extraction tool; the quality score of each image frame is determined by an AI scoring component; interpolation based on these quality scores yields the quality score curve of the material video; a clipping algorithm is then run based on the quality score curve and the video clipping strategy; and the target video clipped from the material video is returned as the result.
In addition, the present application provides a plurality of video clipping strategies and their corresponding segment clipping modes, as shown in Table 1 below:
TABLE 1
Serial number | Video clipping strategy                              | Segment clipping mode
1             | Intelligent preference strategy                      | Water level descent method
2             | Fixed duration strategy                              | Water level descent method or sliding window method
3             | Multi-segment fixed duration strategy                | Sliding window method
4             | Multi-material corresponding fixed duration strategy | Sliding window method
In summary, the technical solutions provided in the embodiments of the present application offer a plurality of different video clipping strategies, each of which performs clipping based on the quality score curve of the material video. The scoring result can therefore be reused many times: when the user changes the video clipping strategy, the material video does not need to be rescored, and the previously stored quality score curve is simply reused. This decoupled design of the quality scoring and clipping algorithms means that, even when the user changes the video clipping strategy, a target segment meeting the requirements of the new strategy can be generated quickly.
In addition, in the video clipping scheme provided by the present application, the clipping effect depends on the scoring capability of the AI model and on the clipping algorithm, so the clipping effect is more stable and controllable than manual clipping, and better clipping effects can be achieved simply by continuing to update the AI model and the clipping algorithm as the scheme is iteratively optimized.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to FIG. 9, a block diagram of a video clipping apparatus according to an embodiment of the present application is shown. The apparatus has functions for implementing the above method embodiments; the functions may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be a computer device, or may be provided in a computer device. The apparatus 900 may include: a video frame extraction module 910, a quality scoring module 920, a curve construction module 930, and a video clipping module 940.
A video frame extraction module 910, configured to extract a plurality of image frames from the material video.
A quality scoring module 920, configured to determine the quality score of each image frame.
A curve construction module 930, configured to arrange the image frames in the time domain and perform interpolation processing on the quality scores to obtain a quality score curve of the material video.
A video clipping module 940, configured to clip a target video from the material video based on the quality score curve and a video clipping strategy.
In an exemplary embodiment, as shown in FIG. 10, the video clipping module 940 includes: a parameter determining unit 942, a segment capturing unit 944, and a result generating unit 946.
A parameter determining unit 942, configured to determine video clip parameters according to the video clipping strategy, the video clip parameters comprising at least one of: the number of video segments clipped from the material video, the duration of a single video segment, the total duration of all video segments, and the segment clipping mode.
A segment capturing unit 944, configured to clip video segments meeting the video clip parameter requirements from the material video based on the quality score curve.
A result generating unit 946, configured to obtain the target video based on the video segment.
Optionally, the segment capturing unit 944 is configured to:
add, in the case of capturing the video segment by the water level descent method, a water level line to the two-dimensional rectangular coordinate system in which the quality score curve lies, where the horizontal axis of the coordinate system is time, the vertical axis is quality score, and the water level line is parallel to the horizontal axis;
move the water level line along the vertical axis; and
capture the video segment from the material video when the water level line moves to a first target position such that a video segment meeting the video clip parameter requirements exists in a first direction of the water level line;
wherein quality scores in the first direction of the water level line are greater than quality scores in the second direction of the water level line.
Optionally, the segment capturing unit 944 is configured to:
determine, in the case of capturing the video segments by the sliding window method, the number n of sliding windows to be added and the duration of each sliding window according to the video clip parameters, where n is a positive integer;
add the n sliding windows to the two-dimensional rectangular coordinate system in which the quality score curve lies, where the horizontal axis of the coordinate system is time, the vertical axis is quality score, and the window boundaries of each sliding window are perpendicular to the horizontal axis;
move each sliding window along the horizontal axis and obtain the integral of the curve segment within it, with no overlapping area between any two sliding windows when there are several; and
cut out the video segments within the n sliding windows from the material video when the n sliding windows are at a second target position such that the sum of the integrals takes its maximum value.
Optionally, the parameter determining unit 942 is configured to:
determine, in the case that the video clipping strategy is the intelligent preference strategy, the total duration of all video segments clipped from the material video according to the duration of the material video; and
determine that the segment clipping mode is the water level descent method.
Optionally, the parameter determining unit 942 is configured to:
determine, in the case that the video clipping strategy is the fixed duration strategy, the total duration of all video segments clipped from the material video according to a preset single fixed duration; and
determine that the segment clipping mode is the water level descent method if the number of material videos is 1, or the sliding window method if there are multiple material videos.
Optionally, the parameter determining unit 942 is configured to:
determine, in the case that the video clipping strategy is the multi-segment fixed duration strategy, the number of video segments clipped from the material video and the duration of each single video segment according to a plurality of preset fixed durations; and
determine that the segment clipping mode is the sliding window method.
Optionally, the parameter determining unit 942 is configured to:
determine, in the case that the video clipping strategy is the multi-material corresponding fixed duration strategy, the duration of the video segment clipped from each material video according to the fixed durations respectively set for the plurality of material videos; and
determine that the segment clipping mode is the sliding window method.
In an exemplary embodiment, as shown in FIG. 10, the quality scoring module 920 includes: a single scoring unit 922, a score summation unit 924, and a score adjustment unit 926.
A single scoring unit 922, configured to score the image frame from multiple different dimensions through multiple artificial intelligence (AI) models to obtain multiple single scores of the image frame.
A score summation unit 924, configured to perform weighted summation on the multiple single scores of the image frame to obtain an initial quality score of the image frame.
A score adjusting unit 926, configured to adjust an initial quality score of the image frame based on a score adjusting policy, and determine a quality score of the image frame.
Optionally, the score adjusting unit 926 is configured to:
adjust the initial quality score of the image frame upward and determine the quality score of the image frame if a face region exists in the image frame;
or adjust the initial quality score of the image frame downward and determine the quality score of the image frame if the image frame belongs to the scene segmentation image frames;
or adjust the initial quality score of the image frame upward and determine the quality score of the image frame if the image frame belongs to the start-period image frames or the end-period image frames.
In an exemplary embodiment, the curve construction module 930 is configured to:
constructing a two-dimensional rectangular coordinate system with time as the horizontal axis and quality score as the vertical axis;
adding position points corresponding to the image frames in the two-dimensional rectangular coordinate system based on the time stamps and the quality scores of the image frames;
performing interpolation processing on the quality scores based on the position points to obtain a position point sequence after interpolation;
and fitting and generating the quality score curve of the material video based on the interpolated position point sequence.
In an exemplary embodiment, the video frame extraction module 910 is configured to:
determining a device performance level based on the device model;
determining a frame extraction configuration strategy according to the equipment performance level, wherein the frame extraction configuration strategy is used for indicating a decoding strategy and/or a caching strategy in the frame extraction process;
and performing frame extraction processing on the material video according to the frame extraction configuration strategy to obtain the plurality of image frames.
In summary, according to the technical scheme provided by the embodiment of the application, the image frames in the material video are quality-scored to generate a quality score curve of the material video, and a target video is then clipped from the material video based on the quality score curve and a video clipping strategy, so that high-quality video segments are automatically captured from the material video, improving both the efficiency and the quality of video clipping.
In addition, when the quality score curve of the material video is generated, on one hand, only some of the image frames in the material video need to be quality-scored, selected by frame extraction, rather than every image frame, which reduces the amount of calculation and hence the time required for video editing; on the other hand, the extracted image frames are arranged in the time domain and the quality scores are interpolated to obtain the quality score curve of the material video, so that the quality variation across all image frames of the material video can be accurately depicted, improving the quality of the clipped target video.
It should be noted that, when the apparatus provided in the foregoing embodiments implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and method embodiments provided above belong to the same concept; for details of their specific implementation, refer to the method embodiments, which are not repeated here.
Referring to FIG. 11, a block diagram of a computer device 1100 provided in an embodiment of the present application is shown. The computer device 1100 may be a terminal such as a mobile phone, a tablet computer, a smart television, a multimedia player, or a PC, or may be a server. The computer device 1100 may be used to implement the video clipping method described above.
Generally, the computer device 1100 includes: a processor 1101 and a memory 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1102 is used to store a computer program configured to be executed by one or more processors to implement the video clip method described above.
In some embodiments, the computer device 1100 may also optionally include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device may include: at least one of a display 1104, audio circuitry 1105, a communications interface 1106, and a power supply 1107.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 11 does not constitute a limitation of the computer device 1100, and may include more or fewer components than those illustrated, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction, at least one program, code set or set of instructions which, when executed by a processor of a computer device, implements the above-described video clipping method.
Alternatively, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the video clipping method described above.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. In addition, the step numbers used herein only exemplarily show one possible execution order of the steps; in some other embodiments, the steps may be executed out of the numbered order, for example, two differently numbered steps may be executed simultaneously, or in an order opposite to that shown in the figures, which is not limited by the embodiments of the present application.
The above description is only exemplary of the present application and is not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method of video clipping, the method comprising:
extracting a plurality of image frames from a material video;
determining a quality score for each of the image frames;
arranging the image frames in a time domain, and performing interpolation processing on the quality scores to obtain a quality score curve of the material video;
and clipping a target video from the material video based on the quality score curve and a video clipping strategy.
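
By way of illustration, the four steps of claim 1 form a scoring-and-selection pipeline. The sketch below assumes uniformly spaced frames, a placeholder per-frame scorer, and a single clipped segment; every function and parameter name is illustrative, not taken from the patent:

```python
import numpy as np

def score_frame(frame) -> float:
    # Placeholder scorer; a real system would apply AI models here (see claim 9).
    return float(np.mean(frame))

def quality_curve(frames, fps, upsample=10):
    # Steps 1-3 of claim 1: score each extracted frame, arrange the scores
    # over time, and interpolate them into a dense quality score curve.
    t = np.arange(len(frames)) / fps
    scores = np.array([score_frame(f) for f in frames])
    dense_t = np.linspace(t[0], t[-1], upsample * len(t))
    return dense_t, np.interp(dense_t, t, scores)

def clip_best_segment(frames, fps, segment_sec=2.0):
    # Step 4, simplified: clip the single highest-scoring span of the curve.
    dense_t, curve = quality_curve(frames, fps)
    win = int(segment_sec / (dense_t[1] - dense_t[0]))
    sums = np.convolve(curve, np.ones(win), mode="valid")
    start = int(np.argmax(sums))
    return dense_t[start], dense_t[start + win - 1]

# Usage: 90 synthetic "frames" at 30 fps -> (start, end) of the best 2-second span.
frames = [np.random.rand(8, 8) for _ in range(90)]
print(clip_best_segment(frames, fps=30.0))
```
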
2. The method of claim 1, wherein the clipping a target video from the material video based on the quality score curve and a video clipping strategy comprises:
determining video clipping parameters according to the video clipping strategy, the video clipping parameters comprising at least one of: the number of video segments clipped from the material video, the duration of a single video segment, the total duration of all the video segments, and a segment clipping mode;
clipping, from the material video, video segments that meet the video clipping parameter requirements based on the quality score curve;
and obtaining the target video based on the video segments.
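
The video clipping parameters enumerated in claim 2 map naturally onto a small record; a sketch with assumed field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClipParams:
    # Video clipping parameters per claim 2; any field may be left unset.
    segment_count: Optional[int] = None       # number of clipped video segments
    segment_duration: Optional[float] = None  # duration of a single segment (s)
    total_duration: Optional[float] = None    # total duration of all segments (s)
    clip_mode: str = "water_level"            # "water_level" or "sliding_window"
```
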
3. The method of claim 2, wherein the clipping, from the material video, video segments that meet the video clipping parameter requirements based on the quality score curve comprises:
in the case of intercepting the video segments by using a water level descent method, adding a water level line in the two-dimensional rectangular coordinate system in which the quality score curve is located, wherein the horizontal axis of the two-dimensional rectangular coordinate system is time, the vertical axis is the quality score, and the water level line is parallel to the horizontal axis;
moving the water level line along the vertical axis;
and when the water level line moves to a first target position such that a video segment meeting the video clipping parameter requirements exists in a first direction of the water level line, intercepting the video segment from the material video;
wherein the quality scores in the first direction of the water level line are greater than the quality scores in the second direction of the water level line.
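
A minimal sketch of the water level descent method of claim 3, assuming a densely sampled quality score curve and reading "first direction" as the region above the water level line; the minimum segment length, total-duration cap, and step size are illustrative parameters:

```python
import numpy as np

def water_level_clip(t, curve, min_len, max_total, step=0.01):
    # Lower a horizontal water level line until the runs of the curve above
    # it form segments satisfying the clipping parameter requirements.
    level = curve.max()
    while level >= curve.min():
        above = curve >= level                        # the "first direction"
        edges = np.flatnonzero(np.diff(above.astype(int)))
        bounds = np.concatenate(([0], edges + 1, [len(t)]))
        segs = [(t[a], t[b - 1]) for a, b in zip(bounds[:-1], bounds[1:])
                if above[a] and t[b - 1] - t[a] >= min_len]
        total = sum(e - s for s, e in segs)
        if segs and total <= max_total:               # first target position
            return segs
        level -= step                                 # keep descending
    return []

t = np.linspace(0, 10, 1000)
curve = np.sin(t) + 0.1 * np.cos(7 * t)
print(water_level_clip(t, curve, min_len=1.0, max_total=4.0))
```
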
4. The method of claim 2, wherein the clipping, from the material video, video segments that meet the video clipping parameter requirements based on the quality score curve comprises:
in the case of intercepting the video segments by using a sliding window method, determining, according to the video clipping parameters, the number n of sliding windows to be added and the duration of each sliding window, where n is a positive integer;
adding the n sliding windows into the two-dimensional rectangular coordinate system in which the quality score curve is located, wherein the horizontal axis of the two-dimensional rectangular coordinate system is time, the vertical axis is the quality score, and the window boundaries of the sliding windows are perpendicular to the horizontal axis;
moving the sliding windows along the horizontal axis and obtaining the integral of the curve segment located within each sliding window, wherein, in the case that there are multiple sliding windows, no overlapping region exists between any two sliding windows;
and when the n sliding windows are located at second target positions such that the sum of the integrals reaches a maximum value, intercepting the video segments within the n sliding windows from the material video.
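
Claim 4 can be read as maximizing the summed integrals over n non-overlapping windows. A sketch that scores every candidate window start by its integral and uses dynamic programming to choose the starts; the discretization and all names are assumptions:

```python
import numpy as np

def sliding_window_clip(t, curve, window_sec, n):
    # Place n equal, non-overlapping windows so that the sum of the curve's
    # integrals inside them is maximal (the "second target position").
    dt = t[1] - t[0]
    w = int(round(window_sec / dt))                  # window length in samples
    window_sum = np.convolve(curve, np.ones(w), mode="valid") * dt
    m = len(window_sum)                              # candidate start indices
    assert m >= n * w, "material video too short for n windows"
    # best[k][i]: maximal total using k windows that all start at index >= i.
    best = np.full((n + 1, m + w), -np.inf)
    best[0, :] = 0.0
    for k in range(1, n + 1):
        for i in range(m - 1, -1, -1):
            best[k][i] = max(best[k][i + 1], window_sum[i] + best[k - 1][i + w])
    starts, i, k = [], 0, n                          # recover window positions
    while k > 0:
        if best[k][i] > best[k][i + 1]:              # taking window i is optimal
            starts.append(t[i]); i += w; k -= 1
        else:
            i += 1
    return [(s, s + window_sec) for s in starts]

t = np.linspace(0, 10, 1000)
print(sliding_window_clip(t, np.sin(t) ** 2, window_sec=1.5, n=2))
```
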
5. The method of claim 2, wherein the determining video clipping parameters according to the video clipping strategy comprises:
in the case that the video clipping strategy is an intelligent optimization strategy, determining, according to the duration of the material video, the total duration of all the video segments clipped from the material video;
and determining that the segment clipping mode is the water level descent method.
6. The method of claim 2, wherein the determining video clipping parameters according to the video clipping strategy comprises:
in the case that the video clipping strategy is a fixed duration strategy, determining, according to a preset single fixed duration, the total duration of all the video segments clipped from the material video;
and if the number of material videos is 1, determining that the segment clipping mode is the water level descent method; or, if the number of material videos is greater than 1, determining that the segment clipping mode is the sliding window method.
7. The method of claim 2, wherein the determining video clipping parameters according to the video clipping strategy comprises:
in the case that the video clipping strategy is a multi-segment fixed duration strategy, determining, according to a plurality of preset fixed durations, the number of video segments clipped from the material video and the duration of each single video segment;
and determining that the segment clipping mode is the sliding window method.
8. The method of claim 2, wherein the determining video clipping parameters according to the video clipping strategy comprises:
in the case that the video clipping strategy is a strategy in which a fixed duration is set for each of multiple material videos, determining, according to the fixed durations respectively set for the plurality of material videos, the duration of the video segment clipped from each material video;
and determining that the segment clipping mode is the sliding window method.
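
Claims 5 to 8 together amount to a dispatch from the selected video clipping strategy to concrete clipping parameters. A condensed sketch of that mapping, in which the strategy keys, the 0.3 total-duration ratio in the first branch, and the return shapes are all assumptions:

```python
def clip_params_for(strategy, material_durations, presets=None):
    # Map a video clipping strategy to clipping parameters (claims 5-8).
    if strategy == "smart":                 # claim 5: intelligent optimization
        return {"total": 0.3 * sum(material_durations), "mode": "water_level"}
    if strategy == "fixed":                 # claim 6: single fixed duration
        mode = "water_level" if len(material_durations) == 1 else "sliding_window"
        return {"total": presets, "mode": mode}
    if strategy == "multi_fixed":           # claim 7: several fixed durations
        return {"count": len(presets), "durations": list(presets),
                "mode": "sliding_window"}
    if strategy == "per_material":          # claim 8: one duration per material
        return {"durations": dict(enumerate(presets)), "mode": "sliding_window"}
    raise ValueError(f"unknown strategy: {strategy}")

print(clip_params_for("fixed", [60.0], presets=15.0))
```
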
9. The method of claim 1, wherein the determining a quality score for each of the image frames comprises:
scoring the image frame from a plurality of different dimensions through a plurality of artificial intelligence (AI) models to obtain a plurality of single-item scores of the image frame;
performing weighted summation processing on the plurality of single-item scores of the image frame to obtain an initial quality score of the image frame;
and adjusting the initial quality score of the image frame based on a score adjustment strategy to determine the quality score of the image frame.
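
The scoring of claim 9 is a weighted sum over per-dimension model outputs, followed by an adjustment pass (claim 10). A sketch in which the dimensions and weights are made up:

```python
def initial_quality_score(single_scores, weights):
    # Weighted summation of the single-item scores from multiple AI models.
    return sum(weights[dim] * s for dim, s in single_scores.items())

weights = {"sharpness": 0.4, "aesthetics": 0.4, "exposure": 0.2}
single_scores = {"sharpness": 0.9, "aesthetics": 0.6, "exposure": 0.8}
print(initial_quality_score(single_scores, weights))  # 0.76
```
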
10. The method of claim 9, wherein the adjusting the initial quality score of the image frame based on a score adjustment strategy to determine the quality score of the image frame comprises:
if the image frame has a face region, adjusting the initial quality score of the image frame upwards, and determining the quality score of the image frame;
or,
if the image frame belongs to a scene-segmentation image frame, adjusting the initial quality score of the image frame downwards, and determining the quality score of the image frame;
or,
if the image frame belongs to an image frame in the beginning period or an image frame in the ending period, adjusting the initial quality score of the image frame upwards, and determining the quality score of the image frame.
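
The three branches above are stated as alternatives; a sketch that applies whichever conditions hold for a frame, with the ±0.1 offsets being assumed values rather than taken from the patent:

```python
def adjust_score(initial, has_face, is_scene_cut, is_head_or_tail):
    # Score adjustment strategy of claim 10 with illustrative offsets.
    score = initial
    if has_face:
        score += 0.1        # face region present: adjust upwards
    if is_scene_cut:
        score -= 0.1        # scene-segmentation frame: adjust downwards
    if is_head_or_tail:
        score += 0.1        # beginning or ending period: adjust upwards
    return score

print(adjust_score(0.76, has_face=True, is_scene_cut=False, is_head_or_tail=False))
```
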
11. The method of claim 1, wherein the arranging the image frames in a time domain, and performing interpolation processing on the quality scores to obtain a quality score curve of the material video comprises:
constructing a two-dimensional rectangular coordinate system with time as the horizontal axis and the quality score as the vertical axis;
adding, in the two-dimensional rectangular coordinate system, position points corresponding to the image frames based on the timestamps and the quality scores of the image frames;
performing interpolation processing on the quality scores based on the position points to obtain an interpolated sequence of position points;
and fitting and generating the quality score curve of the material video based on the interpolated sequence of position points.
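
Claim 11 is ordinary one-dimensional interpolation over (timestamp, quality score) position points. A sketch using a cubic spline from SciPy; the claim does not fix the interpolation kind, so the spline choice and the sample values are assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Position points: (timestamp, quality score) of the sampled image frames.
timestamps = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
scores = np.array([0.62, 0.71, 0.55, 0.80, 0.74])

spline = CubicSpline(timestamps, scores)   # fit the quality score curve
dense_t = np.linspace(0.0, 2.0, 201)       # interpolated position points
print(spline(dense_t)[:3])
```
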
12. The method of claim 1, wherein the extracting the plurality of image frames from the material video comprises:
determining a device performance level based on the device model;
determining a frame extraction configuration strategy according to the device performance level, wherein the frame extraction configuration strategy is used to indicate a decoding strategy and/or a caching strategy in the frame extraction process;
and performing frame extraction processing on the material video according to the frame extraction configuration strategy to obtain the plurality of image frames.
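
Claim 12 keys the frame extraction configuration off a device performance level. A sketch of such a lookup, where the model-to-tier table and the decoding/caching settings are assumptions:

```python
def frame_extraction_config(device_model):
    # Derive a performance level from the device model, then choose
    # decoding and caching strategies for frame extraction (claim 12).
    high_end = {"Phone-X2", "Tablet-Pro"}            # hypothetical model names
    level = "high" if device_model in high_end else "low"
    if level == "high":
        return {"decoder": "hardware", "parallel_decoders": 2, "cache_frames": 64}
    return {"decoder": "software", "parallel_decoders": 1, "cache_frames": 16}

print(frame_extraction_config("Phone-X2"))
```
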
13. A video clipping apparatus, characterized in that the apparatus comprises:
the video frame extracting module is used for extracting a plurality of image frames from the material video;
a quality scoring module for determining a quality score for each of the image frames;
the curve construction module is used for arranging the image frames in a time domain and performing interpolation processing on the quality scores to obtain a quality score curve of the material video;
and the video clipping module is used for clipping a target video from the material video based on the quality score curve and a video clipping strategy.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the video clipping method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video clipping method according to any one of claims 1 to 12.
CN202110345929.8A 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium Active CN113709560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110345929.8A CN113709560B (en) 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113709560A true CN113709560A (en) 2021-11-26
CN113709560B CN113709560B (en) 2024-01-02

Family

ID=78647896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110345929.8A Active CN113709560B (en) 2021-03-31 2021-03-31 Video editing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113709560B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022127877A1 (en) * 2020-12-16 2022-06-23 影石创新科技股份有限公司 Video editing method and system, electronic device, and storage medium
CN115379290A (en) * 2022-08-22 2022-11-22 上海商汤智能科技有限公司 Video processing method, device, equipment and storage medium

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE708759A (en) * 1966-12-30 1968-07-01
WO2003001788A2 (en) * 2001-06-25 2003-01-03 Redhawk Vision Inc. Video event capture, storage and processing method and apparatus
CA2438479A1 (en) * 2002-09-13 2004-03-13 Ge Medical Systems Global Technology Company, Llc Computer assisted analysis of tomographic mammography data
US20050219362A1 (en) * 2004-03-30 2005-10-06 Cernium, Inc. Quality analysis in imaging
CN101685532A * 2008-09-24 2010-03-31 Institute of Automation, Chinese Academy of Sciences Method for correcting simple linear wide-angle lens
CA2795179A1 (en) * 2010-04-08 2011-10-13 General Electric Company Image quality assessment including comparison of overlapped margins
CN102663387A * 2012-04-16 2012-09-12 Nanjing University Cortical bone width automatic calculating method on basis of dental panorama
CN102999901A * 2012-10-17 2013-03-27 Institute of Computing Technology, Chinese Academy of Sciences Method and system for processing split online video on the basis of depth sensor
CN103620639A * 2011-04-29 2014-03-05 Frequency Ip LLC Multiple-carousel selective digital service feeds
US20160092561A1 (en) * 2014-09-30 2016-03-31 Apple Inc. Video analysis techniques for improved editing, navigation, and summarization
US20170180746A1 (en) * 2015-12-22 2017-06-22 Le Holdings (Beijing) Co., Ltd. Video transcoding method and electronic apparatus
US20180173959A1 (en) * 2016-12-16 2018-06-21 Adobe Systems Incorporated Extracting High Quality Images from a Video
CN108259893A * 2018-03-22 2018-07-06 Tianjin University Virtual reality method for evaluating video quality based on double-current convolutional neural networks
CN108476289A * 2017-07-31 2018-08-31 SZ DJI Technology Co., Ltd. A kind of method for processing video frequency, equipment, aircraft and system
WO2019042341A1 * 2017-09-04 2019-03-07 Youku Network Technology (Beijing) Co., Ltd. Video editing method and device
WO2019057198A1 * 2017-09-25 2019-03-28 Beijing Dajia Internet Information Technology Co., Ltd. Video recording method and device
CN110087123A * 2019-05-15 2019-08-02 Tencent Technology Shenzhen Co Ltd Video file production method, device, equipment and readable storage medium storing program for executing
CN110087143A * 2019-04-26 2019-08-02 Beijing Qianren Technology Co., Ltd. Method for processing video frequency and device, electronic equipment and computer readable storage medium
CN110572722A * 2019-09-26 2019-12-13 Tencent Technology Shenzhen Co Ltd Video clipping method, device, equipment and readable storage medium
CN110868631A * 2018-08-28 2020-03-06 Tencent Technology Shenzhen Co Ltd Video editing method, device, terminal and storage medium
CN110996169A * 2019-07-12 2020-04-10 Beijing Dajia Internet Information Technology Co., Ltd. Method, device, electronic equipment and computer-readable storage medium for clipping video
CN111127341A * 2019-12-05 2020-05-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, and storage medium
CN111754493A * 2020-06-28 2020-10-09 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for evaluating image noise intensity, electronic equipment and storage medium
CN111866585A * 2020-06-22 2020-10-30 Beijing Meishe Network Technology Co., Ltd. Video processing method and device
CN112308786A * 2019-08-01 2021-02-02 Academy of Forensic Science Method for resolving target vehicle motion in vehicle-mounted video based on photogrammetry
CN112532897A * 2020-11-25 2021-03-19 Tencent Technology Shenzhen Co Ltd Video clipping method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN113709560B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
US11321385B2 (en) Visualization of image themes based on image content
CN112235520B (en) Image processing method and device, electronic equipment and storage medium
CN106713988A (en) Beautifying method and system for virtual scene live
US10354394B2 (en) Dynamic adjustment of frame rate conversion settings
CN106162223A (en) A kind of news video cutting method and device
CN113709560B (en) Video editing method, device, equipment and storage medium
US11409794B2 (en) Image deformation control method and device and hardware device
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
CN109445941B (en) Method, device, terminal and storage medium for configuring processor performance
CN110969572B (en) Face changing model training method, face exchange device and electronic equipment
CN112102422B (en) Image processing method and device
CN116033189B (en) Live broadcast interactive video partition intelligent control method and system based on cloud edge cooperation
WO2023231235A1 (en) Method and apparatus for editing dynamic image, and electronic device
WO2023151424A1 (en) Method and apparatus for adjusting playback rate of audio picture of video
CN114095744A (en) Video live broadcast method and device, electronic equipment and readable storage medium
CN113038222A (en) Video processing method and device, electronic equipment and storage medium
US20190371039A1 (en) Method and smart terminal for switching expression of smart terminal
US20230343017A1 (en) Virtual viewport generation method and apparatus, rendering and decoding methods and apparatuses, device and storage medium
CN111161685B (en) Virtual reality display equipment and control method thereof
CN108769825B (en) Method and device for realizing live broadcast
CN116681613A (en) Illumination-imitating enhancement method, device, medium and equipment for face key point detection
CN108416830B (en) Animation display control method, device, equipment and storage medium
CN115222858A (en) Method and equipment for training animation reconstruction network and image reconstruction and video reconstruction thereof
CN112860941A (en) Cover recommendation method, device, equipment and medium
CN114299415A (en) Video segmentation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant