CN117459665B - Video editing method, system and storage medium - Google Patents
- Publication number
- CN117459665B (application CN202311386913.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- determining
- scenes
- target advertisement
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention provides a video editing method, system and storage medium in the technical field of video processing, specifically comprising the following steps: dividing the target advertisement video based on the dividing video frames to obtain the number of video scenes of the target advertisement video and the durations of the different video scenes; determining the video complexity of the target advertisement video in combination with its duration; determining the scene complexity of the different video scenes according to the number of video frames and their similarity; determining, in combination with the video complexity, the number of video frames of the head clip video of the target advertisement video; determining the matching degree of the different video scenes based on the theme and background dubbing of the target advertisement video; and, taking the number of video frames as a constraint condition, generating the head clip video in combination with the scene complexity, the matching degree and the background dubbing recognition results of the different video scenes, thereby further improving the accuracy of video editing.
Description
Technical Field
The invention belongs to the technical field of video processing, and particularly relates to a video editing method, a video editing system and a storage medium.
Background
A video advertisement can intuitively present the relevant product to users, but the limited duration of such advertisements means that the core selling points of the product must usually be shown within a set time. How to automatically generate a highlight head clip from the recognition results of the advertisement has therefore become a technical problem to be solved.
To address this, the prior-art invention patent CN202211667885.1, "Method, device, equipment and storage medium for generating advertisement titles", generates advertisement head clips automatically by combining key segments with background dubbing, which improves the processing efficiency of video clipping, but it has the following technical problems:
When generating the advertisement head clip, the prior art ignores differentiating the number of key segments according to the number of scenes in the advertisement video, the duration of the advertisement video and the clipping duration. Specifically, because advertisement videos differ in scene count and duration, their complexity also differs to a certain extent, and a fully automatic clipping mode inevitably suffers from poor matching; if the number of key segments cannot be adjusted dynamically, there is no room for further manual adjustment.
In view of this, the invention provides a video editing method, a video editing system and a storage medium.
Disclosure of Invention
In order to achieve the purpose of the invention, the invention adopts the following technical scheme:
According to one aspect of the present invention, a video editing method is provided.
A video editing method, comprising:
S1, evaluating the similarity between each video frame of a target advertisement video and its adjacent video frames through the detection results of the video frames, to obtain a similarity evaluation result for the video frame and a dissimilar video frame; extracting a preset number of adjacent video frames after the dissimilar video frame as comparison video frames; and determining the division accuracy of the video frame and the dividing video frames according to the similarity data of the comparison video frames and the similarity between the dissimilar video frame and the video frame;
S2, dividing the target advertisement video based on the dividing video frames to obtain the number of video scenes of the target advertisement video and the durations of the different video scenes, and determining the video complexity of the target advertisement video in combination with the duration of the target advertisement video;
S3, determining the scene complexity of the different video scenes according to the number of video frames and their similarity, and determining the number of video frames of the head clip video of the target advertisement video in combination with the video complexity;
S4, determining the matching degree of the different video scenes based on the theme and the background dubbing of the target advertisement video, and, taking the number of video frames as a constraint condition, generating the head clip video in combination with the scene complexity, the matching degree and the background dubbing recognition results of the different video scenes.
The invention has the beneficial effects that:
1. The division accuracy of a video frame and the dividing video frames are determined according to the similarity data of the comparison video frames and the similarity between the dissimilar video frame and the video frame. This locates the dividing video frames through similarity while also accounting for the differences in reliability of the dividing video frames caused by differences in the similarity between the video frame and the comparison video frames and in the number of comparison video frames whose similarity does not meet the requirement, thereby accurately dividing the target advertisement video into its different video scenes.
2. The number of video frames of the head clip video of the target advertisement video is determined by combining scene complexity and video complexity. This accounts both for the similarity and duration differences among the video frames of different video scenes of different target advertisement videos and for the complexity differences caused by the number and durations of the video scenes of the target advertisement video, laying a foundation for dynamically and accurately evaluating the number of video frames of the head clip video and ensuring clipping efficiency while ensuring the reliability of the clipping result.
3. The head clip video is generated by combining the scene complexity, the matching degree and the background dubbing recognition results of the different video scenes. This accounts for the differences in the number of usable video frames across scenes caused by their differing scene complexity, as well as for how well each scene matches the theme of the target advertisement video, thereby producing the head clip accurately and clipping the different video scenes in a differentiated way.
In a further technical scheme, the adjacent video frames of a video frame are the preset number of video frames before and after that video frame.
In a further technical scheme, when the similarity between an adjacent video frame and the video frame does not meet the requirement, the adjacent video frame is determined to be a dissimilar video frame.
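As an illustration only, the neighbor-window and dissimilarity test above can be sketched as follows. The patent does not fix the similarity metric, the window size or the threshold, so a toy grey-histogram overlap measure and hypothetical parameters (`k`, `threshold`) are assumed here.

```python
# Sketch of the neighbor-window dissimilarity test. The similarity
# metric and thresholds are illustrative assumptions, not disclosed
# by the patent.

def neighbors(frames, i, k):
    """Return the up-to-k frames before and after frame i (the preset window)."""
    before = frames[max(0, i - k):i]
    after = frames[i + 1:i + 1 + k]
    return before + after

def histogram_similarity(a, b):
    """Toy similarity on per-frame grey histograms (assumed metric)."""
    overlap = sum(min(x, y) for x, y in zip(a, b))
    total = sum(a) or 1
    return overlap / total

def dissimilar_neighbors(frames, i, k=2, threshold=0.8):
    """Adjacent frames whose similarity to frame i falls below the requirement."""
    return [n for n in neighbors(frames, i, k)
            if histogram_similarity(frames[i], n) < threshold]
```

Frames are represented here simply as histograms; any per-frame feature with a similarity in [0, 1] would slot into the same structure.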
In a further technical scheme, the method for determining the matching degree of a video scene comprises:
extracting keywords from the theme of the target advertisement video, and expanding them against a preset keyword library to obtain matching keywords;
extracting the background text of each video scene from its background dubbing, and determining, from the matching data between the background texts and the matching keywords, which types of matching keywords are matched and how many times each type is matched;
and determining weight values for the different types of matching keywords based on their types and the theme of the target advertisement video, and determining the matching degree of the video scene in combination with the matched keyword types and the per-type matching counts.
In a further technical scheme, generating the head clip video in combination with the scene complexity, the matching degree and the background dubbing recognition results of the different video scenes specifically comprises:
determining video-frame clipping priority values for the different scenes from their scene complexity and matching degree; determining the number of video frames to clip from each scene from these priority values and the total number of video frames of the head clip video; determining the clipped video frames of each scene from the recognition results of its background dubbing; and generating the head clip video from the clipped video frames of the different scenes.
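A minimal sketch of this priority-based allocation, assuming a simple weighted sum for the priority value and a proportional split of the frame budget; the weights `w_c` and `w_m` and the rounding rule are illustrative assumptions, not disclosed by the patent.

```python
# Sketch of the per-scene frame allocation: priority from scene
# complexity and matching degree, then proportional frame budgets.
# Weights and rounding are assumptions.

def clip_priority(complexity, matching, w_c=0.4, w_m=0.6):
    """Illustrative weighted priority; the patent does not fix the weights."""
    return w_c * complexity + w_m * matching

def allocate_frames(scenes, total_frames):
    """Split the head-clip frame budget across scenes by priority share.

    scenes -- {scene_id: (scene_complexity, matching_degree)}
    """
    priorities = {s: clip_priority(c, m) for s, (c, m) in scenes.items()}
    total_p = sum(priorities.values()) or 1
    return {s: round(total_frames * p / total_p)
            for s, p in priorities.items()}
```

Within each scene's budget, the specific frames would then be picked from the background-dubbing recognition results, which this sketch leaves abstract.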
In another aspect, the present invention provides a computer system comprising a communicatively coupled memory and processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when running the computer program, performs the video editing method described above.
In another aspect, the present invention provides a computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a video editing method as described above.
Additional features and advantages will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 is a flow chart of a video editing method;
FIG. 2 is a flow chart of a method of determining scene complexity;
FIG. 3 is a block diagram of a computer system.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present disclosure.
The applicant found that when automatically clipping the head video of an advertisement video, prior-art solutions typically generate it by clipping highlight video frames of the different video scenes, but they ignore the differences in the number of highlight frames across video backgrounds caused by differences in video-frame similarity, as well as the differences in the number of frames that need to be clipped caused by the differing durations of the video backgrounds and their differing degrees of match with the advertisement video.
Example 1
To solve the above problems, according to one aspect of the present invention, as shown in FIG. 1, there is provided a video editing method, comprising:
S1, evaluating the similarity between each video frame of a target advertisement video and its adjacent video frames through the detection results of the video frames, to obtain a similarity evaluation result for the video frame and a dissimilar video frame; extracting a preset number of adjacent video frames after the dissimilar video frame as comparison video frames; and determining the division accuracy of the video frame and the dividing video frames according to the similarity data of the comparison video frames and the similarity between the dissimilar video frame and the video frame;
Specifically, the adjacent video frames of a video frame are the preset number of video frames before and after that video frame.
It should be noted that, when the similarity between the adjacent video frame and the video frame does not meet the requirement, the adjacent video frame is determined to be a dissimilar video frame.
Specifically, the method for determining the dividing accuracy of the video frame includes:
S11, taking the similarity between the video frame and the dissimilar video frame as the reference similarity, and determining the reference division accuracy of the video frame from that similarity; determining, from the similarity data of the comparison video frames and the video frame, whether any comparison video frame has a similarity greater than the reference similarity; if so, proceeding to the next step; if not, determining the video frame to be a dividing video frame and taking the reference division accuracy as its division accuracy;
S12, determining, from the similarity data of the comparison video frames and the video frame, whether any comparison video frame has a similarity that does not meet the requirement; if so, proceeding to the next step; if not, proceeding to step S14;
S13, taking the comparison video frames whose similarity does not meet the requirement as similar comparison video frames; determining the similarity evaluation quantity of the video frame from the number of similar comparison video frames, their similarities and the maximum similarity; judging whether the similarity evaluation quantity meets the requirement; if so, determining that the video frame is not a dividing video frame and determining its division accuracy from the similarity evaluation quantity;
S14, dividing the comparison video frames into difference video frames and other video frames according to their similarity data and the reference similarity; determining the difference evaluation quantity of the video frame from the number of difference video frames, the minimum similarity and the deviations of the similarities from the reference similarity; judging whether the difference evaluation quantity meets the requirement; if so, determining that the video frame is a dividing video frame and obtaining its division accuracy from the difference evaluation quantity and the reference division accuracy;
S15, obtaining the number of other video frames and their similarities with the video frame, and determining the division accuracy of the video frame in combination with the reference division accuracy, the difference evaluation quantity and the similarity evaluation quantity.
It can be appreciated that, when the division accuracy of a video frame is greater than a preset accuracy, the video frame is determined to be a dividing video frame.
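The S11–S15 decision flow for a single candidate frame can be sketched as below. The patent does not disclose concrete formulas for the similarity and difference evaluation quantities, so simple counts and averages stand in for them here, and all thresholds are hypothetical.

```python
# Simplified stand-in for the S11-S15 decision flow on one candidate
# dividing frame. Formulas and thresholds are illustrative assumptions.

def division_accuracy(ref_sim, comp_sims, sim_threshold=0.8, acc_threshold=0.5):
    """Return (is_dividing_frame, accuracy) for a candidate frame.

    ref_sim   -- similarity between the frame and its dissimilar neighbor
    comp_sims -- similarities of the comparison frames to the frame
    """
    base_acc = 1.0 - ref_sim                  # stand-in reference accuracy
    if all(s <= ref_sim for s in comp_sims):  # S11: no comparison frame
        return True, base_acc                 #      exceeds the reference
    similar = [s for s in comp_sims if s >= sim_threshold]
    if similar:                               # S13: similar comparison frames
        acc = base_acc * (1.0 - len(similar) / len(comp_sims))
        return acc > acc_threshold, acc       #      weaken the accuracy
    # S14/S15 stand-in: average residual similarity discounts the accuracy
    acc = base_acc * (1.0 - sum(comp_sims) / len(comp_sims))
    return acc > acc_threshold, acc
```

The point of the sketch is the branching structure, not the arithmetic: a frame is confirmed as a scene boundary only when the frames after the dissimilar one stay unlike it.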
In another possible embodiment, the method for determining the dividing accuracy of the video frame is as follows:
Taking the similarity between the video frame and the dissimilar video frame as the reference similarity, and determining the reference division accuracy of the video frame from that similarity; determining, from the similarity data of the comparison video frames and the video frame, the comparison video frames whose similarity does not meet the requirement, and taking them as similar comparison video frames;
determining the similarity evaluation quantity of the video frame from the number of similar comparison video frames, their similarities and the maximum similarity;
when the similarity evaluation quantity of the video frame does not meet the requirement:
determining that the video frame is not a dividing video frame, and determining its division accuracy from the similarity evaluation quantity;
when the similarity evaluation quantity of the video frame meets the requirement:
dividing the comparison video frames into difference video frames and other video frames according to their similarity data and the reference similarity; determining the difference evaluation quantity of the video frame from the number of difference video frames, the minimum similarity and the deviations of the similarities from the reference similarity; judging whether the difference evaluation quantity meets the requirement; if so, determining that the video frame is a dividing video frame, and obtaining its division accuracy from the difference evaluation quantity and the reference division accuracy;
and obtaining the number of other video frames and their similarities with the video frame, and determining the division accuracy of the video frame in combination with the reference division accuracy, the difference evaluation quantity and the similarity evaluation quantity.
In this embodiment, the division accuracy of a video frame and the dividing video frames are determined according to the similarity data of the comparison video frames and the similarity between the dissimilar video frame and the video frame. This locates the dividing video frames through similarity while also accounting for the differences in reliability caused by differences in the similarity between the video frame and the comparison video frames and in the number of comparison video frames whose similarity does not meet the requirement, thereby accurately dividing the target advertisement video into its different video scenes.
S2, dividing the target advertisement video based on the dividing video frames to obtain the number of video scenes of the target advertisement video and the durations of the different video scenes, and determining the video complexity of the target advertisement video in combination with the duration of the target advertisement video;
Further, the method for determining the video complexity of the target advertisement video in step S2 comprises:
S21, obtaining the duration of the target advertisement video, and judging whether it is smaller than a preset duration; if so, proceeding to the next step; if not, proceeding to step S23;
S22, determining whether the number of video scenes of the target advertisement video meets the requirement according to that number and the duration of the target advertisement video; if so, determining the video complexity of the target advertisement video from the number of video scenes and the duration; if not, proceeding to the next step;
S23, determining the average duration of the video scenes of the target advertisement video from the durations of its different video scenes, determining the number of video scenes whose duration exceeds a preset duration limit, and determining the processing complexity of the video scenes of the target advertisement video in combination with the number of video scenes;
S24, obtaining the duration and the number of video frames of the target advertisement video, and determining its video complexity in combination with the processing complexity of its video scenes.
It should be noted that, in step S22, determining whether the number of video scenes of the target advertisement video meets the requirement according to that number and the duration of the target advertisement video specifically comprises:
determining the maximum scene count of the target advertisement video from its duration, and determining whether the number of video scenes meets the requirement based on that maximum and the actual number of video scenes.
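The duration-derived cap in step S22 might look as follows; the one-scene-per-N-seconds rate is an assumed parameter, since the patent only states that the maximum is derived from the duration.

```python
# Sketch of the S22 check: a duration-derived cap on scene count.
# The seconds-per-scene rate is a hypothetical parameter.

def max_scene_count(duration_s, seconds_per_scene=3.0):
    """Assumed cap: at most one scene per `seconds_per_scene` seconds."""
    return int(duration_s // seconds_per_scene)

def scene_count_ok(num_scenes, duration_s):
    """Does the detected scene count stay within the duration-derived cap?"""
    return num_scenes <= max_scene_count(duration_s)
```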
In another possible embodiment, the method for determining the video complexity of the target advertisement video in step S2 is as follows:
when the duration of the target advertisement video and its number of video scenes both meet the requirements:
if it is determined, from the durations of the different video scenes, that no video scene has a duration exceeding the preset duration limit:
determining the video complexity of the target advertisement video from its number of video scenes and its duration;
if it is determined, from the durations of the different video scenes, that some video scenes have durations exceeding the preset duration limit:
taking the video scenes whose duration exceeds the preset duration limit as complex video scenes; determining the complex-video-scene compensation amount from the number of complex video scenes, their average duration and the number of complex video scenes whose duration exceeds that average; and determining the video complexity of the target advertisement video from the compensation amount, the number of video scenes and the duration of the target advertisement video;
when either the duration of the target advertisement video or its number of video scenes does not meet the requirement:
determining the average duration of the video scenes from the durations of the different video scenes, determining the number of video scenes whose duration exceeds the preset duration limit, and determining the processing complexity of the video scenes in combination with the number of video scenes; then obtaining the duration and the number of video frames of the target advertisement video, and determining its video complexity in combination with the processing complexity of its video scenes.
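A toy version of the complexity estimate above, combining scene density with a long-scene compensation term; all coefficients and the long-scene limit are illustrative assumptions, as the patent gives no formulas.

```python
# Sketch of the video-complexity estimate from step S2. The 0.1/0.05
# coefficients and the long-scene limit are assumed, not disclosed.

def video_complexity(scene_durations, duration_s, long_limit=5.0):
    """Scene density plus a compensation term for long ("complex") scenes."""
    n = len(scene_durations)
    base = n / max(duration_s, 1e-9)          # scenes per second of video
    long_scenes = [d for d in scene_durations if d > long_limit]
    if not long_scenes:
        return base
    avg_long = sum(long_scenes) / len(long_scenes)
    over_avg = sum(1 for d in long_scenes if d > avg_long)
    compensation = 0.1 * len(long_scenes) + 0.05 * over_avg
    return base + compensation
```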
S3, determining the scene complexity of the different video scenes according to the number of video frames and their similarity, and determining the number of video frames of the head clip video of the target advertisement video in combination with the video complexity;
In one possible embodiment, as shown in fig. 2, the method for determining the scene complexity in the step S3 is as follows:
S31, determining the similarity between different video frames of the video scene from the similarity of its video frames, and dividing the video frames into several video frame groups according to that similarity;
S32, obtaining the number of video frame groups of the video scene, and determining, in combination with the duration of the video scene, whether that number meets the requirement; if so, proceeding to step S34; if not, proceeding to step S33;
S33, screening the video frame groups by the number of video frames they contain to obtain effective video frame groups, and determining, from the number of effective video frame groups and the duration of the video scene, whether the number of video frame groups meets the requirement; if so, proceeding to step S34; if not, determining the scene complexity from the number of effective video frame groups and the duration of the video scene;
S34, analysing the complexity of the effective video frame groups of the video scene based on their number, the deviation between the number of video frames in each effective group and a preset number of video frames, and the number of video frames in the effective groups, and determining the scene complexity in combination with the number of video frame groups of the video scene and the duration of the video scene.
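The grouping idea in S31 and a minimal complexity score can be sketched as follows; the similarity threshold and the density-style score are assumptions, since the patent does not disclose the grouping rule or the scoring formula.

```python
# Sketch of S31-S34: group consecutive similar frames, then score the
# scene from group count and duration. Threshold and score are assumed.

def group_frames(similarities, threshold=0.8):
    """Split a scene into groups at points where the similarity between
    consecutive frames drops below the threshold; returns group sizes.

    similarities[i] is the similarity of frame i to frame i+1.
    """
    groups, size = [], 1
    for s in similarities:
        if s >= threshold:
            size += 1            # frame joins the current group
        else:
            groups.append(size)  # similarity drop starts a new group
            size = 1
    groups.append(size)
    return groups

def scene_complexity(similarities, duration_s, threshold=0.8):
    """More distinct frame groups per second of scene -> higher complexity."""
    return len(group_frames(similarities, threshold)) / max(duration_s, 1e-9)
```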
In this embodiment, the number of video frames of the head clip video of the target advertisement video is determined by integrating the scene complexity and the video complexity. This accounts both for the similarity and duration differences among the video frames of different video scenes and for the complexity differences caused by the number and duration of the video scenes of the target advertisement video, laying a foundation for dynamically and accurately evaluating the number of video frames of the head clip video, ensuring editing efficiency, and ensuring the reliability of the editing result.
S4, determining the matching degree of different video scenes based on the theme and the background dubbing of the target advertisement video, taking the number of video frames as a constraint condition, and generating the head clip video by combining the scene complexity, the matching degree and the recognition result of the background dubbing of the different video scenes.
It should be further noted that the method for determining the matching degree of a video scene in step S4 includes:
Extracting keywords from the theme of the target advertisement video, and expanding them according to the keywords and a preset keyword library to obtain matching keywords;
Extracting the background text of the different video scenes based on their background dubbing, and determining the categories of matched keywords and the number of matches per category according to the matching data between the background texts and the matching keywords;
And determining weight values for the different categories of matching keywords based on those categories and the theme of the target advertisement video, and determining the matching degree of the video scene in combination with the matched keyword categories and the number of matches per category.
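A minimal sketch of this keyword-expansion and weighted-matching scheme follows. The keyword library, the weights (`core_weight`, `expanded_weight`), and the scoring rule are hypothetical placeholders, not the patent's actual data:

```python
# Hypothetical sketch of the matching-degree computation.
# The library content and the weights are illustrative assumptions.

KEYWORD_LIBRARY = {            # assumed preset expansion library
    "sneaker": ["shoe", "running", "trainer"],
    "sale": ["discount", "offer"],
}

def expand_keywords(theme_keywords):
    """Expand each theme keyword with its library entries."""
    return {kw: [kw] + KEYWORD_LIBRARY.get(kw, []) for kw in theme_keywords}

def matching_degree(theme_keywords, scene_text,
                    core_weight=2.0, expanded_weight=1.0):
    """Weighted count of core vs. expanded keyword hits in the dubbing text."""
    words = scene_text.lower().split()
    score = 0.0
    for kw, variants in expand_keywords(theme_keywords).items():
        for v in variants:
            hits = words.count(v)
            # Core theme keywords weigh more than library expansions.
            score += hits * (core_weight if v == kw else expanded_weight)
    return score
```

A scene whose dubbing repeats the core theme keywords therefore scores higher than one that only touches expanded, peripheral terms.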
Further, the generating of the head clip video by combining the scene complexity, the matching degree and the background dubbing recognition results of different video scenes specifically includes:
Determining video frame clipping priority values for the different scenes according to their scene complexity and matching degree; determining the number of video frames to clip from each scene according to these priority values and the number of video frames of the head clip video; determining the clipped video frames of each scene according to the recognition results of its background dubbing; and generating the head clip video from the clipped video frames of the different scenes.
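The allocation step can be sketched as below. The priority formula (scene complexity × matching degree) and the largest-remainder rounding are assumptions made for illustration; the patent only states that the two factors are combined into a priority value:

```python
# Sketch of allocating the head clip's frame budget across scenes.
# The priority formula and rounding scheme are illustrative assumptions.

def allocate_clip_frames(scenes, total_frames):
    """scenes: list of (scene_id, complexity, matching_degree) tuples.
    Returns a dict scene_id -> number of frames to clip from that scene."""
    priorities = {sid: c * m for sid, c, m in scenes}
    total_priority = sum(priorities.values())
    # Proportional share, floored; leftovers go to the largest remainders.
    raw = {sid: total_frames * p / total_priority for sid, p in priorities.items()}
    alloc = {sid: int(v) for sid, v in raw.items()}
    leftover = total_frames - sum(alloc.values())
    for sid in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftover]:
        alloc[sid] += 1
    return alloc
```

With a 10-frame budget and scenes of priority 2 : 1 : 2, the allocation comes out 4 : 2 : 4, so higher-priority scenes contribute more frames to the head clip.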
In this embodiment, the head clip video is generated by combining the scene complexity, the matching degree, and the background dubbing recognition results of the different video scenes. This considers both the differences in the number of available video frames caused by differing scene complexity and the degree to which each scene matches the theme of the target advertisement video, thereby enabling accurate production of the head clip video and differentiated editing of the different video scenes.
Example 2
As shown in fig. 3, the present invention provides a computer system comprising: a communicatively coupled memory and processor, and a computer program stored on the memory and capable of running on the processor, characterized by: the processor, when running the computer program, performs a video editing method as described above.
Example 3
In another aspect, the present invention provides a computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a video editing method as described above.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, and non-volatile computer storage medium embodiments are described relatively briefly, since they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of one or more embodiments of the present description, is intended to be included within the scope of the claims of the present description.
Claims (8)
1. A video editing method, comprising:
The method comprises the steps of carrying out similarity evaluation between a video frame and its adjacent video frames through the detection result of the video frames of a target advertisement video to obtain a similarity evaluation result of the video frame and a dissimilar video frame, extracting a preset number of adjacent video frames following the dissimilar video frame as comparison video frames, and determining the division accuracy of the video frame and the divided video frames according to the similarity data of the comparison video frames and the similarity between the dissimilar video frame and the video frame;
Dividing the target advertisement video based on the divided video frames to obtain the number of video scenes of the target advertisement video and the duration of different video scenes, and determining the video complexity of the target advertisement video by combining the duration of the target advertisement video;
Determining scene complexity of different video scenes according to the number of video frames of the video scenes and the similarity of the video frames of the video scenes, and determining the number of video frames of the head clip video of the target advertisement video by combining the video complexity;
Determining the matching degree of different video scenes based on the theme and background dubbing of the target advertisement video, taking the number of video frames as a constraint condition, and generating a head clip video by combining the scene complexity, the matching degree and the recognition result of the background dubbing of the different video scenes;
The method for determining the video complexity of the target advertisement video comprises the following steps:
S21, acquiring the duration of the target advertisement video, judging whether the duration is smaller than a preset duration, if so, entering the next step, and if not, entering step S23;
S22, determining whether the number of video scenes of the target advertisement video meets the requirement according to the number of video scenes and the duration of the target advertisement video; if so, determining the video complexity of the target advertisement video from the number of video scenes and the duration of the target advertisement video, and if not, entering the next step;
S23, determining the average duration of the video scenes of the target advertisement video according to the durations of its different video scenes, determining the number of video scenes whose duration exceeds a preset duration threshold, and determining the processing complexity of the video scenes of the target advertisement video in combination with the number of video scenes;
S24, acquiring the duration and the number of video frames of the target advertisement video, and determining the video complexity of the target advertisement video in combination with the processing complexity of its video scenes;
The method for determining the scene complexity comprises the following steps:
S31, determining the similarity between different video frames of the video scene according to the similarity of the video frames of the video scene, and dividing the video frames into a plurality of video frame groups according to the similarity between the different video frames;
S32, acquiring the number of video frame groups of the video scene, determining whether the number of the video frame groups of the video scene meets the requirement or not by combining the duration of the video scene, if so, entering step S34, and if not, entering step S33;
S33, screening the video frame groups according to the number of video frames of the video scene to obtain effective video frame groups, and determining whether the number of video frame groups of the video scene meets the requirement according to the number of effective video frame groups and the duration of the video scene; if so, entering step S34, and if not, determining the scene complexity according to the number of effective video frame groups and the duration of the video scene;
S34, evaluating the complexity of the effective video frame groups of the video scene based on the number of effective video frame groups, the deviation between the number of video frames in each effective group and a preset number of video frames, and the number of video frames in the effective groups, and determining the scene complexity in combination with the number of video frame groups and the duration of the video scene;
The method for determining the matching degree of the video scene comprises the following steps:
Extracting keywords from the theme of the target advertisement video, and expanding them according to the keywords and a preset keyword library to obtain matching keywords;
Extracting the background text of the different video scenes based on their background dubbing, and determining the categories of matched keywords and the number of matches per category according to the matching data between the background texts and the matching keywords;
Determining weight values for the different categories of matching keywords based on those categories and the theme of the target advertisement video, and determining the matching degree of the video scene in combination with the matched keyword categories and the number of matches per category;
Generating a head clip video by combining the scene complexity, the matching degree and the background dubbing recognition results of different video scenes, wherein the method specifically comprises the following steps:
Determining video frame clipping priority values for the different scenes according to their scene complexity and matching degree; determining the number of video frames to clip from each scene according to these priority values and the number of video frames of the head clip video; determining the clipped video frames of each scene according to the recognition results of its background dubbing; and generating the head clip video from the clipped video frames of the different scenes.
2. The video editing method of claim 1, wherein adjacent video frames of the video frame are determined based on a preset number of video frames before and after the video frame.
3. The video editing method of claim 1, wherein when the similarity between an adjacent video frame and the video frame does not meet the requirement, the adjacent video frame is determined to be a dissimilar video frame.
4. The video editing method of claim 1, wherein when the division accuracy of the video frame is greater than a preset accuracy, determining the video frame as a divided video frame.
5. The video editing method as claimed in claim 1, wherein the method for determining the division accuracy of the video frames is:
Taking the similarity between the video frame and the dissimilar video frame as a reference similarity, and determining the reference division accuracy of the video frame from this similarity; determining, according to the similarity data between the comparison video frames and the video frame, the comparison video frames whose similarity does not meet the requirement, and taking these as similar comparison video frames;
determining the similarity evaluation amount of the video frame according to the number of similar comparison video frames, the sum of their similarities, and the maximum similarity value;
when the similarity evaluation amount of the video frame does not meet the requirement:
determining that the video frame is not a divided video frame, and determining the division accuracy of the video frame from its similarity evaluation amount;
when the similarity evaluation amount of the video frame meets the requirement:
dividing the comparison video frames into difference video frames and other video frames according to their similarity data and the reference similarity; determining the difference evaluation amount of the video frame from the number of difference video frames, the minimum similarity value, and the deviation between the similarities and the reference similarity; judging whether the difference evaluation amount meets the requirement; if so, determining that the video frame is a divided video frame, and obtaining its division accuracy from the difference evaluation amount and the reference division accuracy;
and acquiring the number of other video frames and their similarities to the video frame, and determining the division accuracy of the video frame in combination with the reference division accuracy, the difference evaluation amount, and the similarity evaluation amount.
6. The video editing method according to claim 1, wherein determining whether the number of video scenes of the target advertisement video meets a requirement according to the number of video scenes of the target advertisement video and a duration of the target advertisement video, specifically comprises:
And determining the maximum number of scenes of the target advertisement video according to its duration, and determining whether the number of video scenes of the target advertisement video meets the requirement based on this maximum number and the number of video scenes of the target advertisement video.
7. A computer system, comprising: a communicatively coupled memory and processor, and a computer program stored on the memory and capable of running on the processor, characterized by: the processor, when running the computer program, performs a video editing method as claimed in any of claims 1-6.
8. A computer storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform a video editing method as claimed in any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311386913.7A CN117459665B (en) | 2023-10-25 | 2023-10-25 | Video editing method, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117459665A (en) | 2024-01-26
CN117459665B (en) | 2024-05-07
Family
ID=89592233
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311386913.7A Active CN117459665B (en) | 2023-10-25 | 2023-10-25 | Video editing method, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117459665B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117648387B (en) * | 2024-01-29 | 2024-05-07 | 杭银消费金融股份有限公司 | Construction method of logic data section based on data entity |
CN117651159B (en) * | 2024-01-29 | 2024-04-23 | 杭州锐颖科技有限公司 | Automatic editing and pushing method and system for motion real-time video |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010041744A1 (en) * | 2008-10-09 | 2010-04-15 | 国立大学法人 北海道大学 | Moving picture browsing system, and moving picture browsing program |
CN106686404A (en) * | 2016-12-16 | 2017-05-17 | 中兴通讯股份有限公司 | Video analysis platform, matching method, accurate advertisement delivery method and system |
CN111327945A (en) * | 2018-12-14 | 2020-06-23 | 北京沃东天骏信息技术有限公司 | Method and apparatus for segmenting video |
CN112153462A (en) * | 2019-06-26 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Video processing method, device, terminal and storage medium |
CN114286174A (en) * | 2021-12-16 | 2022-04-05 | 天翼爱音乐文化科技有限公司 | Video editing method, system, device and medium based on target matching |
CN116095251A (en) * | 2022-12-23 | 2023-05-09 | 深圳市闪剪智能科技有限公司 | Method, device, equipment and storage medium for generating advertisement film head |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8699806B2 (en) * | 2006-04-12 | 2014-04-15 | Google Inc. | Method and apparatus for automatically summarizing video |
US8311344B2 (en) * | 2008-02-15 | 2012-11-13 | Digitalsmiths, Inc. | Systems and methods for semantically classifying shots in video |
US11023737B2 (en) * | 2014-06-11 | 2021-06-01 | Arris Enterprises Llc | Detection of demarcating segments in video |
CN114550070A (en) * | 2022-03-08 | 2022-05-27 | 腾讯科技(深圳)有限公司 | Video clip identification method, device, equipment and storage medium |
- 2023-10-25: Application CN202311386913.7A filed; granted as CN117459665B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117459665B (en) | Video editing method, system and storage medium | |
CN108632640B (en) | Method, system, computer readable medium and electronic device for determining insertion area metadata of new video | |
CN110234037B (en) | Video clip generation method and device, computer equipment and readable medium | |
CN112511854B (en) | Live video highlight generation method, device, medium and equipment | |
CN111460219B (en) | Video processing method and device and short video platform | |
US7555149B2 (en) | Method and system for segmenting videos using face detection | |
US20060245724A1 (en) | Apparatus and method of detecting advertisement from moving-picture and computer-readable recording medium storing computer program to perform the method | |
CN113613065B (en) | Video editing method and device, electronic equipment and storage medium | |
US7676821B2 (en) | Method and related system for detecting advertising sections of video signal by integrating results based on different detecting rules | |
US11621792B2 (en) | Real-time automated classification system | |
US20080052612A1 (en) | System for creating summary clip and method of creating summary clip using the same | |
US11342003B1 (en) | Segmenting and classifying video content using sounds | |
JP2009544985A (en) | Computer implemented video segmentation method | |
Bost et al. | Remembering winter was coming: Character-oriented video summaries of TV series | |
CN109167934B (en) | Video processing method and device and computer readable storage medium | |
US11120839B1 (en) | Segmenting and classifying video content using conversation | |
US7995901B2 (en) | Facilitating video clip identification from a video sequence | |
JP5257356B2 (en) | Content division position determination device, content viewing control device, and program | |
CN114339451A (en) | Video editing method and device, computing equipment and storage medium | |
Liu et al. | Computational approaches to temporal sampling of video sequences | |
CN114845149A (en) | Editing method of video clip, video recommendation method, device, equipment and medium | |
CN108566567B (en) | Movie editing method and device | |
CN113012723B (en) | Multimedia file playing method and device and electronic equipment | |
Fernández Chappotin | Design of a player-plugin for metadata visualization and intelligent navigation | |
CN117221646A (en) | News stripping method, system, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||