CN114268831A - Video editing method and device - Google Patents

Video editing method and device

Info

Publication number
CN114268831A
CN114268831A CN202111437367A
Authority
CN
China
Prior art keywords
video
decibel
clipping
sound wave
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111437367.6A
Other languages
Chinese (zh)
Inventor
曹莎 (Cao Sha)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bright Jupiter Private Ltd
Original Assignee
Lemei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lemei Technology Co ltd filed Critical Lemei Technology Co ltd
Priority to CN202111437367.6A
Publication of CN114268831A
Legal status: Pending

Abstract

The embodiment of the invention provides a video editing method and a video editing device, relating to the technical field of video processing. The method comprises the following steps: extracting sound wave information of sound in the video; obtaining a clipping decibel threshold value determined according to the sound wave information; determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information; and clipping the video segment to be clipped from the video. By applying the video clipping scheme provided by the embodiment of the invention, the efficiency of video clipping can be improved.

Description

Video editing method and device
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video editing method and apparatus.
Background
Nowadays, video recording equipment is continuously updated, and a user can record a video by using the video recording equipment, clip the recorded video, and share the clipped video to other users.
Generally, a user repeatedly watches a video, subjectively determines a video segment to be clipped in the video, and manually clips the video segment to be clipped from the video, which results in low efficiency of video clipping.
Disclosure of Invention
Embodiments of the present invention provide a video clipping method and apparatus to improve the efficiency of video clipping. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a video clipping method, where the method includes:
extracting sound wave information of sound in the video;
obtaining a clipping decibel threshold value determined according to the sound wave information;
determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
and clipping the video segments to be clipped from the video.
In one embodiment of the present invention, the clipping decibel threshold includes: an upper decibel threshold and/or a lower decibel threshold;
determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information, wherein the determining comprises the following steps:
under the condition that the clipping decibel threshold value comprises the decibel upper limit threshold value, determining an upper limit sound wave band of which the decibel value is greater than the decibel upper limit threshold value in the sound wave information, and determining a video segment corresponding to the upper limit sound wave band in the video as a video segment to be clipped;
and under the condition that the clipping decibel threshold value comprises the decibel lower limit threshold value, determining a lower limit sound wave band of which the decibel value is smaller than the decibel lower limit threshold value in the sound wave information, and determining a video segment to be clipped according to a video segment corresponding to the lower limit sound wave band in the video.
In an embodiment of the present invention, the determining a video segment to be clipped according to a video segment in the video corresponding to the lower limit sound wave segment includes:
obtaining the duration of an alternative video clip in the video, wherein the alternative video clip is: video clips in the video corresponding to the lower sound-limiting bands;
and determining the video segments with the duration being greater than the gap duration in the alternative video segments as the video segments to be clipped.
In one embodiment of the present invention, the gap duration is a duration determined according to the durations of the alternative video segments.
In an embodiment of the present invention, the obtaining a clipping decibel threshold determined according to the sound wave information includes:
displaying the sound wave form of the sound in the video to a user according to the sound wave information;
and obtaining a clipping decibel threshold value set by the user according to the sound wave shape.
In a second aspect, an embodiment of the present invention further provides a video editing apparatus, where the apparatus includes:
the information extraction module is used for extracting sound wave information of sound in the video;
the threshold value obtaining module is used for obtaining a clipping decibel threshold value determined according to the sound wave information;
the segment determining module is used for determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
and the video clipping module is used for clipping the video segments to be clipped from the video.
In one embodiment of the present invention, the clipping decibel threshold includes: an upper decibel threshold and/or a lower decibel threshold;
the segment determination module includes:
the first segment determining submodule is used for determining an upper limit sound wave band of which the decibel value is greater than the decibel upper limit threshold in the sound wave information under the condition that the clipping decibel threshold comprises the decibel upper limit threshold, and determining a video segment corresponding to the upper limit sound wave band in the video as a video segment to be clipped;
and the second segment determining submodule is used for determining a lower sound limiting band of which the decibel value is smaller than the decibel lower limit threshold in the sound wave information under the condition that the clipping decibel threshold comprises the decibel lower limit threshold, and determining a video segment to be clipped according to a video segment corresponding to the lower sound limiting band in the video.
In an embodiment of the present invention, the second segment determining submodule is specifically configured to:
under the condition that the clipping decibel threshold value comprises the decibel lower limit threshold value, determining a lower sound limiting band of which the decibel value is smaller than the decibel lower limit threshold value in the sound wave information, obtaining the duration of an alternative video segment in the video, and determining a video segment of which the duration is larger than the gap duration in the alternative video segment as a video segment to be clipped, wherein the alternative video segment is as follows: video segments of the video corresponding to each of the lower sound-limiting bands.
In one embodiment of the present invention, the gap duration is a duration determined according to the durations of the alternative video segments.
In an embodiment of the present invention, the threshold obtaining module is specifically configured to:
displaying the sound wave form of the sound in the video to a user according to the sound wave information;
and obtaining a clipping decibel threshold value set by the user according to the sound wave shape.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the video clip method of the first aspect when executing the program stored in the memory.
In a fourth aspect, the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the video clipping method steps described in the first aspect.
The embodiment of the invention has the following beneficial effects:
therefore, when the scheme provided by the embodiment of the invention is applied to video clipping, the sound wave information of sound in a video is extracted, the clipping decibel threshold value determined according to the sound wave information is also obtained, and the video segment to be clipped in the video is determined according to the clipping decibel threshold value and the sound wave information on the basis, so that a user does not need to watch the video repeatedly and subjectively determine the video segment to be clipped, and the video clipping efficiency can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from these drawings by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating a first video editing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second video editing method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a first acoustic waveform provided by an embodiment of the present invention;
FIG. 3b is a schematic diagram of a second acoustic waveform provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a first video editing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a second video editing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a first video clipping method according to an embodiment of the present invention, where the method includes the following steps S101 to S104.
Step S101: and extracting sound wave information of sound in the video.
The video may be, for example, a Vlog (video log), a documentary, or a recorded live-streaming video.
The sound wave information may include a decibel value of sound in the video, and may also include information such as a frequency and a tone of sound in the video.
Specifically, the sound in the video may be measured with an instrument such as a decibel meter, and the measurement result of the instrument is taken as the sound wave information of the sound in the video; alternatively, the loudness, frequency, timbre, and other audio information of the sound in the video may be measured with an application program for analyzing audio, and the measurement result of the application program is taken as the sound wave information of the sound in the video.
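The extraction step can be illustrated with a minimal sketch. The sketch below assumes the audio track has already been exported from the video as a mono 16-bit PCM WAV file (for example with an external tool); the function name, the window length, and the use of windowed RMS values as the decibel measurements are illustrative assumptions rather than part of the embodiment.

```python
# Minimal sketch (not the embodiment itself): derive one decibel value per
# analysis window from a mono 16-bit PCM WAV file exported from the video.
import wave
import numpy as np

def extract_decibel_track(wav_path, window_seconds=0.1):
    """Return (times, decibels): one RMS-based decibel value per window."""
    with wave.open(wav_path, "rb") as wf:
        sample_rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float64)
    window = max(1, int(sample_rate * window_seconds))
    times, decibels = [], []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-9          # avoid log(0) on pure silence
        # decibels relative to 16-bit full scale, shifted so typical values are positive
        decibels.append(20.0 * np.log10(rms / 32768.0) + 96.0)
        times.append(start / sample_rate)
    return np.array(times), np.array(decibels)
```

The (times, decibels) pair produced here stands in for the sound wave information used in the later steps.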
Step S102: and obtaining a clipping decibel threshold determined according to the sound wave information.
The clipping decibel threshold value can be a decibel upper limit threshold value, and the sound with the decibel value larger than the decibel upper limit threshold value can be considered as noise; the clipping decibel threshold may also be a decibel lower threshold, and sounds with decibel values less than the decibel lower threshold may be considered silent.
For example, the clipping decibel upper limit threshold may be 90 decibels, and sounds with decibel values greater than 90 decibels may be considered noise.
For another example, the decibel lower limit threshold may be 40 decibels, and a sound with a decibel value less than 40 decibels may be considered silence.
Specifically, the clipping decibel threshold may be input by a user using an external input device after viewing the sound wave information, or may be determined according to a maximum decibel value and a minimum decibel value recorded in the sound wave information.
In one implementation, the clipping decibel threshold can be obtained through the following steps S102A-S102B in the embodiment shown in FIG. 2.
In another implementation, a decibel difference or a decibel ratio may be preset. The decibel difference is added to the minimum decibel value, or the minimum decibel value is divided by the decibel ratio, and the resulting decibel value is used as the decibel lower limit threshold; the decibel difference is subtracted from the maximum decibel value, or the maximum decibel value is multiplied by the decibel ratio, and the resulting decibel value is used as the decibel upper limit threshold.
The decibel ratio characterizes how far the clipping decibel threshold lies from the maximum or minimum decibel value: the smaller the decibel ratio, the larger the gap between the maximum or minimum decibel value and the corresponding clipping decibel threshold.
For example, if the minimum decibel value is 36 decibels, the decibel difference is 10 decibels, and the decibel ratio is 0.9, the minimum decibel value and the decibel difference may be added to obtain a decibel lower limit threshold of 36 + 10 = 46 decibels, or the minimum decibel value may be divided by the decibel ratio to obtain a decibel lower limit threshold of 36 / 0.9 = 40 decibels.
In addition, the decibel value range between the maximum decibel value and the minimum decibel value can also be taken into account, and the decibel upper limit threshold and/or the decibel lower limit threshold can be calculated from the maximum decibel value, the minimum decibel value, and the decibel value range. This guarantees that the calculated decibel upper limit threshold and decibel lower limit threshold always lie within the decibel value range, so that the parts of the sound wave information whose decibel values are larger than the decibel upper limit threshold, or smaller than the decibel lower limit threshold, can be effectively identified.
Specifically, a range ratio may be preset, the decibel value range is multiplied by the range ratio to obtain a range difference, the range difference is subtracted from the maximum decibel value to obtain the decibel upper limit threshold, and the range difference is added to the minimum decibel value to obtain the decibel lower limit threshold.
The range ratio is similar to the decibel ratio, and can also be used to represent the difference between the maximum decibel value and the minimum decibel value and the clipping decibel threshold.
For example, if the maximum decibel value is 90 decibels and the minimum decibel value is 30 decibels, the decibel value range is 60 decibels. With the range ratio set to 0.1, the decibel value range is multiplied by the range ratio to obtain a range difference of 60 × 0.1 = 6 decibels, the range difference is subtracted from the maximum decibel value to obtain a decibel upper limit threshold of 90 - 6 = 84 decibels, and the range difference is added to the minimum decibel value to obtain a decibel lower limit threshold of 30 + 6 = 36 decibels.
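The three ways of deriving the clipping decibel threshold described above (decibel difference, decibel ratio, and range ratio) can be summarized in a short sketch; the default parameter values are simply the illustrative ones used in the examples, not prescribed values.

```python
# Sketch of the threshold rules described above; parameter defaults mirror the worked examples.
def thresholds_from_difference(min_db, max_db, db_difference=10.0):
    # lower = minimum + difference, upper = maximum - difference
    return min_db + db_difference, max_db - db_difference

def thresholds_from_ratio(min_db, max_db, db_ratio=0.9):
    # lower = minimum / ratio, upper = maximum * ratio
    return min_db / db_ratio, max_db * db_ratio

def thresholds_from_range_ratio(min_db, max_db, range_ratio=0.1):
    # range difference = (maximum - minimum) * range ratio; both thresholds stay in the range
    range_diff = (max_db - min_db) * range_ratio
    return min_db + range_diff, max_db - range_diff

# Reproducing the worked examples:
#   thresholds_from_difference(36, 90)[0]  -> 46  (lower limit from the decibel difference)
#   thresholds_from_ratio(36, 90)[0]       -> 40  (lower limit from the decibel ratio)
#   thresholds_from_range_ratio(30, 90)    -> (36, 84)
```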
Step S103: and determining the video segments to be clipped in the video according to the clipping decibel threshold and the sound wave information.
As can be seen from step S102, the clipping db threshold may include at least one of an upper db threshold and a lower db threshold.
In an embodiment of the present invention, when the clipping decibel threshold includes a decibel upper limit threshold, an upper limit sonic segment whose decibel value is greater than the decibel upper limit threshold in the sonic information may be determined, and a video segment corresponding to the upper limit sonic segment in the video is determined as a video segment to be clipped.
The decibel value of the upper limit acoustic wave band is greater than the decibel upper limit threshold, and the sound corresponding to the upper limit acoustic wave band can be considered to belong to noise.
When playing a video segment in a video, it is usually necessary to play the sound corresponding to the video segment in this time period, so that the sound in the video usually corresponds to the video playing time, and the sound wave information of the sound in the video also corresponds to the video playing time.
After the upper limit sound wave segment with the decibel value larger than the decibel upper limit threshold value in the sound wave information is determined, the starting time and the ending time corresponding to the upper limit sound wave segment can be determined according to the corresponding relation, the video segment of the time period from the starting time to the ending time in the video is searched, and the video segment is determined to be the video segment to be edited.
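A compact sketch of this search is given below: it scans the per-window decibel values produced earlier, groups consecutive windows whose decibel value exceeds the threshold into runs, and reports each run as a (start time, end time) pair; with the comparison reversed it finds the lower limit sound wave bands of the next embodiment in the same way. The helper name and the window-based representation are assumptions of the sketch.

```python
# Sketch: group consecutive analysis windows that violate a threshold into
# (start_time, end_time) segments; above=True finds upper limit sound wave bands,
# above=False finds lower limit sound wave bands.
def find_threshold_segments(times, decibels, threshold, above=True, window_seconds=0.1):
    segments, run_start = [], None
    for t, db in zip(times, decibels):
        hit = db > threshold if above else db < threshold
        if hit and run_start is None:
            run_start = t                          # a new run begins at this window
        elif not hit and run_start is not None:
            segments.append((run_start, t))        # the run ends before this window
            run_start = None
    if run_start is not None:                      # run continues to the end of the audio
        segments.append((run_start, times[-1] + window_seconds))
    return segments
```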
In another embodiment of the present invention, when the clipping decibel threshold includes a decibel lower limit threshold, a lower sound limiting band in which a decibel value in the sound wave information is smaller than the decibel lower limit threshold may be determined, and a video segment to be clipped may be determined according to a video segment corresponding to the lower sound limiting band in the video.
The decibel value of the lower sound-limiting band is smaller than the decibel lower limit threshold, and the sound corresponding to the lower sound-limiting band can be considered to be silent.
Similar to the above embodiment in which the clipping decibel threshold includes the decibel upper threshold, in this embodiment, the start time and the end time corresponding to the lower-limit sonic segment may be determined according to the correspondence between sonic information of sound in the video and video playing time, and a video segment in the video from the start time to the end time is determined as a video segment to be clipped.
For example, in a Vlog recorded by the user, the silent video segments in the Vlog can be determined as video segments to be clipped.
In addition, all of the video segments in the video corresponding to the lower limit sound wave bands may be determined as video segments to be clipped, or only part of those video segments may be determined as video segments to be clipped.
The manner of determining only part of the corresponding video segments as video segments to be clipped is described in the following embodiments and is not detailed here.
In another embodiment of the present invention, the clipping decibel threshold may include both the decibel upper limit threshold and the decibel lower limit threshold; in this case, the manner of determining the video segments to be clipped in the video is similar to the above embodiments and is not described again here.
Because the noisy or silent video segments in a video are usually the segments to be clipped, and noise usually has a large decibel value while silence usually has a small one, comparing the decibel values contained in the sound wave information with the clipping decibel threshold makes it possible to accurately determine the upper limit sound wave bands and the lower limit sound wave bands in the sound wave information, and thus to accurately determine the noisy video segments to be clipped from the upper limit sound wave bands and the silent video segments to be clipped from the lower limit sound wave bands.
Step S104: and cutting off the video segment to be cut off from the video.
After determining the video segments to be clipped in the video, the video can be clipped by using a video clipping mode in the prior art to obtain the clipped video.
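As one hedged illustration of this cutting step, the sketch below keeps everything outside the determined (start, end) ranges and concatenates the remainder. It assumes the moviepy 1.x API (VideoFileClip.subclip and concatenate_videoclips) and illustrative file names; any other editing tool capable of cutting and joining segments would serve equally well.

```python
# Sketch of the cutting step, assuming the moviepy 1.x API; file names are illustrative.
from moviepy.editor import VideoFileClip, concatenate_videoclips

def cut_out_segments(video_path, segments_to_clip, output_path):
    """Remove the (start, end) ranges in segments_to_clip and keep the rest."""
    clip = VideoFileClip(video_path)
    keep, cursor = [], 0.0
    for start, end in sorted(segments_to_clip):
        if start > cursor:
            keep.append(clip.subclip(cursor, start))   # material before this cut
        cursor = max(cursor, end)
    if cursor < clip.duration:
        keep.append(clip.subclip(cursor, clip.duration))
    concatenate_videoclips(keep).write_videofile(output_path)
```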
Therefore, when the scheme provided by the embodiment of the invention is applied to video clipping, the sound wave information of sound in a video is extracted, the clipping decibel threshold value determined according to the sound wave information is also obtained, and the video segment to be clipped in the video is determined according to the clipping decibel threshold value and the sound wave information on the basis, so that a user does not need to watch the video repeatedly and subjectively determine the video segment to be clipped, and the video clipping efficiency can be improved.
In addition, before the video is clipped, the user can already take advantage, while shooting the video, of the fact that the video segments to be clipped are determined according to the clipping decibel threshold and the sound wave information: by keeping quiet, speaking loudly, or the like during a time period that should later be removed, the user makes the segment recorded in that period satisfy the condition for clipping according to the sound wave information, so that the segment can be rapidly clipped from the video by directly applying the scheme provided by the embodiment of the invention during later editing.
This is illustrated below by a specific example.
For example, the video may be a video for displaying an item, and it may include a segment of picking the item up and a segment of placing the item on a desktop for display; when such a video is edited, the segment of picking the item up is usually clipped out and the segment of displaying the item on the desktop is kept. Based on this, the above video clipping scheme can in turn guide the user when shooting such a video: the user can slow down the action of picking the item up, making as little sound as possible, or make a louder sound when placing the item, so that the decibel value of the sound in the pick-up segment is smaller than the decibel lower limit threshold while the decibel value of the sound in the display segment is larger than it. The shot video can then be clipped quickly.
In general, when a video is played, the audio content corresponds to the video subtitles. When two adjacent subtitles are played, the voice of the next subtitle is not played immediately after the voice of the previous subtitle finishes; instead, it is played only after a short pause, so that the decibel value of the sound during the pause period may be smaller than the decibel lower limit threshold. In view of this, there may be many lower limit sound wave bands in the sound wave information, and if the video segments corresponding to all of them were determined as video segments to be clipped, the clipped video would be discontinuous and unnatural, reducing its viewing experience.
In view of the above situation, in an embodiment of the present invention, the durations of the alternative video segments in the video may be obtained, and the alternative video segments whose duration is greater than the gap duration may be determined as the video segments to be clipped.
Wherein, the alternative video clips are: video segments in the video corresponding to the lower limit sound wave segments.
The above-mentioned gap duration may be any duration preset. For example, the gap duration may be 3 seconds, 10 seconds, or other duration.
The gap duration may also be a duration determined according to a duration of the alternative video segment.
The gap duration is used for determining the video segments to be clipped. Determining the gap duration according to the durations of the alternative video segments makes it possible to control how many alternative video segments are determined as video segments to be clipped, so that the continuity of the clipped video can be ensured while the video is rapidly clipped, improving the viewing experience of the clipped video.
The duration of the alternative video segment in the video can be obtained through the following three ways:
in a first implementation manner, the starting playing time and the ending playing time of the alternative video segment may be obtained, and the starting playing time is subtracted from the ending playing time to obtain a subtracted duration, which is the duration of the alternative video segment.
In a second implementation manner, the alternative video segment may be played, and the playing duration of the alternative video segment may be synchronously recorded.
In a third implementation manner, the frame number of the video frame in the candidate video segment and the frame rate of the video may also be obtained, and the obtained frame number and the obtained frame rate are divided, so that the divided duration is the duration of the candidate video segment.
After the duration of the alternative video segments is obtained, the video segments of which the duration is greater than the gap duration of the alternative video segments can be determined as the video segments to be clipped, while the video segments of which the duration is less than or equal to the gap duration are not determined as the video segments to be clipped, in other words, the alternative video segments of which the duration is greater than the gap duration can be clipped, and the alternative video segments of which the duration is less than or equal to the gap duration can be retained.
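The duration-based selection above can be sketched as follows. Deriving the gap duration as the average of the candidate durations is only one possible choice (any preset value, such as 3 seconds, works the same way), and the durations may equally be obtained from start and end playing times or from frame count divided by frame rate; all names in the sketch are illustrative.

```python
# Sketch: keep short silent gaps, mark only the long alternative segments for clipping.
def select_segments_to_clip(candidate_segments, gap_seconds=None):
    """candidate_segments: list of (start_time, end_time) pairs for the lower limit bands."""
    if not candidate_segments:
        return []
    durations = [end - start for start, end in candidate_segments]
    if gap_seconds is None:
        gap_seconds = sum(durations) / len(durations)   # one way to derive the gap duration
    return [seg for seg, d in zip(candidate_segments, durations) if d > gap_seconds]
```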
As can be seen from the above, when the scheme provided by the embodiment of the present invention is applied to video clipping, the video to be clipped in the alternative video segment is determined according to the size relationship between the duration of the alternative video segment and the duration of the gap, so that the alternative video segment with the duration less than or equal to the duration of the gap is retained, and the alternative video segment with the duration greater than the duration of the gap is clipped, thereby achieving fast clipping of the video, ensuring the continuity of the video, and improving the ornamental performance of the video.
In obtaining the clipping decibel threshold, in addition to the implementation listed at step S102 in the embodiment shown in fig. 1, the implementation given in the embodiment shown in fig. 2 below may be adopted.
In one embodiment of the present invention, referring to fig. 2, a flowchart of a second video clipping method is provided, and compared with the foregoing embodiment shown in fig. 1, in this embodiment, the foregoing step S102 can be implemented by the following steps S102A-S102B.
Step S102A: and displaying the sound wave shape of the sound in the video to the user according to the sound wave information.
The sound wave information may include a decibel value of sound in the video, and since the sound in the video changes with the video playing time, the decibel value of sound included in the sound wave information also changes with the video playing time. Based on the above, a graph of the variation relationship between the video playing time and the decibel value of the sound in the video can be generated as the sound wave waveform diagram of the sound in the video.
As shown in fig. 3a, fig. 3a is a schematic diagram of a first sound wave form, where the horizontal direction of fig. 3a can be regarded as a time axis of video playing, and the vertical direction of fig. 3a can be regarded as a decibel value of sound in the video. In fig. 3a, the higher the white area corresponding to a video playing time, the higher the decibel value of the sound in the video corresponding to the video playing time.
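A minimal sketch of such a display, assuming matplotlib is available, plots the decibel values against playback time and overlays the thresholds as horizontal lines, in the spirit of Fig. 3a and Fig. 3b; the labels, colors, and figure size are illustrative.

```python
# Sketch of the waveform display (cf. Fig. 3a / Fig. 3b), assuming matplotlib is available.
import matplotlib.pyplot as plt

def show_waveform(times, decibels, upper_db=None, lower_db=None):
    plt.figure(figsize=(10, 3))
    plt.plot(times, decibels, linewidth=0.8)
    if upper_db is not None:
        plt.axhline(upper_db, linestyle="--", color="red", label="decibel upper limit threshold")
    if lower_db is not None:
        plt.axhline(lower_db, linestyle="--", color="green", label="decibel lower limit threshold")
    if upper_db is not None or lower_db is not None:
        plt.legend()
    plt.xlabel("video playing time (s)")
    plt.ylabel("decibel value")
    plt.show()
```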
Step S102B: and obtaining a clipping decibel threshold set by the user according to the sound wave shape.
The user can set the clipping decibel threshold value according to the displayed sound wave form, and the set clipping decibel threshold value is input through the external input equipment.
Fig. 3b is a schematic diagram of a second sound wave waveform. As can be seen from fig. 3b, the decibel upper limit and the decibel lower limit in the figure are the clipping decibel thresholds set by the user, the white areas higher than the decibel upper limit correspond to the upper limit sound wave bands in the sound wave information, and the white areas lower than the decibel lower limit correspond to the lower limit sound wave bands in the sound wave information.
Therefore, when the scheme provided by the embodiment of the invention is applied to video editing, the sound wave waveform of the sound in the video is displayed to the user, so that the user can visually grasp the sound wave information of the sound in the video from the waveform and set the clipping decibel threshold accordingly. In this way, the clipping decibel threshold can be set in a more user-friendly manner, and the clipped video better matches the user's expectations.
Corresponding to the video clipping method, the embodiment of the invention also provides a video clipping device.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a first video editing apparatus according to an embodiment of the present invention, where the apparatus includes:
an information extraction module 401, configured to extract sound wave information of sound in a video;
a threshold obtaining module 402, configured to obtain a clipping decibel threshold determined according to the sound wave information;
a segment determining module 403, configured to determine a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
a video clipping module 404, configured to clip the video segment to be clipped from the video.
Therefore, when the scheme provided by the embodiment of the invention is applied to video clipping, the sound wave information of sound in a video is extracted, the clipping decibel threshold value determined according to the sound wave information is also obtained, and the video segment to be clipped in the video is determined according to the clipping decibel threshold value and the sound wave information on the basis, so that a user does not need to watch the video repeatedly and subjectively determine the video segment to be clipped, and the video clipping efficiency can be improved.
In an embodiment of the present invention, referring to fig. 5, a schematic structural diagram of a second video editing apparatus is provided, and compared with the foregoing embodiment shown in fig. 4, in this embodiment, the clipping decibel threshold value includes: an upper decibel threshold and/or a lower decibel threshold;
the segment determining module 403 includes:
a first segment determining submodule 403A, configured to determine, when the clipping decibel threshold includes the decibel upper threshold, an upper limit vocal segment in the vocal information where a decibel value is greater than the decibel upper threshold, and determine a video segment in the video corresponding to the upper limit vocal segment as a video segment to be clipped;
the second segment determining submodule 403B is configured to determine, when the clipping decibel threshold includes the decibel lower limit threshold, a lower sound limiting band in the sound wave information, where a decibel value is smaller than the decibel lower limit threshold, and determine, according to a video segment in the video corresponding to the lower sound wave band, a video segment to be clipped.
As can be seen from the above, when the scheme provided by the embodiment of the present invention is applied to video editing, because a noise or silent video segment in a video is usually a video segment to be edited, and a decibel value of the noise is usually large, and a silence decibel value is usually small, by comparing a magnitude relationship between a decibel value of sound wave information and an editing decibel threshold value, an upper limit sound wave band and a lower limit sound wave band in the sound wave information can be accurately determined, so that the noise video segment to be edited in the video is accurately determined according to the determined upper limit sound wave band, and the silence video segment to be edited in the video is accurately determined according to the determined lower limit sound wave band.
In an embodiment of the present invention, the second segment determining submodule 403B is specifically configured to:
under the condition that the clipping decibel threshold value comprises the decibel lower limit threshold value, determining a lower sound limiting band of which the decibel value is smaller than the decibel lower limit threshold value in the sound wave information, obtaining the duration of an alternative video segment in the video, and determining a video segment of which the duration is larger than the gap duration in the alternative video segment as a video segment to be clipped, wherein the alternative video segment is as follows: video segments of the video corresponding to each of the lower sound-limiting bands.
As can be seen from the above, when the scheme provided by the embodiment of the present invention is applied to video clipping, the video to be clipped in the alternative video segment is determined according to the size relationship between the duration of the alternative video segment and the duration of the gap, so that the alternative video segment with the duration less than or equal to the duration of the gap is retained, and the alternative video segment with the duration greater than the duration of the gap is clipped, thereby achieving fast clipping of the video, ensuring the continuity of the video, and improving the ornamental performance of the video.
In one embodiment of the present invention, the gap duration is a duration determined according to the durations of the alternative video segments.
As can be seen from the above, when the scheme provided by the embodiment of the present invention is applied to video clipping, the gap duration is used to determine the video segment to be clipped, and the gap duration is determined according to the duration of the alternative video segment, so that the number of the video segments to be clipped can be controlled, and thus, while the video is rapidly clipped, the continuity of the clipped video can be ensured, and the observability of the clipped video is improved.
In an embodiment of the present invention, the threshold obtaining module 402 is specifically configured to:
displaying the sound wave form of the sound in the video to a user according to the sound wave information;
and obtaining a clipping decibel threshold value set by the user according to the sound wave shape.
Therefore, when the scheme provided by the embodiment of the invention is applied to video editing, the sound wave waveform of the sound in the video is displayed to the user, so that the user can visually acquire the sound wave information of the sound in the video according to the sound wave waveform, and the editing decibel threshold value is set according to the sound wave waveform. Therefore, when the scheme provided by the embodiment of the invention is applied, the clipping decibel threshold value can be set more humanized, so that the clipped video is more in line with the expectation of the user.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
extracting sound wave information of sound in the video;
obtaining a clipping decibel threshold value determined according to the sound wave information;
determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
and clipping the video segments to be clipped from the video.
Besides, the electronic device may also implement other video clipping methods as described in the previous embodiments, and will not be described in detail here.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment provided by the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the video clipping methods described above.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the video clipping methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of video clipping, the method comprising:
extracting sound wave information of sound in the video;
obtaining a clipping decibel threshold value determined according to the sound wave information;
determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
and clipping the video segments to be clipped from the video.
2. The method of claim 1, wherein clipping the decibel threshold comprises: an upper decibel threshold and/or a lower decibel threshold;
determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information, wherein the determining comprises the following steps:
under the condition that the clipping decibel threshold value comprises the decibel upper limit threshold value, determining an upper limit sound wave band of which the decibel value is greater than the decibel upper limit threshold value in the sound wave information, and determining a video segment corresponding to the upper limit sound wave band in the video as a video segment to be clipped;
and under the condition that the clipping decibel threshold value comprises the decibel lower limit threshold value, determining a lower limit sound wave band of which the decibel value is smaller than the decibel lower limit threshold value in the sound wave information, and determining a video segment to be clipped according to a video segment corresponding to the lower limit sound wave band in the video.
3. The method of claim 2, wherein determining a video segment to be clipped from a video segment in the video corresponding to the lower bound sound wave segment comprises:
obtaining the duration of an alternative video clip in the video, wherein the alternative video clip is: video clips in the video corresponding to the lower sound-limiting bands;
and determining the video segments with the duration being greater than the gap duration in the alternative video segments as the video segments to be clipped.
4. The method of claim 3,
the void duration is: and the duration is determined according to the duration of the alternative video clip.
5. The method according to any one of claims 1-4, wherein the obtaining a clipping decibel threshold determined from the sonic information comprises:
displaying the sound wave form of the sound in the video to a user according to the sound wave information;
and obtaining a clipping decibel threshold value set by the user according to the sound wave shape.
6. A video clipping apparatus, characterized in that the apparatus comprises:
the information extraction module is used for extracting sound wave information of sound in the video;
the threshold value obtaining module is used for obtaining a clipping decibel threshold value determined according to the sound wave information;
the segment determining module is used for determining a video segment to be clipped in the video according to the clipping decibel threshold and the sound wave information;
and the video clipping module is used for clipping the video segments to be clipped from the video.
7. The apparatus of claim 6, wherein the clipping decibel threshold comprises: an upper decibel threshold and/or a lower decibel threshold;
the segment determination module includes:
the first segment determining submodule is used for determining an upper limit sound wave band of which the decibel value is greater than the decibel upper limit threshold in the sound wave information under the condition that the clipping decibel threshold comprises the decibel upper limit threshold, and determining a video segment corresponding to the upper limit sound wave band in the video as a video segment to be clipped;
and the second segment determining submodule is used for determining a lower sound limiting band of which the decibel value is smaller than the decibel lower limit threshold in the sound wave information under the condition that the clipping decibel threshold comprises the decibel lower limit threshold, and determining a video segment to be clipped according to a video segment corresponding to the lower sound limiting band in the video.
8. The apparatus of claim 7, wherein the second segment determination submodule is specifically configured to:
under the condition that the clipping decibel threshold value comprises the decibel lower limit threshold value, determining a lower sound limiting band of which the decibel value is smaller than the decibel lower limit threshold value in the sound wave information, obtaining the duration of an alternative video segment in the video, and determining a video segment of which the duration is larger than the gap duration in the alternative video segment as a video segment to be clipped, wherein the alternative video segment is as follows: video segments of the video corresponding to each of the lower sound-limiting bands.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the video clipping method steps of any of claims 1-5 when executing a program stored in the memory.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the video clipping method steps of any one of claims 1 to 5.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111437367.6A CN114268831A (en) 2021-11-30 2021-11-30 Video editing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111437367.6A CN114268831A (en) 2021-11-30 2021-11-30 Video editing method and device

Publications (1)

Publication Number Publication Date
CN114268831A true CN114268831A (en) 2022-04-01

Family

ID=80825807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111437367.6A Pending CN114268831A (en) 2021-11-30 2021-11-30 Video editing method and device

Country Status (1)

Country Link
CN (1) CN114268831A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017473A1 (en) * 2002-07-27 2004-01-29 Sony Computer Entertainment Inc. Man-machine interface using a deformable device
JP2005333205A (en) * 2004-05-18 2005-12-02 Nippon Telegr & Teleph Corp <Ntt> Device, program, and method for editing content
US20170092323A1 (en) * 2014-03-10 2017-03-30 Paul Goldman Audio/Video Merge Tool
US9699523B1 (en) * 2014-09-08 2017-07-04 ProSports Technologies, LLC Automated clip creation
US10057651B1 (en) * 2015-10-05 2018-08-21 Twitter, Inc. Video clip creation using social media
CN109889856A (en) * 2019-01-21 2019-06-14 南京微特喜网络科技有限公司 A kind of live streaming editing system based on artificial intelligence
CN110121103A (en) * 2019-05-06 2019-08-13 郭凌含 The automatic editing synthetic method of video and device
CN110430425A (en) * 2019-07-31 2019-11-08 北京奇艺世纪科技有限公司 A kind of video fluency determines method, apparatus, electronic equipment and medium
CN110992993A (en) * 2019-12-17 2020-04-10 Oppo广东移动通信有限公司 Video editing method, video editing device, terminal and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150660A (en) * 2022-06-09 2022-10-04 深圳市大头兄弟科技有限公司 Video editing method based on subtitles and related equipment
CN115150660B (en) * 2022-06-09 2024-05-10 深圳市闪剪智能科技有限公司 Video editing method based on subtitles and related equipment

Similar Documents

Publication Publication Date Title
CN110430425B (en) Video fluency determination method and device, electronic equipment and medium
US8971549B2 (en) Audio signal processing apparatus, audio signal processing method, and program
US10146868B2 (en) Automated detection and filtering of audio advertisements
CN109348274B (en) Live broadcast interaction method and device and storage medium
CN107910014A (en) Test method, device and the test equipment of echo cancellor
CN110784768B (en) Multimedia resource playing method, storage medium and electronic equipment
US11469731B2 (en) Systems and methods for identifying and remediating sound masking
JP3886372B2 (en) Acoustic inflection point extraction apparatus and method, acoustic reproduction apparatus and method, acoustic signal editing apparatus, acoustic inflection point extraction method program recording medium, acoustic reproduction method program recording medium, acoustic signal editing method program recording medium, acoustic inflection point extraction method Program, sound reproduction method program, sound signal editing method program
US11910060B2 (en) System and method for automatic detection of periods of heightened audience interest in broadcast electronic media
US8159775B2 (en) Vibration identification and attenuation system and method
CN111031329B (en) Method, apparatus and computer storage medium for managing audio data
CN110688518A (en) Rhythm point determining method, device, equipment and storage medium
US20180175816A1 (en) Using Averaged Audio Measurements to Automatically Set Audio Compressor Threshold Levels
US10431242B1 (en) Systems and methods for identifying speech based on spectral features
CN104954934A (en) Audio play method and electronic equipment
CN114268831A (en) Video editing method and device
CN115731943A (en) Plosive detection method, plosive detection system, storage medium and electronic equipment
CN115273826A (en) Singing voice recognition model training method, singing voice recognition method and related device
KR20160056104A Analyzing Device and Method for User's Voice Tone
CN111354383B (en) Audio defect positioning method and device and terminal equipment
CN112612688A (en) Method and device for testing equipment fluency, electronic equipment and storage medium
CN108205550B (en) Audio fingerprint generation method and device
CN113556605A (en) Illegal advertisement determination method and device, electronic equipment and storage medium
JP4336362B2 (en) Sound reproduction apparatus and method, sound reproduction program and recording medium therefor
CN115484503A (en) Bullet screen generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230818

Address after: 12 Dachenglian Road, Singapore # 04-01B (534233)

Applicant after: Bright Jupiter Private Ltd.

Address before: 120 Robinson Road # 13-01 Singapore

Applicant before: Lemei Technology Co.,Ltd.
