CN104469546B - A method and apparatus for processing video segments - Google Patents

A method and apparatus for processing video segments

Info

Publication number
CN104469546B
CN104469546B (application CN201410812127.3A)
Authority
CN
China
Prior art keywords
video segment
fragment
cutting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410812127.3A
Other languages
Chinese (zh)
Other versions
CN104469546A (en)
Inventor
龚云波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Tvmining Juyuan Media Technology Co Ltd
Original Assignee
Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Tvmining Juyuan Media Technology Co Ltd filed Critical Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority to CN201410812127.3A priority Critical patent/CN104469546B/en
Publication of CN104469546A publication Critical patent/CN104469546A/en
Application granted granted Critical
Publication of CN104469546B publication Critical patent/CN104469546B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and apparatus for processing video segments. The method includes: obtaining video segments and the cutting accuracy rate of each segment; further splitting any unqualified segment whose cutting accuracy rate is below a preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment; computing the color-histogram similarity between the head fragment and the preceding video segment, and/or between the tail fragment and the following video segment; and, when the color-histogram similarity exceeds a preset threshold, merging the head fragment into the preceding segment and the tail fragment into the following segment. By merging the head or tail of an unqualified segment into whichever neighboring segment has the closer color histogram, the invention corrects the video segments produced by cutting.

Description

A method and apparatus for processing video segments
Technical field
The present invention relates to the field of video, and in particular to a method and apparatus for processing video segments.
Background technology
Whether on a computer or on a mobile terminal such as a smartphone or tablet, video playback is one of the functions users rely on most.
To give users a flexible viewing experience, a video provider usually needs to cut a video into multiple content-based segments. For example, a half-hour news program can be cut according to its individual news stories into multiple independent segments, letting users select the stories they want to watch.
Video cutting can be performed by manual editing. To improve efficiency, a video can also be cut automatically using cues such as faces, audio, or captions. For example, faces and voices can identify the people on screen and determine their roles; in a news program these roles include the studio anchor, the field reporter, and the guest. Role and title information can then be combined to locate content switches, which serve as cutting positions: a switch between anchors is usually a content switch, whereas a handover from the anchor to a field reporter usually stays within one story.
However, cutting a video in this way can produce segments of unsatisfactory quality. How to process the segments whose cutting quality is poor is therefore an urgent problem to be solved.
Summary of the invention
In view of this, an object of the embodiments of the present invention is to provide a method and apparatus for processing video segments that can effectively handle segments whose cutting is unqualified.
To achieve the above object, an embodiment of the present invention provides a method for processing video segments, comprising the following steps:
obtaining the video segments produced by cutting a video, together with the cutting accuracy rate of each segment;
splitting any unqualified video segment whose cutting accuracy rate is below a first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment;
computing a first color-histogram similarity between the head fragment of the unqualified segment and the video segment preceding it, and/or a second color-histogram similarity between the tail fragment of the unqualified segment and the video segment following it;
merging the head fragment into the preceding segment when the first color-histogram similarity exceeds a second preset threshold, and merging the tail fragment into the following segment when the second color-histogram similarity exceeds a third preset threshold.
In an embodiment of the present invention, splitting an unqualified video segment whose cutting accuracy rate is below the first preset threshold into fragments includes:
computing the color-histogram similarity between adjacent video frames within the unqualified segment; and
assigning adjacent frames whose computed similarity exceeds a fourth preset threshold to the same fragment.
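As a minimal sketch (not the patented implementation), the grouping of adjacent frames into fragments described above might look as follows; the histogram representation and the cosine-style similarity are illustrative assumptions, since the patent does not fix the similarity measure:

```python
def histogram_similarity(h1, h2):
    """Cosine similarity between two color histograms treated as vectors."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = sum(a * a for a in h1) ** 0.5
    n2 = sum(b * b for b in h2) ** 0.5
    if n1 == 0 or n2 == 0:
        return 0.0
    return dot / (n1 * n2)

def split_into_fragments(frame_histograms, threshold):
    """Group adjacent frames into fragments: consecutive frames whose
    histogram similarity exceeds `threshold` fall into the same fragment."""
    fragments = [[0]]
    for i in range(1, len(frame_histograms)):
        sim = histogram_similarity(frame_histograms[i - 1], frame_histograms[i])
        if sim > threshold:
            fragments[-1].append(i)   # same fragment as the previous frame
        else:
            fragments.append([i])     # similarity dropped: start a new fragment
    return fragments
```

With this sketch, a run of visually similar frames collapses into one fragment, and a sharp histogram change (e.g. a shot boundary) opens a new one.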
In an embodiment of the present invention, obtaining the cutting accuracy rate of each video segment includes:
obtaining the cutting features used for each segment; and
computing the cutting accuracy rate of each segment from the obtained features and their preset weights.
In an embodiment of the present invention, the cutting accuracy rate is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment is assigned to, λ_i is the weight of the i-th cutting feature, Σ_i λ_i = 1, and i indexes the cutting features.
In an embodiment of the present invention, the cutting features include one or more of: faces, audio, titles, and color histograms.
In an embodiment of the present invention, computing a color-histogram similarity includes: treating each color histogram as a vector and computing the distance between the two vectors.
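The vector-distance computation described here can be sketched as follows; Euclidean distance is only one plausible choice of metric, as the patent does not specify which distance is used:

```python
import math

def histogram_distance(h1, h2):
    """Treat each color histogram as a vector and return the Euclidean
    distance between the two vectors; a smaller distance means the
    histograms (and hence the frames or segments) are more similar."""
    if len(h1) != len(h2):
        raise ValueError("histograms must have the same number of bins")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
```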
An embodiment of the present invention also provides an apparatus for processing video segments, including:
an acquisition module, for obtaining the video segments produced by cutting a video and the cutting accuracy rate of each segment;
a cutting module, for splitting any unqualified video segment whose cutting accuracy rate is below a first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module, for computing a first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment, and/or a second color-histogram similarity between the tail fragment and the following video segment; and
a processing module, for merging the head fragment into the preceding segment when the first color-histogram similarity exceeds a second preset threshold, and merging the tail fragment into the following segment when the second color-histogram similarity exceeds a third preset threshold.
In an embodiment of the present invention, the cutting module includes:
a first computing unit, for computing the color-histogram similarity between adjacent video frames within the unqualified segment; and
a processing unit, for assigning adjacent frames whose computed similarity exceeds a fourth preset threshold to the same fragment.
In an embodiment of the present invention, the acquisition module includes:
an acquiring unit, for obtaining the cutting features used for each video segment; and
a second computing unit, for computing the cutting accuracy rate of each segment from the obtained features and their preset weights.
In an embodiment of the present invention, the cutting accuracy rate is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment is assigned to, λ_i is the weight of the i-th cutting feature, Σ_i λ_i = 1, and i indexes the cutting features.
In an embodiment of the present invention, the cutting features include one or more of: faces, audio, titles, and color histograms.
The technical scheme provided by the embodiments of the present invention can have the following beneficial effects:
by comparing the color histogram of the head and/or tail of an unqualified video segment with those of the neighboring segments, and merging the head or tail into whichever neighboring segment has the closer color histogram, the video segments produced by cutting are corrected.
Further features and advantages of the embodiments of the present invention will be set forth in the following description, and will in part become apparent from the description or be learned by practicing the invention. The objects and other advantages of the invention may be realized and obtained by the structure particularly pointed out in the written description, claims, and drawings.
The technical scheme of the embodiments of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
The drawings are provided for a further understanding of the embodiments of the present invention and constitute a part of the specification; together with the embodiments, they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 is a flow chart of a method for processing video segments in an embodiment of the invention.
Fig. 2 is a flow chart of a method for processing video segments in an embodiment of the invention.
Fig. 3 is a flow chart of a method for processing video segments in an embodiment of the invention.
Fig. 4 is a structural diagram of an apparatus for processing video segments in an embodiment of the invention.
Fig. 5 is a structural diagram of the cutting module of the apparatus for processing video segments in an embodiment of the invention.
Fig. 6 is a structural diagram of the acquisition module of the apparatus for processing video segments in an embodiment of the invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here serve only to illustrate and explain the invention and are not intended to limit it.
Fig. 1 shows a flow chart of a method for processing video segments in an embodiment of the present invention. The method includes:
Step S11: obtain the video segments produced by cutting a video and the cutting accuracy rate of each segment.
Step S12: split any unqualified video segment whose cutting accuracy rate is below a first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment.
Step S13: compute a first color-histogram similarity between the head fragment of the unqualified segment and the preceding video segment, and/or a second color-histogram similarity between the tail fragment and the following video segment.
Step S14: when the first color-histogram similarity exceeds the second preset threshold, merge the head fragment into the preceding segment; when the second color-histogram similarity exceeds the third preset threshold, merge the tail fragment into the following segment.
In this embodiment of the present invention, comparing the color histogram of the head and/or tail of an unqualified video segment with those of the neighboring segments, and merging the head or tail into whichever neighbor has the closer color histogram, corrects the video segments produced by cutting.
Fig. 2 shows another embodiment of the method for processing video segments proposed in the embodiments of the present invention. In this embodiment, the unqualified video segment is split according to color histograms. The embodiment comprises the following steps:
Step S21: obtain the video segments produced by cutting a video and the cutting accuracy rate of each segment.
Step S22: determine the unqualified video segments from each segment's cutting accuracy rate and the first preset threshold.
Step S23: compute the color-histogram similarity between adjacent video frames within an unqualified segment.
Computing the color-histogram similarity includes: treating the color histogram of each frame as a vector, so that the similarity between two frames is the distance between the two vectors.
Step S24: assign adjacent frames whose computed similarity exceeds the fourth preset threshold to the same fragment.
Step S25: compute the first color-histogram similarity between the head fragment of the unqualified segment and the segment preceding it.
Step S26: judge whether the first color-histogram similarity exceeds the second preset threshold; if so, perform step S27.
Step S27: merge the head fragment of the unqualified segment into the preceding segment.
Step S28: compute the second color-histogram similarity between the tail fragment of the unqualified segment and the segment following it.
Step S29: judge whether the second color-histogram similarity exceeds the third preset threshold; if so, perform step S210.
Step S210: merge the tail fragment of the unqualified segment into the following segment.
Note that steps S25-S27 and S28-S210 need not be executed in the order above: they may run in parallel, or S28-S210 may run before S25-S27. In other embodiments of the present invention, only S25-S27 or only S28-S210 may be performed. For example, when the unqualified segment is the last segment of the video, only S25-S27 are performed; when it is the first segment, only S28-S210 are performed.
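Pulling steps S25-S210 together, a minimal sketch of the per-segment correction decision might look like this. All names, the cosine similarity, and the handling of missing neighbors are illustrative assumptions rather than the patent's implementation:

```python
def cosine(h1, h2):
    """Cosine similarity of two color histograms treated as vectors."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = sum(a * a for a in h1) ** 0.5
    n2 = sum(b * b for b in h2) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def correct_segment(prev_hist, head_hist, tail_hist, next_hist, t2, t3):
    """Per steps S25-S210: merge the head fragment into the previous
    segment if its histogram is close enough (similarity > t2), and the
    tail fragment into the next segment if similarity > t3. A None
    neighbor (first/last segment of the video) disables that side,
    mirroring the note above. Returns (merge_head, merge_tail)."""
    merge_head = prev_hist is not None and cosine(head_hist, prev_hist) > t2
    merge_tail = next_hist is not None and cosine(tail_hist, next_hist) > t3
    return merge_head, merge_tail
```

The two decisions are independent, which is why the specification allows S25-S27 and S28-S210 to run in either order or in parallel.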
Fig. 3 shows another embodiment of the method for processing video segments proposed in the embodiments of the present invention. This embodiment includes computing the cutting accuracy rate of the video segments, and comprises the following steps:
Step S31: obtain the video segments produced by cutting a video.
Step S32: obtain the cutting features used for each segment.
Step S33: compute the cutting accuracy rate of each segment from the obtained features and their preset weights.
In an embodiment of the present invention, the cutting accuracy rate is computed as:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function whose value is determined by the class y the video segment is assigned to and the segment's feature vector x: f_i takes the value 1 when x and y satisfy the segment's classification condition. For example, if the cutting feature used for the segment is a face, then x is the face feature vector and y indicates whether the segment can be cut using the face feature; if so, f is 1. Which parts of a video may be cut with which cutting features is preset. λ_i is the weight of the i-th cutting feature, with Σ_i λ_i = 1, and i indexes the cutting features.
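The formula above is a conditional log-linear (maximum-entropy-style) model. As a hedged sketch under the assumption that the feature functions are binary and the label set is finite, it can be evaluated as:

```python
import math

def cutting_accuracy(x, y, classes, features, weights):
    """Evaluate P(y|x) = exp(sum_i w_i * f_i(x, y))
                       / sum_{y'} exp(sum_i w_i * f_i(x, y')).
    `features` is a list of binary feature functions f_i(x, y) -> 0 or 1,
    `weights` the corresponding weights lambda_i (summing to 1), and
    `classes` the set of possible labels y'."""
    def score(label):
        # Unnormalized weight of one candidate label.
        return math.exp(sum(w * f(x, label) for w, f in zip(weights, features)))
    return score(y) / sum(score(label) for label in classes)
```

Normalizing over all candidate labels keeps the accuracy rate in (0, 1), so it can be compared directly against the first preset threshold.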
The cutting features can include one or more of: faces, audio, titles, and color histograms.
Step S34: determine the unqualified video segments from each segment's cutting accuracy rate and the first preset threshold.
The following steps are then performed for each unqualified video segment:
Step S35: compute the color-histogram similarity between adjacent video frames within the unqualified segment.
Computing the color-histogram similarity includes: treating the color histogram of each frame as a vector, so that the similarity between two frames is the distance between the two vectors.
Step S36: assign adjacent frames whose computed similarity exceeds the fourth preset threshold to the same fragment.
Step S37: compute the first color-histogram similarity between the head fragment of the unqualified segment and the segment preceding it, and the second color-histogram similarity between the tail fragment and the segment following it.
Step S38: judge whether the first color-histogram similarity exceeds the second preset threshold, and whether the second color-histogram similarity exceeds the third preset threshold; when the first exceeds the second threshold, perform step S39; when the second exceeds the third threshold, perform step S310.
Step S39: merge the head fragment of the unqualified segment into the preceding segment.
Step S310: merge the tail fragment of the unqualified segment into the following segment.
Correspondingly, as shown in Fig. 4, an embodiment of the present invention also provides an apparatus for processing video segments, including:
an acquisition module 401, for obtaining the video segments produced by cutting a video and the cutting accuracy rate of each segment;
a cutting module 402, for splitting any unqualified video segment whose cutting accuracy rate is below the first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module 403, for computing the first color-histogram similarity between the head fragment of the unqualified segment and the preceding segment, and/or the second color-histogram similarity between the tail fragment and the following segment; and
a processing module 404, for merging the head fragment into the preceding segment when the first color-histogram similarity exceeds the second preset threshold, and merging the tail fragment into the following segment when the second color-histogram similarity exceeds the third preset threshold.
As shown in Fig. 5, the cutting module 402 includes:
a first computing unit 4021, for computing the color-histogram similarity between adjacent video frames within the unqualified segment; and
a processing unit 4022, for assigning adjacent frames whose computed similarity exceeds the fourth preset threshold to the same fragment.
As shown in Fig. 6, the acquisition module 401 includes:
an acquiring unit 4011, for obtaining the cutting features used for each video segment; and
a second computing unit 4012, for computing the cutting accuracy rate of each segment from the obtained features and their preset weights.
The cutting accuracy rate is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment is assigned to, λ_i is the weight of the i-th cutting feature, Σ_i λ_i = 1, and i indexes the cutting features.
The cutting features include one or more of: faces, audio, titles, and color histograms.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flow charts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flow charts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (11)

1. A method for processing video segments, comprising the following steps:
obtaining the video segments produced by cutting a video and the cutting accuracy rate of each segment;
splitting any unqualified video segment whose cutting accuracy rate is below a first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment;
when the fragments include the head fragment of the unqualified segment, computing a first color-histogram similarity between the head fragment and the video segment preceding the unqualified segment, and, when the first color-histogram similarity exceeds a second preset threshold, merging the head fragment into the preceding segment;
when the fragments include the tail fragment of the unqualified segment, computing a second color-histogram similarity between the tail fragment and the video segment following the unqualified segment, and, when the second color-histogram similarity exceeds a third preset threshold, merging the tail fragment into the following segment.
2. The method according to claim 1, wherein splitting an unqualified video segment whose cutting accuracy rate is below the first preset threshold into fragments comprises:
computing the color-histogram similarity between adjacent video frames within the unqualified segment; and
assigning adjacent frames whose computed similarity exceeds a fourth preset threshold to the same fragment.
3. The method according to claim 1, wherein obtaining the cutting accuracy rate of each video segment comprises:
obtaining the cutting features used for each segment; and
computing the cutting accuracy rate of each segment from the obtained features and their preset weights.
4. The method according to claim 3, wherein the cutting accuracy rate is:
P(y|x) = exp(Σ_i λ_i f_i(x, y)) / Σ_y exp(Σ_i λ_i f_i(x, y))
where f_i(x, y) is a binary model feature function, x is the feature vector of the video segment, y is the class the segment is assigned to, λ_i is the weight of the i-th cutting feature, Σ_i λ_i = 1, and i indexes the cutting features.
5. The method according to claim 3, wherein the cutting features comprise one or more of: faces, audio, titles, and color histograms.
6. The method according to claim 1, wherein computing a color-histogram similarity comprises: treating each color histogram as a vector and computing the distance between the two vectors.
7. An apparatus for processing video segments, comprising:
an acquisition module, for obtaining the video segments produced by cutting a video and the cutting accuracy rate of each segment;
a cutting module, for splitting any unqualified video segment whose cutting accuracy rate is below a first preset threshold into fragments, the fragments including at least the head fragment and/or the tail fragment of the unqualified segment;
a computing module, for computing, when the fragments include the head fragment of the unqualified segment, a first color-histogram similarity between the head fragment and the video segment preceding the unqualified segment, and, when the fragments include the tail fragment of the unqualified segment, a second color-histogram similarity between the tail fragment and the video segment following the unqualified segment; and
a processing module, for merging the head fragment into the preceding segment when the fragments include the head fragment and the first color-histogram similarity exceeds a second preset threshold, and merging the tail fragment into the following segment when the fragments include the tail fragment and the second color-histogram similarity exceeds a third preset threshold.
8. The device according to claim 7, characterized in that the cutting module comprises:
a first computing unit, configured to calculate the color histogram similarity between adjacent video frames in the unqualified video segment;
a processing unit, configured to group adjacent video frames whose calculated color histogram similarity exceeds a fourth preset threshold into the same fragment.
9. The device according to claim 7, characterized in that the acquisition module comprises:
an acquiring unit, configured to obtain the cutting features used for each video segment;
a second computing unit, configured to calculate the cutting accuracy rate of each video segment according to the obtained cutting features and the preset weights of the cutting features.
10. The device according to claim 9, characterized in that the cutting accuracy rate is:
$$P(y \mid x) = \frac{\exp\left(\sum_i \lambda_i f_i(x, y)\right)}{\sum_y \exp\left(\sum_i \lambda_i f_i(x, y)\right)}$$
where f_i(x, y) is a binary model feature function, x is the video segment feature vector, y is the attributed classification, λ_i is the weight of the i-th cutting feature, and Σ_i λ_i = 1.
11. The device according to claim 9, characterized in that the cutting features include one or more of the following: face, sound, title, and color histogram.
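The cutting-accuracy rate of claim 10 is a maximum-entropy (softmax) model over weighted binary feature functions. Below is a minimal sketch in Python; the two feature functions, the dictionary keys, and the 0/1 labels ("wrong cut"/"correct cut") are hypothetical illustrations that do not appear in the patent, and the weights are arbitrary values summing to 1.

```python
import math

def f_face(x, y):
    # Binary feature: 1 when the face cue agrees with classification y.
    return 1 if x["face_boundary"] == y else 0

def f_sound(x, y):
    # Binary feature: 1 when the sound cue agrees with classification y.
    return 1 if x["sound_break"] == y else 0

FEATURES = [f_face, f_sound]
WEIGHTS = [0.6, 0.4]  # preset cutting-feature weights; they sum to 1

def cutting_accuracy(x, y, labels=(0, 1)):
    """P(y|x) = exp(sum_i lam_i f_i(x,y)) / sum_y' exp(sum_i lam_i f_i(x,y'))."""
    def score(label):
        return math.exp(sum(w * f(x, label) for w, f in zip(WEIGHTS, FEATURES)))
    return score(y) / sum(score(label) for label in labels)

# Both cues vote for classification 1 ("correct cut"), so P(1|x) > P(0|x).
x = {"face_boundary": 1, "sound_break": 1}
print(cutting_accuracy(x, 1))  # about 0.731, i.e. e / (1 + e)
```

Because Σ_i λ_i = 1 and the features are binary, the numerator is at most e, so the denominator normalizes the score into a probability over the candidate classifications.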
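Claims 6 and 7 hinge on a color histogram similarity computed as a distance between histogram vectors and compared against a preset threshold. A sketch of that comparison, under stated assumptions: the patent does not fix the distance metric or how distance maps to similarity, so Euclidean distance mapped into (0, 1] is used here, and the function names and threshold value are hypothetical.

```python
import math

def hist_similarity(h1, h2):
    """Similarity of two histograms viewed as vectors: 1 / (1 + distance)."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
    return 1.0 / (1.0 + distance)

def merge_head_fragment(prev_segment_hist, head_fragment_hist, threshold=0.9):
    """Processing-module rule for the head fragment: merge it into the
    preceding segment when the first color histogram similarity exceeds
    the (second preset) threshold."""
    return hist_similarity(prev_segment_hist, head_fragment_hist) > threshold

# Identical histograms give similarity 1.0 and merge; disjoint ones do not.
print(merge_head_fragment([0.5, 0.5], [0.5, 0.5]))  # True
print(merge_head_fragment([1.0, 0.0], [0.0, 1.0]))  # False
```

The tail-fragment rule of claim 7 is symmetric: compare the tail fragment's histogram with the following segment's histogram against the third preset threshold.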
CN201410812127.3A 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment Expired - Fee Related CN104469546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410812127.3A CN104469546B (en) 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment

Publications (2)

Publication Number Publication Date
CN104469546A CN104469546A (en) 2015-03-25
CN104469546B true CN104469546B (en) 2017-09-15

Family

ID=52914791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410812127.3A Expired - Fee Related CN104469546B (en) 2014-12-22 2014-12-22 A kind of method and apparatus for handling video segment

Country Status (1)

Country Link
CN (1) CN104469546B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718871B (en) * 2016-01-18 2017-11-28 成都索贝数码科技股份有限公司 A kind of video host's recognition methods based on statistics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685398A (en) * 2011-09-06 2012-09-19 天脉聚源(北京)传媒科技有限公司 News video scene generating method
CN103426176A (en) * 2013-08-27 2013-12-04 重庆邮电大学 Video shot detection method based on histogram improvement and clustering algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4613867B2 (en) * 2005-05-26 2011-01-19 ソニー株式会社 Content processing apparatus, content processing method, and computer program

Also Published As

Publication number Publication date
CN104469546A (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN109819313B (en) Video processing method, device and storage medium
CN110139159B (en) Video material processing method and device and storage medium
US10755102B2 (en) Methods and systems of spatiotemporal pattern recognition for video content development
US20160256780A1 (en) 3D Sports Playbook
CN109688463A (en) A kind of editing video generation method, device, terminal device and storage medium
CN108337532A (en) Perform mask method, video broadcasting method, the apparatus and system of segment
US8649573B1 (en) Method and apparatus for summarizing video data
WO2021120685A1 (en) Video generation method and apparatus, and computer system
CN106534967A (en) Video editing method and device
US20150098691A1 (en) Technology for dynamically adjusting video playback speed
CN110889379B (en) Expression package generation method and device and terminal equipment
CN103200463A (en) Method and device for generating video summary
US8897603B2 (en) Image processing apparatus that selects a plurality of video frames and creates an image based on a plurality of images extracted and selected from the frames
US8856636B1 (en) Methods and systems for trimming video footage
CN106454151A (en) Video image stitching method and device
CN110246110B (en) Image evaluation method, device and storage medium
CN111757175A (en) Video processing method and device
US11508154B2 (en) Systems and methods for generating a video summary
US9990772B2 (en) Augmented reality skin evaluation
CN112367551A (en) Video editing method and device, electronic equipment and readable storage medium
WO2021031733A1 (en) Method for generating video special effect, and terminal
CN108364338B (en) Image data processing method and device and electronic equipment
CN113870133A (en) Multimedia display and matching method, device, equipment and medium
CN110198482A (en) A kind of video emphasis bridge section mask method, terminal and storage medium
CN104822087B (en) A kind of processing method and processing device of video-frequency band

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and device for processing video clip

Effective date of registration: 20210104

Granted publication date: 20170915

Pledgee: Inner Mongolia Huipu Energy Co.,Ltd.

Pledgor: WUXI TVMINING MEDIA SCIENCE & TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001517

PE01 Entry into force of the registration of the contract for pledge of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170915

CF01 Termination of patent right due to non-payment of annual fee