CN109740530A - Video segment extraction method, device, equipment and computer-readable storage medium - Google Patents
Video segment extraction method, device, equipment and computer-readable storage medium
- Publication number
- CN109740530A CN109740530A CN201811653235.5A CN201811653235A CN109740530A CN 109740530 A CN109740530 A CN 109740530A CN 201811653235 A CN201811653235 A CN 201811653235A CN 109740530 A CN109740530 A CN 109740530A
- Authority
- CN
- China
- Prior art keywords
- time point
- target
- video
- segment
- characteristic value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Abstract
The invention discloses a video segment extraction method, device, equipment and computer-readable storage medium. The method obtains characteristic information of a target object, searches an original video for the target object according to that information, and obtains, in order, the target time points at which the original video contains the target object. It then calculates the time difference between each pair of adjacent target time points and, from those differences, determines target start time points and target end time points among the target time points, so that the target video segments containing the target object can be extracted from the original video. By determining start and end time points from the time points at which the target object appears and the gaps between them, and extracting the target video segments accordingly, the method realizes automatic cutting of the video and improves cutting efficiency and accuracy.
Description
Technical field
The present invention relates to the technical field of video processing, and in particular to a video segment extraction method, device, equipment and computer-readable storage medium.
Background art
With the development of artificial-intelligence technology, video structuring technology has grown increasingly mature. Video structuring is an application technology that analyzes an original video, extracts its key information, and produces a textual semantic description of it. Its purpose is to help users find the audio-visual content they want quickly and accurately within massive amounts of material, for example by identifying and locating a specific target object within a video. Conventional video structuring requires the short clips containing the target object to be cut out manually, which is time-consuming and labor-intensive and is prone to information omissions, so cutting efficiency is low. How to solve the low cutting efficiency caused by manually cutting such short clips is therefore a problem in urgent need of resolution.
Summary of the invention
The main purpose of the present invention is to provide a video segment extraction method, device, and readable storage medium, aimed at solving the existing technical problem that manually cutting short video clips yields low cutting efficiency.
To achieve the above object, the present invention provides a video segment extraction method comprising the following steps:
obtaining characteristic information of a target object, searching an original video for the target object according to the characteristic information, and obtaining, in order, the target time points at which the original video contains the target object;
calculating the time difference between each pair of adjacent target time points and, according to those differences, determining target start time points and target end time points among the target time points;
extracting, from the original video, the target video segments containing the target object according to the target start time points and target end time points.
Optionally, the step of calculating the time differences between pairs of adjacent target time points and obtaining target start and end time points from them among the target time points comprises:
calculating the time difference between each pair of adjacent target time points, taking the difference between a pair as the temporal feature value of the latter time point of that pair, and setting the temporal feature value of the first target time point to a preset value;
classifying the temporal feature values of the target time points with a preset classification model to obtain at least two feature-value clusters, and determining a breakpoint feature-value cluster among them according to a preset breakpoint determination rule;
determining the target time points corresponding to the breakpoint feature values as target start time points, and determining the target time point immediately preceding each target start time point, together with the last target time point, as target end time points.
Optionally, the step of extracting the target video segments containing the target object from the original video according to the target start and end time points comprises:
sorting the target start time points and target end time points in chronological order, and taking each adjacent pair of a target start time point and a target end time point as the start and end time points of one target video segment;
extracting each target video segment from the original video according to its target start time point and target end time point.
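The sorting-and-pairing step above can be sketched as follows (a simplification assuming, as the clause does, that starts and ends strictly alternate once merged chronologically):

```python
def pair_boundaries(start_points, end_points):
    """Interleave start and end time points chronologically; each adjacent
    pair of boundaries then forms the (start, end) of one target segment."""
    boundaries = sorted(start_points + end_points)
    # even positions are starts, odd positions are the ends paired with them
    return list(zip(boundaries[0::2], boundaries[1::2]))
```

For example, starts [0.0, 50.0] and ends [10.0, 70.0] pair into segments (0.0, 10.0) and (50.0, 70.0).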
Optionally, after the step of sorting the target start and end time points and pairing them into target video segments, the method further comprises:
adding a corresponding segment video identifier to the target start time point and target end time point of each target video segment;
when the original video is played, displaying a prompt for the video content corresponding to the target object according to the segment video identifiers, so that the user can select what to watch.
Optionally, the step of classifying the temporal feature values of the target time points with a preset classification model to obtain at least two feature-value clusters, and determining the breakpoint feature-value cluster among them, comprises:
classifying the temporal feature values of the target time points with a binary classifier to obtain two feature-value clusters;
determining the breakpoint feature-value cluster among the two clusters based on a preset threshold, wherein the temporal feature values in the breakpoint feature-value cluster are not less than the preset threshold.
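The patent does not specify which binary classifier is used; a tiny one-dimensional two-means clustering is one plausible stand-in, shown here purely as a sketch. The cluster of large values plays the role of the breakpoint feature-value cluster (gaps wide enough to separate segments):

```python
def two_means_split(values, iters=20):
    """Cluster 1-D gap values into two groups with a minimal Lloyd's k-means
    (k=2); returns (small_cluster, large_cluster). The large cluster holds
    the breakpoint feature values."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        # assign each value to the nearer of the two current centers
        small = [v for v in values if abs(v - lo) <= abs(v - hi)]
        large = [v for v in values if abs(v - lo) > abs(v - hi)]
        if small:
            lo = sum(small) / len(small)
        if large:
            hi = sum(large) / len(large)
    return small, large
```

On gap values [1, 1, 2, 1, 57, 60, 1], the split isolates [57, 60] as the breakpoint cluster. Note this assumes a large finite preset value for the first time point rather than infinity, since an infinite value would break the mean computation.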
Optionally, the step of classifying the temporal feature values with a preset classification model to obtain at least two feature-value clusters, and determining the breakpoint feature-value cluster among them, may instead comprise:
classifying the temporal feature values of the target time points with a multi-class clustering algorithm to obtain multiple feature-value clusters;
determining the breakpoint feature-value cluster among the multiple feature-value clusters according to a segmentation threshold, wherein the values in the breakpoint feature-value cluster are not less than the segmentation threshold.
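For the multi-class variant, once clusters of gap values exist (from any clustering step), selecting the breakpoint clusters by segmentation threshold might be sketched as follows; comparing each cluster's mean against the threshold is an assumption, since the patent does not say which cluster statistic is used:

```python
def breakpoint_values(clusters, segmentation_threshold):
    """Given gap-value clusters from a multi-class clustering step, keep
    those whose mean gap is not below the segmentation threshold; their
    members are treated as breakpoint feature values."""
    picked = [c for c in clusters if sum(c) / len(c) >= segmentation_threshold]
    return sorted(v for c in picked for v in c)
```

With clusters [[1, 1, 2], [30, 35], [300]] and threshold 10, the breakpoint values are [30, 35, 300].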
Optionally, after the step of extracting the target video segments containing the target object from the original video, the method further comprises:
synthesizing the target video segments into a target video corresponding to the target object, so that the video content corresponding to the target object can be played continuously.
In addition, to achieve the above object, the present invention also provides a video segment extraction device comprising:
a time point acquisition module, configured to obtain characteristic information of a target object, search an original video for the target object according to the characteristic information, and obtain, in order, the target time points at which the original video contains the target object;
a time point confirmation module, configured to calculate the time difference between each pair of adjacent target time points and determine target start time points and target end time points among the target time points according to those differences;
a video segment extraction module, configured to extract the target video segments containing the target object from the original video according to the target start and end time points.
In addition, to achieve the above object, the present invention also provides video segment extraction equipment comprising a processor, a memory, and a video segment extraction program stored in the memory and executable by the processor, wherein the program, when executed by the processor, implements the steps of the video segment extraction method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a video segment extraction program which, when executed by a processor, implements the steps of the video segment extraction method described above.
The present invention provides a video segment extraction method that obtains characteristic information of a target object, searches an original video for the target object according to that information, obtains in order the target time points at which the video contains the object, calculates the time differences between adjacent target time points, determines target start and end time points among them from those differences, and extracts the target video segments containing the object accordingly. In this way, the target start and end time points of each segment in which the target object appears are determined from the time points of its appearances and the gaps between them, the segments can be extracted from those time points, automatic cutting of the video is realized, and cutting efficiency and accuracy are improved, solving the existing technical problem that manually cutting short clips yields low cutting efficiency.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of the extraction equipment involved in the embodiments of the present invention;
Fig. 2 is a flow diagram of the first embodiment of the video segment extraction method of the present invention;
Fig. 3 is a flow diagram of the second embodiment of the video segment extraction method of the present invention;
Fig. 4 is a flow diagram of the third embodiment of the video segment extraction method of the present invention;
Fig. 5 is a functional module diagram of the first embodiment of the video segment extraction device of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings and embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The main idea of the embodiments of the present invention is: obtain characteristic information of a target object; search an original video for the target object according to that information and obtain, in order, the target time points at which the video contains the object; calculate the time difference between each pair of adjacent target time points and determine target start and end time points among the target time points from those differences; and extract the target video segments containing the object from the original video according to those time points, thereby solving the existing technical problem that manually cutting short clips yields low cutting efficiency.
The video segment extraction method of the embodiments is mainly applied in video segment extraction equipment, which can be implemented in many forms, for example a mobile terminal, a computer, or a tablet. The following description takes a mobile terminal as the extraction equipment; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, constructions according to embodiments of the present invention can also be applied to other types of terminal.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the hardware structure of the video segment extraction equipment involved in the embodiments of the present invention.
In the embodiments, the extraction equipment may include a processor 1001 (such as a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 realizes connection and communication between these components; the user interface 1003 may include a display and an input unit such as a keyboard; the network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface); the memory 1005 may be high-speed RAM or stable non-volatile memory such as disk storage, and may optionally be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the hardware structure shown in Fig. 1 does not limit the extraction equipment, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
With continued reference to Fig. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a video segment extraction program.
In Fig. 1, the network communication module is mainly used to connect to a server and communicate data with it; and the processor 1001 can call the video segment extraction program stored in the memory 1005 and perform the following operations:
obtaining characteristic information of a target object, searching an original video for the target object according to the characteristic information, and obtaining, in order, the target time points at which the original video contains the target object;
calculating the time difference between each pair of adjacent target time points and determining target start and end time points among the target time points according to those differences;
extracting the target video segments containing the target object from the original video according to the target start and end time points.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
calculating the time difference between each pair of adjacent target time points, taking the difference between a pair as the temporal feature value of the latter time point of that pair, and setting the temporal feature value of the first target time point to a preset value;
classifying the temporal feature values of the target time points with a preset classification model to obtain at least two feature-value clusters, and determining a breakpoint feature-value cluster among them according to a preset breakpoint determination rule;
determining the target time points corresponding to the breakpoint feature values as target start time points, and determining the target time point immediately preceding each target start time point, together with the last target time point, as target end time points.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
sorting the target start and end time points in chronological order, and taking each adjacent pair of a target start time point and a target end time point as the start and end time points of one target video segment;
extracting each target video segment from the original video according to its start and end time points.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
adding a corresponding segment video identifier to the start and end time points of each target video segment;
when the original video is played, displaying a prompt for the video content corresponding to the target object according to the segment video identifiers, so that the user can select what to watch.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
classifying the temporal feature values of the target time points with a binary classifier to obtain two feature-value clusters;
determining the breakpoint feature-value cluster among the two clusters based on a preset threshold, wherein the temporal feature values in the breakpoint feature-value cluster are not less than the preset threshold.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
classifying the temporal feature values of the target time points with a multi-class clustering algorithm to obtain multiple feature-value clusters;
determining the breakpoint feature-value cluster among the multiple feature-value clusters according to a segmentation threshold, wherein the values in the breakpoint feature-value cluster are not less than the segmentation threshold.
Further, the processor 1001 may also call the video segment extraction program stored in the memory 1005 and perform the following operations:
synthesizing the target video segments into a target video corresponding to the target object, so that the video content corresponding to the target object can be played continuously.
Based on the hardware structure of the extraction equipment described above, the embodiments of the video segment extraction method of the present invention are proposed.
The present invention provides a video segment extraction method.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the video segment extraction method of the present invention.
In this embodiment, the video segment extraction method comprises the following steps:
Step S10: obtain characteristic information of a target object, search an original video for the target object according to the characteristic information, and obtain, in order, the target time points at which the original video contains the target object.
In this embodiment: as living standards improve, people's time becomes increasingly fragmented, and short videos have become an important form of everyday entertainment. To solve the existing technical problem that manually cutting short clips yields low cutting efficiency, a video segment extraction method is provided that extracts the segments automatically. Specifically, the characteristic information corresponding to the target object is determined first. The target object may be a specific person or a specific thing, such as a particular character or a particular brand of vehicle, and the characteristic information includes recognizable information such as an image or the sound of the object. Based on this characteristic information, recognition technology locates the video frames containing the target object in the original video and records the time point of each such frame; these are the target time points. Detection of the target object can be performed at short intervals, i.e., at many densely spaced sample time points, using a recognition algorithm to decide whether the target object is present at each sample point. The output is the series of time points at which the target object is present — a range of time points in the original video at which the object appears — and the target time points are sorted in chronological order. In a specific embodiment, if there are multiple target objects, multiple ranges of time points can be obtained.
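The sampling-based detection described for step S10 can be sketched as follows; `contains_target` is a stub standing in for a real recognition model (e.g., face or object matching against the characteristic information), which the patent leaves unspecified:

```python
def detect_time_points(duration, interval, contains_target):
    """Sample the video every `interval` seconds and keep the timestamps at
    which the detector reports the target object; returns them in order."""
    t, points = 0.0, []
    while t <= duration:
        if contains_target(t):
            points.append(t)
        t += interval
    return points
```

For a 10-second video sampled once per second where a stub detector reports the target between seconds 2 and 4, the output is [2.0, 3.0, 4.0].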
Step S20: calculate the time difference between each pair of adjacent target time points and, according to those differences, determine target start time points and target end time points among the target time points.
In this embodiment, the intervals during which the target object appears in the original video are continuous — they are the video segments — but the target time points obtained above are discrete, so the discrete time points output by video structuring must be converted into continuous time intervals. Specifically, among the chronologically sorted target time points, each pair of adjacent time points is taken in turn as one group, and the earlier time point of the group is subtracted from the later one to obtain the group's time difference. It is worth explaining that the "time difference" used in this embodiment refers to the distance between time points; it is not restricted to the arithmetic difference of two time values. "Distance" is the general notion, and the arithmetic difference is one special case of it (the one-dimensional Euclidean distance). The time difference is used here for ease of description; other measures of the distance between time points can be used in specific embodiments. Proceeding in this way yields the time difference between every pair of adjacent target time points. Each time difference is then checked against a segment-splitting condition; when a time difference satisfies the condition, the corresponding group of target time points is taken as a target end time point and a target start time point, the start time point being the smaller of the two. For example, each time difference may be compared against a preset threshold: for a difference greater than the threshold, the earlier time point of its group is determined to be the end time point of one target video segment, and the later time point is determined to be the start time point of the next target video segment. Proceeding likewise determines the target start and end time points of every target video segment in which the target object appears. It is worth noting that the first target time point is the moment the target object first appears in the original video, so it is always the start time point of the first target video segment; similarly, the last target time point is the moment the object last appears, so it is always the end time point of the last target video segment.
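The threshold-based splitting described for step S20 can be sketched end to end (a minimal illustration, not the patent's code; the function name and `max_gap` parameter are chosen here):

```python
def split_into_segments(points, max_gap):
    """Turn sorted detection timestamps into (start, end) segments: a gap
    larger than `max_gap` between adjacent points closes one segment and
    opens the next. The first point is always a start, the last an end."""
    if not points:
        return []
    segments, start = [], points[0]
    for prev, curr in zip(points, points[1:]):
        if curr - prev > max_gap:
            segments.append((start, prev))
            start = curr
    segments.append((start, points[-1]))
    return segments
```

Detections at [1, 2, 3, 60, 61, 62] with a 5-second threshold yield the two segments (1, 3) and (60, 62).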
Step S30: extract the target video segments containing the target object from the original video according to the target start time points and target end time points.
In this embodiment, the target video segments in which the target object appears are determined in the original video from each group of target start and end time points, and the section of video between each start time point and its end time point is cut out. In a specific embodiment, each target video segment can also be given a corresponding video segment identifier and stored under a corresponding name.
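The patent does not specify how the cuts are performed; one common approach is to drive a tool such as ffmpeg, so purely as an illustration the following builds one ffmpeg argument list per segment (nothing is executed here, and the output filenames are assumptions):

```python
def ffmpeg_cut_commands(source, segments):
    """Build one ffmpeg invocation per (start, end) segment, using the usual
    flags: -ss start, -to end, -c copy to cut without re-encoding."""
    cmds = []
    for i, (start, end) in enumerate(segments, start=1):
        cmds.append([
            "ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", source, "-c", "copy", f"segment_{i:03d}.mp4",
        ])
    return cmds
```

Each list could then be passed to `subprocess.run` to perform the actual cut.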
Further, after step S30, the method may also include:
synthesizing the target video segments into a target video corresponding to the target object, so that the video content corresponding to the target object can be played continuously.
In this embodiment, for ease of viewing, the segments can be synthesized into one complete target video through which all the portions of the original video in which the target object appears are played continuously, so that the user can watch the target object without interruption within this single target video.
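As with extraction, the patent leaves the synthesis mechanism open; one conventional route is ffmpeg's concat demuxer, for which the sketch below merely generates the list-file text (the filenames and the subsequent command `ffmpeg -f concat -safe 0 -i list.txt -c copy out.mp4` are illustrative assumptions):

```python
def ffmpeg_concat_listing(segment_files):
    """Produce the text of an ffmpeg concat-demuxer list file; feeding it to
    ffmpeg joins the cut segments back into one continuous target video."""
    return "".join(f"file '{name}'\n" for name in segment_files)
```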
This embodiment provides a video segment extraction method, device, equipment, and computer-readable storage medium: obtain characteristic information of a target object; search an original video for the target object according to that information and obtain, in order, the target time points at which the video contains it; calculate the time differences between adjacent target time points and determine target start and end time points among them from those differences; and extract the target video segments containing the object from the original video according to those time points. In this way, the target start and end time points of each segment in which the target object appears are determined from the time points of its appearances and the gaps between them, the segments can be extracted accordingly, automatic cutting of the video is realized, cutting efficiency and accuracy are improved, and the existing technical problem of low efficiency in manually cutting short clips is solved.
Referring to Fig. 3, Fig. 3 is a flow diagram of the second embodiment of the video segment extraction method of the present invention.
Based on the embodiment shown in Fig. 2, step S20 specifically comprises:
Step S21: calculate the time difference between each pair of adjacent target time points, determine the time difference between two adjacent target time points as the temporal feature value of the latter of the two adjacent target time points, and set the temporal feature value of the first target time point to a preset value;
In the present embodiment, video structuring analysis is performed on the original video with the target object as the analysis target, yielding n discrete target time points at which the target object appears in the original video. The target time points are recorded as a vector X, i.e. x1, x2, x3, ..., xn-1, xn. The n target time points form m continuous target video segments, where m is unknown. That is, the target start time points of the m target video segments and the target end time points of the m segments must be determined from the n target time points, and these occur in pairs. Since this scheme operates on the output of video structuring, some assumptions about that output are needed. First, because the video structuring process is itself based on a discriminative algorithm and has a certain error rate, it is assumed here that its output is entirely accurate: the target time points at which video structuring reports the target object are genuine, and there are no time points at which the target object appears but goes undetected. It is further assumed that the target time points set by video structuring cover the original video densely enough.
After the target time points are sorted in order, the first target time point is necessarily a target start time point and the last target time point is necessarily a target end time point. Among the n target time points, x1 is set as the target start time point of the first target video segment, and xn as the target end time point of the m-th target video segment. In addition, among the n discrete target time points, the time point immediately preceding the target start time point of the t-th target video segment is the target end time point of the (t-1)-th target video segment. Therefore, it suffices to determine the m target start time points (including x1) to determine all m target video segments. Finding the m target start time points (referred to as breakpoints) among the n time points amounts to judging, for each target time point, whether it is a breakpoint, i.e. performing n judgments over the n target time points. In summary, the breakpoint judgment can be converted into a classification problem, and solving the classification problem amounts to building a suitable classifier. The specific steps of building the classifier are as follows:
First, the input and output objects are determined: the output is the result of whether the input time point is a breakpoint, and the input is the characteristic information of the target time point to be classified. Specifically, the distances between the n points and their adjacent points are calculated in order, giving n-1 feature values recorded as a vector λ, the temporal feature values. Because the first target time point must be a target start time point, a value of 0 (or another parameter value) is prepended to the queue of n-1 feature values, yielding a queue of n feature values, where λi is the feature of target time point xi. These temporal feature values are then input into the classifier.
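The construction of the temporal feature queue just described can be sketched as follows (function and variable names are illustrative):

```python
def temporal_features(target_times, first_value=0.0):
    """Build the feature vector lambda: lambda[i] is the time gap
    between target time point i and its predecessor. The first
    point, which must be a start point, receives a preset value
    (0 here), giving n feature values for n time points."""
    feats = [first_value]
    for prev, curr in zip(target_times, target_times[1:]):
        feats.append(curr - prev)
    return feats

print(temporal_features([1, 2, 3, 10, 11]))
# [0.0, 1, 1, 7, 1]
```

The large value (7) marks the candidate breakpoint: the time point whose gap to its predecessor spans two segments.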
Step S22: classify the temporal feature values of the target time points by a preset classification model to obtain at least two feature value clusters, and determine a breakpoint feature value cluster among the feature value clusters based on a preset breakpoint characterization rule;
This may specifically include:
classifying the temporal feature values of the target time points by a binary classifier to obtain two feature value clusters; and
determining the breakpoint feature value cluster among the two feature value clusters based on a preset threshold, wherein the temporal feature values in the breakpoint feature value cluster are greater than the preset threshold.
In the present embodiment, after the input and output objects are determined, the specific classification model must be determined. The time point data described above carry no information about whether a time point is a breakpoint, i.e. the data have no class labels. Therefore an unsupervised classification model is needed, for example a K-means model; in specific embodiments other unsupervised classifiers may be selected as appropriate. Since the temporal feature value data are one-dimensional, the role of the classifier is in fact to determine a segmentation threshold. Assuming that the target time points set by video structuring are dense enough, the λi values of non-breakpoint data should all be small; conversely, the λi values of breakpoint data should be larger, though moderate and very large values may occur at the same time. The classifier may therefore be a binary classifier or a multi-class classifier. The number of classes actually used may be set according to the actual situation, with multiple classes corresponding to non-breakpoints and multiple classes corresponding to breakpoints. In many embodiments the number of classes is difficult to determine in advance, and it can be selected adaptively by means of a reference index, for example the elbow method. The classifier places the qualifying temporal feature values into a breakpoint feature value queue, i.e. the breakpoint feature cluster.
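As a sketch of the unsupervised split the embodiment describes, the following hand-rolled two-means on the one-dimensional feature values learns a segmentation threshold; in practice a library K-means could be used instead, and all names here are illustrative assumptions:

```python
def two_means_threshold(values, iters=20):
    """One-dimensional 2-means: iteratively refine a 'small' and a
    'large' centroid and return their midpoint as the learned
    segmentation threshold. Feature values above the threshold
    are breakpoint candidates."""
    c_lo, c_hi = min(values), max(values)
    for _ in range(iters):
        mid = (c_lo + c_hi) / 2
        small = [v for v in values if v <= mid]
        large = [v for v in values if v > mid]
        if small:
            c_lo = sum(small) / len(small)
        if large:
            c_hi = sum(large) / len(large)
    return (c_lo + c_hi) / 2

feats = [0.0, 1, 1, 7, 1, 1, 8, 1]          # temporal feature values
t = two_means_threshold(feats)
breakpoints = [i for i, v in enumerate(feats) if v > t]
print(breakpoints)
# [3, 6]
```

Indices 3 and 6 (gaps 7 and 8) fall into the breakpoint cluster; the small gaps form the non-breakpoint cluster, matching the assumption that non-breakpoint λi values are small.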
Step S23: determine the target time point corresponding to each breakpoint feature value in the breakpoint feature value cluster as a target start time point, and determine the target time point immediately preceding each target start time point, together with the last target time point, as target end time points.
In the present embodiment, each breakpoint feature value in the breakpoint feature cluster is obtained, along with the target time point corresponding to it. The target time point corresponding to a breakpoint feature value is the split point between two target video segments: it is the target start time point of one target video segment, while the target time point immediately preceding it is the target end time point of the preceding target video segment.
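The mapping from breakpoints to segment boundaries in step S23 can be sketched as follows (a minimal illustration; names are assumptions):

```python
def boundaries_from_breakpoints(times, breakpoint_idx):
    """Each breakpoint index starts a new segment, and the point
    just before it ends the previous one. The first time point is
    always a start and the last is always an end."""
    starts = [times[0]] + [times[i] for i in breakpoint_idx if i > 0]
    ends = [times[i - 1] for i in breakpoint_idx if i > 0] + [times[-1]]
    return list(zip(starts, ends))

times = [1, 2, 3, 10, 11, 12, 20, 21]
print(boundaries_from_breakpoints(times, [3, 6]))
# [(1, 3), (10, 12), (20, 21)]
```

With breakpoints at indices 3 and 6, the time points immediately before them (3 and 12) become target end time points, and the last point (21) closes the final segment.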
Further, based on the above embodiment, step S30 specifically includes:
sorting the target start time points and target end time points in chronological order, and determining each adjacent pair consisting of a target start time point and a target end time point as the target start time point and target end time point of the same target video segment; and
extracting each target video segment from the original video according to the target start time point and target end time point corresponding to that segment.
In the present embodiment, each target video segment in which the target object appears is determined in the original video according to each pair of target start and end time points, and the segment between each target start time point and its corresponding target end time point is intercepted from the original video.
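The sort-and-pair step above can be sketched as follows, assuming (as the embodiment implies) that start and end points strictly alternate once merged chronologically:

```python
starts = [10, 1, 20]   # target start time points (unsorted)
ends = [12, 3, 21]     # target end time points (unsorted)

points = sorted(starts + ends)
# after sorting, each adjacent (start, end) pair bounds one segment
segments = [(points[i], points[i + 1]) for i in range(0, len(points), 2)]
print(segments)
# [(1, 3), (10, 12), (20, 21)]
```

Each resulting pair gives the cut range of one target video segment within the original video.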
Further, after the step of sorting the target start time points and target end time points in chronological order and determining each adjacent pair consisting of a target start time point and a target end time point as the target start time point and target end time point of the same target video segment, the method further includes:
adding a corresponding segment video mark to the target start time point and target end time point of each target video segment; and
when the original video is played, displaying a video content prompt for the target object according to the segment video marks, so that the user can selectively watch the corresponding content.
In the present embodiment, each target video segment in the original video is given a corresponding content label, so that the user can quickly jump to and play the corresponding content.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of the third embodiment of the video segment extraction method of the present invention.
Based on the embodiment shown in Fig. 2, step S22 specifically includes:
Step S221: classify the temporal feature values of the target time points by a multi-class clustering algorithm to obtain multiple feature value clusters;
In the present embodiment, the classifier may be a binary classifier or a multi-class classifier. The number of classes actually used may be set according to the actual situation, with multiple classes corresponding to non-breakpoints and multiple classes corresponding to breakpoints. In many embodiments the number of classes is difficult to determine in advance, and it can be selected adaptively by means of a reference index such as the elbow method. The temporal feature values of the target time points are classified into multiple feature value clusters, i.e. the clusters are divided according to finer-grained time differences.
Step S222: determine the breakpoint feature value cluster among the multiple feature value clusters according to a segmentation threshold of the multiple feature value clusters, wherein the feature values in the breakpoint feature value cluster are greater than the segmentation threshold.
In the present embodiment, it is assumed that the target time points set by video structuring are dense enough, so the λi values of non-breakpoint data should all be small; conversely, the λi values of breakpoint data should be larger, though moderate and very large values may occur at the same time. The number of classes actually used is often difficult to determine in advance, and it can be selected adaptively by means of a reference index such as the elbow method. A segmentation threshold on the time differences is then determined, and the feature value clusters whose values exceed the segmentation threshold are determined as the breakpoint feature value cluster.
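An elbow-style selection of the number of clusters, as referenced above, can be sketched with a tiny 1-D k-means and its within-cluster distortion; this is a minimal illustration, and the initialization and data are assumptions:

```python
def kmeans_1d(values, k, iters=30):
    """Tiny 1-D k-means; returns centroids and total distortion
    (sum of squared distances to the nearest centroid)."""
    vals = sorted(values)
    # spread initial centroids across the value range
    cents = [vals[int(i * (len(vals) - 1) / max(k - 1, 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vals:
            j = min(range(k), key=lambda c: abs(v - cents[c]))
            groups[j].append(v)
        cents = [sum(g) / len(g) if g else cents[j] for j, g in enumerate(groups)]
    distortion = sum(min(abs(v - c) for c in cents) ** 2 for v in vals)
    return cents, distortion

feats = [0, 1, 1, 1, 7, 8, 1, 1, 20]   # illustrative temporal feature values
for k in (1, 2, 3):
    print(k, round(kmeans_1d(feats, k)[1], 1))
# distortion drops sharply up to the "elbow", then flattens;
# the k at the elbow is taken as the number of clusters
```

Once k is chosen, the clusters whose centroids lie above the segmentation threshold form the breakpoint feature value cluster.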
Referring to Fig. 5, Fig. 5 is a functional block diagram of the first embodiment of the video segment extraction apparatus of the present invention.
An embodiment of the present invention further proposes a video segment extraction apparatus, which includes:
a time point obtaining module 10, configured to obtain characteristic information of a target object, search an original video for the target object according to the characteristic information, and sequentially obtain the target time points in the original video at which the target object appears;
a time point confirmation module 20, configured to calculate the time difference between each pair of adjacent target time points and, according to the time differences between adjacent target time points, determine target start time points and target end time points among the target time points; and
a video segment extraction module 30, configured to extract, according to the target start time points and target end time points, the target video segments containing the target object from the original video.
Further, the video segment extraction apparatus further includes:
a video segment synthesis module, configured to synthesize the target video segments into a target video corresponding to the target object, so that the video corresponding to the target object is played continuously.
Further, the time point confirmation module 20 specifically includes:
a feature value calculation unit, configured to calculate the time difference between each pair of adjacent target time points, determine the time difference between two adjacent target time points as the temporal feature value of the latter of the two adjacent target time points, and set the temporal feature value of the first target time point to a preset value;
a feature value judging unit, configured to classify the temporal feature values of the target time points by a preset classification model to obtain at least two feature value clusters, and determine a breakpoint feature value cluster among the feature value clusters based on a preset breakpoint characterization rule; and
a time point confirmation unit, configured to determine the target time point corresponding to each breakpoint feature value in the breakpoint feature value cluster as a target start time point, and determine the target time point immediately preceding each target start time point, together with the last target time point, as target end time points.
Further, the video segment extraction module 30 is also configured to:
sort the target start time points and target end time points in chronological order, and determine each adjacent pair consisting of a target start time point and a target end time point as the target start time point and target end time point of the same target video segment;
extract each target video segment from the original video according to the target start time point and target end time point corresponding to that segment;
add a corresponding segment video mark to the target start time point and target end time point of each target video segment; and
when the original video is played, display a video content prompt for the target object according to the segment video marks, so that the user can selectively watch the corresponding content.
Further, the feature value judging unit specifically includes:
a multi-class judging subunit, configured to classify the temporal feature values of the target time points by a multi-class clustering algorithm to obtain multiple feature value clusters; and
a multi-class dividing subunit, configured to determine the breakpoint feature value cluster among the multiple feature value clusters according to a segmentation threshold of the multiple feature value clusters, wherein the feature values in the breakpoint feature value cluster are greater than the segmentation threshold.
Further, the present invention also provides a computer-readable storage medium.
A video segment extraction program is stored on the computer-readable storage medium of the present invention; when the extraction program is executed by a processor, the steps of the video segment extraction method described above are implemented.
For the method implemented when the video segment extraction program is executed, reference may be made to the embodiments of the video segment extraction method of the present invention; details are not repeated here.
It should be noted that, in this document, the terms "include" and "comprise", or any other variant thereof, are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A video segment extraction method, characterized in that the video segment extraction method comprises the following steps:
obtaining characteristic information of a target object, searching an original video for the target object according to the characteristic information, and sequentially obtaining target time points in the original video at which the target object appears;
calculating a time difference between each pair of adjacent target time points, and determining, according to the time differences between adjacent target time points, target start time points and target end time points among the target time points; and
extracting, according to the target start time points and target end time points, target video segments containing the target object from the original video.
2. The video segment extraction method of claim 1, characterized in that the step of calculating a time difference between each pair of adjacent target time points and determining, according to the time differences between adjacent target time points, target start time points and target end time points among the target time points comprises:
calculating the time difference between each pair of adjacent target time points, determining the time difference between two adjacent target time points as the temporal feature value of the latter of the two adjacent target time points, and setting the temporal feature value of the first target time point to a preset value;
classifying the temporal feature values of the target time points by a preset classification model to obtain at least two feature value clusters, and determining a breakpoint feature value cluster among the feature value clusters based on a preset breakpoint characterization rule; and
determining the target time point corresponding to each breakpoint feature value in the breakpoint feature value cluster as a target start time point, and determining the target time point immediately preceding each target start time point, together with the last target time point, as target end time points.
3. The video segment extraction method of claim 2, characterized in that the step of extracting, according to the target start time points and target end time points, target video segments containing the target object from the original video comprises:
sorting the target start time points and target end time points in chronological order, and determining each adjacent pair consisting of a target start time point and a target end time point as the target start time point and target end time point of the same target video segment; and
extracting each target video segment from the original video according to the target start time point and target end time point corresponding to that segment.
4. The video segment extraction method of claim 3, characterized in that, after the step of sorting the target start time points and target end time points in chronological order and determining each adjacent pair consisting of a target start time point and a target end time point as the target start time point and target end time point of the same target video segment, the method further comprises:
adding a corresponding segment video mark to the target start time point and target end time point of each target video segment; and
when the original video is played, displaying a video content prompt for the target object according to the segment video marks, so that the user can selectively watch the corresponding content.
5. The video segment extraction method of claim 2, characterized in that the step of classifying the temporal feature values of the target time points by a preset classification model to obtain at least two feature value clusters, and determining a breakpoint feature value cluster among the feature value clusters based on a preset breakpoint characterization rule comprises:
classifying the temporal feature values of the target time points by a binary classifier to obtain two feature value clusters; and
determining the breakpoint feature value cluster among the two feature value clusters based on a preset threshold, wherein the temporal feature values in the breakpoint feature value cluster are greater than the preset threshold.
6. The video segment extraction method of claim 2, characterized in that the step of classifying the temporal feature values of the target time points by a preset classification model to obtain at least two feature value clusters, and determining a breakpoint feature value cluster among the feature value clusters based on a preset breakpoint characterization rule specifically comprises:
classifying the temporal feature values of the target time points by a multi-class clustering algorithm to obtain multiple feature value clusters; and
determining the breakpoint feature value cluster among the multiple feature value clusters according to a segmentation threshold of the multiple feature value clusters, wherein the feature values in the breakpoint feature value cluster are greater than the segmentation threshold.
7. The video segment extraction method of any one of claims 1 to 6, characterized in that, after the step of extracting, according to the target start time points and target end time points, the target video segments containing the target object from the original video, the method further comprises:
synthesizing the target video segments into a target video corresponding to the target object, so that the video corresponding to the target object is played continuously.
8. A video segment extraction apparatus, characterized in that the video segment extraction apparatus comprises:
a time point obtaining module, configured to obtain characteristic information of a target object, search an original video for the target object according to the characteristic information, and sequentially obtain target time points in the original video at which the target object appears;
a time point confirmation module, configured to calculate a time difference between each pair of adjacent target time points and, according to the time differences between adjacent target time points, determine target start time points and target end time points among the target time points; and
a video segment extraction module, configured to extract, according to the target start time points and target end time points, target video segments containing the target object from the original video.
9. A video segment extraction device, characterized in that the video segment extraction device comprises a processor, a memory, and a video segment extraction program stored on the memory and executable by the processor, wherein when the video segment extraction program is executed by the processor, the steps of the video segment extraction method of any one of claims 1 to 7 are implemented.
10. A computer-readable storage medium, characterized in that a video segment extraction program is stored on the computer-readable storage medium, wherein when the video segment extraction program is executed by a processor, the steps of the video segment extraction method of any one of claims 1 to 7 are implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811653235.5A CN109740530B (en) | 2018-12-29 | 2018-12-29 | Video segment extraction method, device, equipment and computer-readable storage medium |
PCT/CN2019/129318 WO2020135756A1 (en) | 2018-12-29 | 2019-12-27 | Video segment extraction method, apparatus and device, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811653235.5A CN109740530B (en) | 2018-12-29 | 2018-12-29 | Video segment extraction method, device, equipment and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109740530A true CN109740530A (en) | 2019-05-10 |
CN109740530B CN109740530B (en) | 2022-05-03 |
Family
ID=66363014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811653235.5A Active CN109740530B (en) | 2018-12-29 | 2018-12-29 | Video segment extraction method, device, equipment and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109740530B (en) |
WO (1) | WO2020135756A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110990387A (en) * | 2019-11-29 | 2020-04-10 | 广东电网有限责任公司 | Method and device for processing power failure record error of electric energy metering equipment |
CN111314665A (en) * | 2020-03-07 | 2020-06-19 | 上海中科教育装备集团有限公司 | Key video segment extraction system and method for video post-scoring |
WO2020135756A1 (en) * | 2018-12-29 | 2020-07-02 | 深圳Tcl新技术有限公司 | Video segment extraction method, apparatus and device, and computer-readable storage medium |
CN113810751A (en) * | 2020-06-12 | 2021-12-17 | 阿里巴巴集团控股有限公司 | Video processing method and device, electronic device and server |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117411987B (en) * | 2023-12-13 | 2024-05-28 | 深圳万物安全科技有限公司 | Drop-out time detection method, drop-out time detection equipment and storage medium for monitoring video |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103455625A (en) * | 2013-09-18 | 2013-12-18 | 武汉烽火众智数字技术有限责任公司 | Quick target rearrangement method for video abstraction |
CN106021496A (en) * | 2016-05-19 | 2016-10-12 | 海信集团有限公司 | Video search method and video search device |
CN107888988A (en) * | 2017-11-17 | 2018-04-06 | 广东小天才科技有限公司 | Video editing method and electronic equipment |
CN108307230A (en) * | 2018-02-07 | 2018-07-20 | 北京奇艺世纪科技有限公司 | A kind of extracting method and device of video highlight segment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104731944A (en) * | 2015-03-31 | 2015-06-24 | 努比亚技术有限公司 | Video searching method and device |
CN106412690B (en) * | 2015-07-23 | 2020-01-10 | 无锡天脉聚源传媒科技有限公司 | Video playing determination method and device |
CN107609149B (en) * | 2017-09-21 | 2020-06-19 | 北京奇艺世纪科技有限公司 | Video positioning method and device |
CN108009516A (en) * | 2017-12-15 | 2018-05-08 | 暴风集团股份有限公司 | Video matching method, device and terminal |
CN109740530B (en) * | 2018-12-29 | 2022-05-03 | 深圳Tcl新技术有限公司 | Video segment extraction method, device, equipment and computer-readable storage medium |
Application events:
- 2018-12-29: CN application CN201811653235.5A filed; granted as CN109740530B (status: Active)
- 2019-12-27: WO application PCT/CN2019/129318 filed (published as WO2020135756A1)
Also Published As
Publication number | Publication date |
---|---|
WO2020135756A1 (en) | 2020-07-02 |
CN109740530B (en) | 2022-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109740530A (en) | Extracting method, device, equipment and the computer readable storage medium of video-frequency band | |
CN111858869B (en) | Data matching method and device, electronic equipment and storage medium | |
CN102930553B (en) | Bad video content recognition method and device | |
US9613296B1 (en) | Selecting a set of exemplar images for use in an automated image object recognition system | |
CN110263215B (en) | Video emotion positioning method and system | |
CN111680543B (en) | Action recognition method and device and electronic equipment | |
CN112511854A (en) | Live video highlight generation method, device, medium and equipment | |
CN111081257A (en) | Voice acquisition method, device, equipment and storage medium | |
CN111429341B (en) | Video processing method, device and computer readable storage medium | |
CN104598127A (en) | Method and device for inserting emoticon in dialogue interface | |
CN111435369B (en) | Music recommendation method, device, terminal and storage medium | |
CN113676671B (en) | Video editing method, device, electronic equipment and storage medium | |
CN111444819B (en) | Cut frame determining method, network training method, device, equipment and storage medium | |
CN111368867A (en) | Archive classification method and system and computer readable storage medium | |
CN113010736B (en) | Video classification method and device, electronic equipment and storage medium | |
CN111385659B (en) | Video recommendation method, device, equipment and storage medium | |
CN114187558A (en) | Video scene recognition method and device, computer equipment and storage medium | |
CN108614810A (en) | Complaint hotspot automatic identification method and device |
CN111079757A (en) | Clothing attribute identification method and device and electronic equipment | |
CN113033282B (en) | Image recognition method, device and medium based on small object detection | |
CN114627556A (en) | Motion detection method, motion detection device, electronic apparatus, and storage medium | |
CN112101387A (en) | Salient element identification method and device | |
US8284319B2 (en) | Device and method for processing images to determine a signature of a film | |
CN110490226A (en) | Recognition method and device |
CN106933847B (en) | Method and device for establishing data classification model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||