CN107493520A - A kind of video abstraction generating method and device - Google Patents
A kind of video abstraction generating method and device
- Publication number
- CN107493520A CN107493520A CN201610425059.4A CN201610425059A CN107493520A CN 107493520 A CN107493520 A CN 107493520A CN 201610425059 A CN201610425059 A CN 201610425059A CN 107493520 A CN107493520 A CN 107493520A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
Abstract
The embodiments of the invention disclose a video summary generation method and device. The method includes: establishing a summary handle that preserves the overall state of the video summary, and using a frame-decimation method to obtain the output result, target region feature information, and region feature information count produced by video recognition analysis of the decimated frames; judging whether the output result is true; when the output result is true, filtering the target region feature information according to a target region feature noise threshold, updating the output result, and filtering the output result according to a robustness threshold for whether each target appears; judging whether the historical result in the summary handle and the output result are true; updating the target region feature information, the region feature information count, and the summary handle according to the judgment result, and generating a function return value; and, when the function return value is true, encoding the extracted frame into the video summary.
Description
Technical field
The present invention relates to the field of video recognition technology, and in particular to a video summary generation method and device.
Background technology
As people's demand for security grows, video surveillance, as an effective solution in the security field, is being applied ever more widely and its requirements keep rising. However, video surveillance generates massive amounts of information: the volume of stored data keeps growing, redundant video information accumulates, and retrieving the useful information in a video takes longer and longer.
Video recognition technology can perform structured analysis of objects of interest, but the results of that structured analysis have not been well mined and exploited. Video summarization has emerged as a new solution for reducing video information redundancy, quickly obtaining valuable information, and reducing storage space. The purpose of a video summary is to reduce the bitstream and storage space, and to give the user as much information of interest as possible in the shortest possible time.
A video summary analyzes the structure and content of a video, extracts the significant parts from the original video, and combines them in some way into compact content that fully expresses the video's semantics; it is a concise synopsis of the video, and viewers can accurately grasp the video's content by watching the summary. Video summaries can be divided into two classes, static and dynamic. A static video summary is usually obtained by extracting a key-frame sequence and composing the summary video from multiple key frames. A dynamic video summary, also called video skimming, composes a video from motion information extracted from the original video, which, compared with a key-frame sequence, increases the video's expressiveness at the semantic level.
In the course of realizing the present invention, the inventors found at least the following technical problems in the prior art: a static video summary obtained by extracting a key-frame sequence cannot cover as many shots as possible to obtain the most complete summary, so a large amount of information is lost and the user's need for comprehensive information cannot be fully met; a dynamic video summary is essentially a video composed from a single kind of video analysis information, and cannot satisfy a user who wants to obtain multiple aspects of interest at the same time. Existing video summary generation methods cannot generate a complete, accurate, and satisfactory video summary according to diverse user requirements.
The content of the invention
The video summary generation method and device provided by the present invention can generate a complete, accurate, and satisfactory video summary according to diverse user requirements.
In one aspect, the present invention provides a video summary generation method, including:
establishing a summary handle that preserves the overall video summary state, and obtaining, using a frame-decimation method, the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames;
judging whether the output result is true;
when the output result is true, filtering the target region feature information according to a target region feature noise threshold, updating the output result, and filtering the output result according to a robustness threshold for whether each target appears;
judging whether the historical result in the summary handle and the output result are true;
updating the target region feature information, the region feature information count, and the summary handle according to the judgment result, and generating a function return value;
when the function return value is true, encoding the extracted frame into the video summary.
In another aspect, the present invention provides a video summary generating device, including:
an acquiring unit, configured to establish a summary handle that preserves the overall video summary state, and to obtain, using the frame-decimation method, the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames;
a first judging unit, configured to judge whether the output result is true;
a first filter unit, configured to, when the output result is true, filter the target region feature information according to the target region feature noise threshold and update the output result;
a second filter unit, configured to filter the output result according to the robustness threshold for whether each target appears;
a second judging unit, configured to judge whether the historical result in the summary handle and the output result are true;
a generation unit, configured to update the target region feature information, the region feature information count, and the summary handle according to the judgment result, and to generate a function return value;
an encoding unit, configured to, when the function return value is true, encode the extracted frame into the video summary.
With the video summary generation method and device provided by the present invention, a summary handle preserving the overall video summary state is established; the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames are obtained using the frame-decimation method; when the output result is judged to be true, the target region feature information is filtered according to the target region feature noise threshold, the output result is updated, and the output result is filtered according to the robustness threshold for whether each target appears; according to the judgment of whether the historical result in the summary handle and the output result are true, the target region feature information, the region feature information count, and the summary handle are updated and a function return value is generated; and when the function return value is true, the extracted frame is encoded into the video summary. Compared with the prior art, the present invention can use frame decimation to run the video analyses the user is interested in on the decimated frames, obtain the region feature results of those analyses, and, taking the trace as the vantage point according to spatio-temporal correlation, extract from the obtained region feature results the video frames that satisfy a trace and encode them, thereby generating a complete, accurate, and satisfactory video summary according to diverse user requirements.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the video summary generation method provided by one embodiment of the present invention;
Fig. 2 is a flowchart of the video summary generation method provided by another embodiment of the present invention;
Fig. 3 is a flowchart of the fitting scan provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the video summary generating device provided by one embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the generation unit 46 in Fig. 4;
Fig. 6 is a schematic structural diagram of the fourth generation module 464 in Fig. 5.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
One embodiment of the present invention provides a video summary generation method, as shown in Fig. 1, including:
S11, establishing a summary handle that preserves the overall video summary state, and obtaining the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames by uniformly decimating frames at a preset fixed interval;
Wherein the video summary state includes:
maxRectCnt: the maximum number of region features, obtained from analyzing one frame, that can be handled;
boolValThresd: the robustness threshold for filtering the region feature detection results needed from one frame's analysis;
rectAreaThresd: the area noise threshold for region features analyzed from one frame;
posThresd: the position threshold between region features of two adjacent decimated frames;
sizeThresd: the size threshold between region features of two adjacent decimated frames;
traceTriggerThresd: the trace trigger threshold.
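Collected into one place, the state fields above might look like the following C sketch. The struct name and field types are assumptions for illustration; the patent only names the fields, and the preferred values in the comments are taken from the later steps of this embodiment.

```c
#include <stdint.h>

/* Illustrative layout of the video-summary state kept in the summary
 * handle; field names follow the patent, types are assumed. */
typedef struct {
    int32_t maxRectCnt;         /* max region features handled per analyzed frame */
    int32_t boolValThresd;      /* appearance robustness threshold (preferably 2) */
    int32_t rectAreaThresd;     /* area noise threshold (preferably 256 or 512)   */
    int32_t posThresd;          /* position threshold between adjacent frames     */
    int32_t sizeThresd;         /* size threshold between adjacent frames         */
    int32_t traceTriggerThresd; /* trace trigger threshold (preferably 2)         */
} SummaryState;
```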
S12, judging whether the output result is true;
Wherein the output result of the decimated-frame video recognition analysis is a logical value indicating whether target region feature information is present.
S13, when the output result is true, filtering the target region feature information according to the target region feature area noise threshold, updating the output result, and filtering the output result according to the robustness threshold;
Specifically, region feature information whose area is below the area noise threshold is removed and region feature information whose area is greater than or equal to the area noise threshold is kept, and the output result is updated according to whether any region feature information remains.
S14, judging whether the historical result in the summary handle and the output result are true;
Wherein, compared with the output result in step S12, the output result here is the logical value, after removing region feature information below the area noise threshold and keeping region feature information greater than or equal to it, of whether target region feature information is still present.
S15, updating the target region feature information, the region feature information count, and the summary handle according to the judgment result, analyzing whether a trace exists, and generating a function return value;
Specifically, the generated function return value indicates whether a trace exists.
S16, when the function return value is true, encoding the extracted frame into the video summary.
With the video summary generation method provided by this embodiment of the present invention, a summary handle preserving the overall video summary state is established; the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames are obtained using the frame-decimation method; when the output result is judged to be true, the target region feature information is filtered according to the target region feature noise threshold, the output result is updated, and the output result is filtered according to the robustness threshold for whether each target appears; according to the judgment of whether the historical result in the summary handle and the output result are true, the target region feature information, the region feature information count, and the summary handle are updated and a function return value is generated; and when the function return value is true, the extracted frame is encoded into the video summary. Compared with the prior art, the present invention can use the frame-decimation method to capture, for each decimated frame, the region feature results of the video recognition analyses the user is interested in, and, taking the trace as the vantage point according to the spatio-temporal correlation of targets, extract and encode from those region feature results the video information that forms a trace, thereby generating a complete, accurate, and satisfactory video summary according to diverse user requirements.
Another embodiment of the present invention provides a video summary generation method, as shown in Fig. 2, including:
S21, using the frame-decimation method, capturing the output result result of one or more video recognition analyses of the decimated frame, the target region feature information pARect and region feature information count cnt of current interest, and the summary handle h.
Wherein the video recognition analysis operations include, but are not limited to, motion detection, face detection, face recognition, human-shape detection, vehicle detection, and so on.
Specifically, the frame-decimation strategy is as follows:
(1) Initialize the decimation bitmap bitmap:
First, set the parameter step to an initial value of inFrmRate*1000/outFrmRate, of type int; set the initial value of bitmap to 0, of type uint128_t; set the initial values of i, p, and n to 0, of type int.
Then perform the following loop for each value of i:
shift the binary number bitmap left by one bit;
judge whether i equals n; if i equals n, add 1 to bitmap, add 1 to p, and set n=(step*p+500)/1000;
then increment i and repeat the loop, terminating when i reaches inFrmRate.
(2) Whether the current frame is extracted is determined by usage: if usage is true the frame is extracted; if false it is skipped.
usage is computed as:
bitmap = (((bitmap<<1) | (bitmap>>(inFrmRate-1))) & (~((uint128_t)1<<inFrmRate)));
usage = bitmap & 0x1;
where inFrmRate is the input frame rate and outFrmRate is the frame rate after decimation, i.e. the output frame rate.
Wherein, for inFrmRate values of 8, 10, 15, 20, 25, 50, 75, and 100, extraction works well when the corresponding outFrmRate is between 1/16 and 1/4 of inFrmRate.
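The decimation strategy of step S21 can be sketched in C as follows. This is a hedged reading of the patent text: it assumes the initialization loop runs i from 0 to inFrmRate-1 with the i==n test taken before the increment, and it uses a 64-bit word instead of the patent's uint128_t, which is sufficient for input frame rates up to 63.

```c
#include <stdint.h>

/* Step (1): build the decimation bitmap, marking roughly outFrmRate
 * evenly spaced slots out of every inFrmRate input frames. */
uint64_t init_bitmap(int inFrmRate, int outFrmRate) {
    int step = inFrmRate * 1000 / outFrmRate;
    uint64_t bitmap = 0;
    int p = 0, n = 0;
    for (int i = 0; i < inFrmRate; i++) {
        bitmap <<= 1;                     /* shift left one bit per slot */
        if (i == n) {                     /* mark this slot for extraction */
            bitmap += 1;
            p += 1;
            n = (step * p + 500) / 1000;  /* next marked slot, rounded */
        }
    }
    return bitmap;
}

/* Step (2): rotate the bitmap left by one within inFrmRate bits and
 * read bit 0 as the usage flag for the current frame. */
int frame_usage(uint64_t *bitmap, int inFrmRate) {
    *bitmap = ((*bitmap << 1) | (*bitmap >> (inFrmRate - 1)))
              & ~((uint64_t)1 << inFrmRate);
    return (int)(*bitmap & 0x1);
}
```

With inFrmRate = 25 and outFrmRate = 5, the bitmap marks one slot in every five, so exactly 5 of every 25 input frames are extracted, evenly spaced.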
S22, scanning each result and determining whether it is true; if true, performing step S23, otherwise performing step S24.
S23, filtering pARect by the area noise threshold rectAreaThresd of each target region feature; if pARect is all noise, result is set to false; otherwise result remains true, the noise region feature information is filtered out, and pARect and cnt are updated.
Specifically, region feature information whose area is below the area noise threshold is removed and region feature information whose area is greater than or equal to the area noise threshold is kept, and the target region feature information and region feature information count are updated according to whether any region feature information remains.
Wherein the area noise threshold rectAreaThresd of the target region features is preferably 256 or 512.
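The noise filter of S23 can be sketched as a simple in-place compaction. The Rect type and the filter_noise name are illustrative stand-ins, not taken from the patent:

```c
/* Hedged sketch of the S23 noise filter over the target region
 * feature array pARect, assuming a simple rectangle type. */
typedef struct { int x, y, w, h; } Rect;

/* Compacts pARect in place, keeping only rectangles whose area is at
 * least rectAreaThresd; returns the updated count (the new cnt). */
int filter_noise(Rect *pARect, int cnt, int rectAreaThresd) {
    int kept = 0;
    for (int i = 0; i < cnt; i++) {
        if (pARect[i].w * pARect[i].h >= rectAreaThresd)
            pARect[kept++] = pARect[i];
    }
    return kept;  /* result stays true only if kept > 0 */
}
```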
S24, filtering result according to the robustness threshold boolValThresd for whether each target appears.
Specifically, filtering the appearance result of the video analysis output through the robustness threshold increases the robustness of a target's appearance in the output and reduces the robustness of its disappearance, so that targets of interest to the user are extracted in time while jitter noise in the appearance of the video analysis output is reduced. Wherein the robustness threshold boolValThresd is preferably 2.
S25, judging the historical result rsltLast in the summary handle h together with result: if rsltLast is false and result is true, performing step S261; if rsltLast is true and result is false, performing step S262; if rsltLast and result are both false, performing step S263; if rsltLast and result are both true, performing step S264.
Specifically, according to the combination of whether the target appeared historically and whether it appears currently, one of four processing steps is taken: fill, remove, ignore, or fitting-filter.
S261, the target goes from not appearing (static) to appearing (dynamic): the region feature information and region feature information count of each appearing target are saved, and the historical appearance result is updated to "appeared".
Specifically, pARect and cnt are filled into mrect and mrectCnt in the summary handle h, rsltLast in the summary handle is set to true, and the function returns false.
S262, the target goes from appearing (dynamic) to not appearing (static): the saved region feature information of appearing targets is cleared, the region feature information count is zeroed, and the historical appearance result is updated to "not appeared".
Specifically, mrectCnt in the summary handle h is set to 0, rsltLast is set to false, and the function returns false.
S263, the target goes from not appearing (static) to not appearing (static): no action is taken, and the function returns false.
S264, the target goes from appearing (dynamic) to appearing (dynamic): a fitting scan is performed between the targets, and whether there is a trace is determined by whether they fit and whether the fit count triggers the trace threshold.
Specifically, pARect is fit-scanned against the historical region feature information mrect in the summary handle h; rsltLast remains true; if the fit result is true the function returns true, otherwise it returns false.
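The four-way dispatch of S25 through S264 can be sketched as one function. The Handle type is a stand-in for the summary handle h, and fit_ok stands in for the result of the S264 fitting scan; both are illustrative assumptions:

```c
#include <stdbool.h>

/* Hedged sketch of the S25-S264 state machine. */
typedef struct {
    bool rsltLast;   /* historical appearance result */
    int  mrectCnt;   /* saved region feature count   */
} Handle;

/* Returns the function return value of S25-S264; fit_ok stands in for
 * the fitting-scan result of S264. */
bool update_handle(Handle *h, bool result, int cnt, bool fit_ok) {
    if (!h->rsltLast && result) {        /* S261: static -> dynamic */
        h->mrectCnt = cnt;               /* fill pARect/cnt into handle */
        h->rsltLast = true;
        return false;
    } else if (h->rsltLast && !result) { /* S262: dynamic -> static */
        h->mrectCnt = 0;                 /* clear saved features */
        h->rsltLast = false;
        return false;
    } else if (!h->rsltLast) {           /* S263: static -> static */
        return false;                    /* no action */
    } else {                             /* S264: dynamic -> dynamic */
        return fit_ok;                   /* trace iff the fit scan succeeds */
    }
}
```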
S27, judging whether the function return value is true; if true, performing step S28, otherwise performing step S29.
S28, if the previous mode bDynamicMode, i.e. whether the previous frame belonged to a trace, is false, it is set to true and an IDR or I frame is forced; otherwise encoding proceeds normally. Before encoding, the corresponding timestamp information is superimposed on the encoded video frame so that the time at which the event of interest occurred can be obtained.
Wherein both IDR and I frames use intra-frame coding; each IDR frame re-establishes the coding state so that all previously encoded information can be discarded, whereas an I frame does not.
Specifically, if there was no trace historically but there is a trace now, an IDR or I frame is forced; if there was a trace historically and there is a trace now, encoding follows the encoder's default settings.
S29, setting the previous mode bDynamicMode to false and skipping encoding.
Specifically, if there is no trace historically and no trace currently, no encoding is performed.
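The S28/S29 encoding decision reduces to a small state machine on bDynamicMode. The EncodeAction enum and function name are illustrative; the patent does not name an encoder API:

```c
#include <stdbool.h>

/* Hedged sketch of the S28/S29 encoding decision. */
typedef enum { SKIP, FORCE_IDR_OR_I, NORMAL } EncodeAction;

/* bDynamicMode tracks whether the previous frame belonged to a trace;
 * is_trace is the function return value for the current frame. */
EncodeAction encode_decision(bool *bDynamicMode, bool is_trace) {
    if (is_trace) {
        if (!*bDynamicMode) {      /* S28: trace just started */
            *bDynamicMode = true;
            return FORCE_IDR_OR_I; /* force an IDR or I frame */
        }
        return NORMAL;             /* ongoing trace: encoder defaults */
    }
    *bDynamicMode = false;         /* S29: no trace, skip encoding */
    return SKIP;
}
```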
As shown in Fig. 3, the fitting scan in step S264 proceeds as follows:
S31, scanning the current target region feature information pRect and the historical target region feature information mRect; the inputs to each scan are: the not-yet-fitted historical matrix feature information oldRect, the not-yet-fitted current matrix feature information curRect, the position threshold posThresd, and the size threshold sizeThresd.
S32, calculating the displacement between the center positions of the current target region feature information curRect and the historical target region feature information oldRect: when the center displacement is below the position jitter threshold, the result is judged as shake; otherwise, when the center displacement is below the position change threshold, the result is judged as change; otherwise it is judged as invalid.
Wherein the position jitter threshold is preferably 8 and the position change threshold is preferably 64.
S33, calculating the difference between the areas of the current target region feature information curRect and the historical target region feature information oldRect: when the area difference is below the area jitter threshold, the result is judged as shake; otherwise, when the area difference is below the area change threshold, the result is judged as change; otherwise it is judged as invalid.
Wherein the area jitter threshold is preferably 1024 and the area change threshold is preferably 4096.
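Steps S32 and S33 apply the same two-threshold classification to different scalar quantities, so one helper covers both. The enum names and the scalar-difference formulation are illustrative assumptions:

```c
#include <stdlib.h>

/* Hedged sketch of the S32/S33 shake/change/invalid classification. */
typedef enum { MATCH_SHAKE, MATCH_CHANGE, MATCH_INV } MatchVal;

/* Classify a scalar difference (center displacement in S32, area
 * difference in S33) against a jitter threshold and a change threshold. */
MatchVal classify(int diff, int jitterThresd, int changeThresd) {
    if (abs(diff) < jitterThresd) return MATCH_SHAKE;  /* below jitter  */
    if (abs(diff) < changeThresd) return MATCH_CHANGE; /* below change  */
    return MATCH_INV;                                  /* out of range  */
}
```

With the preferred values, S32 becomes classify(disp, 8, 64) and S33 becomes classify(areaDiff, 1024, 4096).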
S34, performing a fitting scan of the target region on the historical region feature information and the current region feature information from the two angles of position and size, obtaining one of the three results shake, change, or invalid for each, and determining whether there is a trace from the combination of the position result and the size result.
Specifically, the position fitting decision value and the size fitting decision value are combined through the fitting macros to judge whether the two matrices fit: if ALGO_MATCH_CHANGE(p, s) holds, i.e. the position changed and the size is not invalid, or the size changed and the position is not invalid, the fitting result is change and 1 is returned; otherwise, if ALGO_MATCH_SHAKE(p, s) holds, i.e. both position and size shake, the fitting result is shake and 0 is returned; otherwise, if ALGO_MATCH_INV(p, s) holds, i.e. the position or the size is invalid, the fitting result is invalid and -1 is returned.
Wherein the fitting macro combinations are:
#define ALGO_MATCH_SHAKE_VAL (1<<0)
#define ALGO_MATCH_CHANGE_VAL (1<<1)
#define ALGO_MATCH_INV_VAL (1<<2)
#define ALGO_MATCH_SHAKE(p, s) (((p)==ALGO_MATCH_SHAKE_VAL) && ((s)==ALGO_MATCH_SHAKE_VAL))
#define ALGO_MATCH_CHANGE(p, s) ((((p)==ALGO_MATCH_CHANGE_VAL) && ((s)!=ALGO_MATCH_INV_VAL)) || (((p)!=ALGO_MATCH_INV_VAL) && ((s)==ALGO_MATCH_CHANGE_VAL)))
#define ALGO_MATCH_INV(p, s) (((p)==ALGO_MATCH_INV_VAL) || ((s)==ALGO_MATCH_INV_VAL))
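The three-way test of S34 can be exercised with a small self-contained function. The macros are restated here as reconstructed from the patent text; the '!=' tests in ALGO_MATCH_CHANGE follow the prose ("size change and position not invalid"), and the fit_result name is illustrative:

```c
#define ALGO_MATCH_SHAKE_VAL  (1<<0)
#define ALGO_MATCH_CHANGE_VAL (1<<1)
#define ALGO_MATCH_INV_VAL    (1<<2)
#define ALGO_MATCH_SHAKE(p, s)  (((p)==ALGO_MATCH_SHAKE_VAL) && ((s)==ALGO_MATCH_SHAKE_VAL))
#define ALGO_MATCH_CHANGE(p, s) ((((p)==ALGO_MATCH_CHANGE_VAL) && ((s)!=ALGO_MATCH_INV_VAL)) || \
                                 (((p)!=ALGO_MATCH_INV_VAL) && ((s)==ALGO_MATCH_CHANGE_VAL)))
#define ALGO_MATCH_INV(p, s)    (((p)==ALGO_MATCH_INV_VAL) || ((s)==ALGO_MATCH_INV_VAL))

/* Returns 1 for change, 0 for shake, -1 for invalid, matching the
 * return values named in S34. */
int fit_result(int p, int s) {
    if (ALGO_MATCH_CHANGE(p, s)) return 1;
    if (ALGO_MATCH_SHAKE(p, s))  return 0;
    return -1;  /* ALGO_MATCH_INV(p, s) */
}
```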
S35, judging whether the fitting result is change, shake, or invalid; if change, performing step S361; if shake, performing step S362; if invalid, performing step S363.
S361, for this feature matrix information, updating its oldRect to curRect and adding 1 to its fit count; if the fit count exceeds the trace trigger threshold traceTriggerThresd, a trace is determined.
That is, if the position changed and the size is not invalid, or the size changed and the position is not invalid, the result is change: this target region feature information is updated and its fit quantization value is incremented, and when the quantization value exceeds the trace trigger threshold, a trace is determined.
Wherein the trace trigger threshold traceTriggerThresd is preferably 2.
S362, for this feature matrix information, updating its oldRect to curRect and judging it as not a trace.
That is, if both position and size shake, the result is shake: this target region feature information is updated but its fit quantization state is maintained, and it is judged as not a trace.
S363, when in the end curRect or oldRect has no valid match, curRect is updated into mrect, the oldRect in mrect is replaced, and its corresponding fit count is cleared; it is judged as not a trace.
That is, if the position is invalid or the size is invalid, this target region's feature information is cleared and its fit quantization value is zeroed; it is judged as not a trace.
S37, judging whether there are historical or current feature information regions not yet scanned; if so, returning to step S31, otherwise performing step S38.
S38, if there is a trace, returning true; if there is no trace, returning false; terminating the scan.
The following table compares the effect before and after video compression when the video summary generation method provided by the embodiments of the present invention is applied to a daytime office scene video:
Project | Before compression | After compression | Compression ratio |
File size | 1986MB | 41MB | 45.14 times |
Time | 1986s | 51.7s | 38.4 times |
As can be seen from the table, using the video summary generation method provided by the embodiments of the present invention, a 33.06 min video of the daytime office scene is reduced to 51.7 s, greatly saving the time spent searching for useful video and achieving fast browsing; and the 1986 MB stream is compressed to 41 MB, which greatly reduces the bitstream and the storage overhead.
An embodiment of the present invention also provides a video summary generating device, as shown in Fig. 4, including:
an acquiring unit 41, configured to establish a summary handle preserving the overall video summary state, and to obtain, using the frame-decimation method, the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames;
a first judging unit 42, configured to judge whether the output result is true;
a first filter unit 43, configured to, when the output result is true, filter the target region feature information according to the target region feature noise threshold and update the output result;
a second filter unit 44, configured to filter the output result according to the robustness threshold for whether each target appears;
a second judging unit 45, configured to judge whether the historical result in the summary handle and the output result are true;
a generation unit 46, configured to update the target region feature information, the region feature information count, and the summary handle according to the judgment result, and to generate a function return value;
an encoding unit 47, configured to, when the function return value is true, encode the extracted frame into the video summary.
With the video summary generating device provided by the present invention, a summary handle preserving the overall video summary state is established; the output result, target region feature information, and region feature information count of the video recognition analysis of the decimated frames are obtained using the frame-decimation method; when the output result is judged to be true, the target region feature information is filtered according to the target region feature noise threshold, the output result is updated, and the output result is filtered according to the robustness threshold for whether each target appears; according to the judgment of whether the historical result in the summary handle and the output result are true, the target region feature information, the region feature information count, and the summary handle are updated and a function return value is generated; and when the function return value is true, the extracted frame is encoded into the video summary. Compared with the prior art, the present invention can use the frame-decimation method to capture, for each decimated frame, the region feature results of the video recognition analyses the user is interested in, and encode the video information extracted from those region feature results, taking the trace as the vantage point according to the spatio-temporal correlation of targets, thereby generating a complete, accurate, and satisfactory video summary according to diverse user requirements.
Alternatively, the first filter unit 43 is configured to, when the target region feature information is all noise, set the output result to false, and otherwise keep the output result true and filter out the noise region feature information.
Optionally, as shown in Fig. 5, the generation unit 46 includes:
a first generation module 461, configured to, when the historical result and the output result are respectively false and true, fill the target area feature information and the area feature information quantity into the summary handle, set the historical result in the summary handle to true, and generate a function return value of false;
a second generation module 462, configured to, when the historical result and the output result are respectively true and false, set the historical area feature information quantity in the summary handle to 0, set the historical result to false, and generate a function return value of false;
a third generation module 463, configured to, when the historical result and the output result are both false, generate a function return value of false;
a fourth generation module 464, configured to, when the historical result and the output result are both true, perform a fitting scan between the target area feature information and the historical area feature information in the summary handle, the historical result remaining true; if the fitting scan result is true, the generated function return value is true, otherwise the generated function return value is false.
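The four generation modules amount to a truth table over the pair (historical result, output result). A minimal Python sketch, assuming a dict-based stand-in for the summary handle and a `fit_scan` callable supplied by the caller:

```python
def update_summary(handle, output_result, regions, count, fit_scan):
    """Update the summary handle per the (history, output) truth table.

    handle: dict with keys 'history' (bool), 'regions', 'count' --
    a simplified stand-in for the patent's summary handle.
    Returns the function return value described in the text.
    """
    history = handle["history"]
    if not history and output_result:      # false/true: start a segment
        handle["regions"] = regions
        handle["count"] = count
        handle["history"] = True
        return False
    if history and not output_result:      # true/false: close the segment
        handle["count"] = 0
        handle["history"] = False
        return False
    if not history and not output_result:  # false/false: nothing to do
        return False
    # true/true: fitting scan of current vs. historical region features
    return bool(fit_scan(regions, handle["regions"]))

handle = {"history": False, "regions": [], "count": 0}
rv = update_summary(handle, True, ["r1"], 1, fit_scan=lambda a, b: True)
# rv is False and handle["history"] is now True
```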
Optionally, as shown in Fig. 6, the fourth generation module 464 includes:
a scanning submodule 4641, configured to scan the current target area feature information and the historical target area feature information, the inputs to each scan being: the not-yet-fitted historical matrix feature information, the not-yet-fitted current matrix feature information, a position threshold, and a size threshold;
a first decision sub-module 4642, configured to judge the target area from the position perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
a second decision sub-module 4643, configured to perform a fitting judgment on the target area from the size perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
a combined judgment submodule 4644, configured to determine whether a trajectory exists by combining the position judgment result and the size judgment result;
a fitting scan result generation submodule 4645, configured to, when there is no unscanned historical or current feature information region left, set the scan result to true if the judgment result is that a trajectory exists, and to false if the judgment result is that no trajectory exists.
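Taken together, these submodules describe a scan that pairs current regions with historical regions and reports a trajectory when neither the position nor the size judgment is invalid. An illustrative sketch under an assumed region representation (dicts with `center` and `area`); the threshold values are arbitrary placeholders:

```python
def fit_scan(current, history,
             pos_thresholds=(2.0, 8.0), area_thresholds=(20.0, 60.0)):
    """Return True when some (current, historical) region pair forms a trajectory.

    Regions are dicts with 'center' (x, y) and 'area' keys -- an assumed
    representation. A pair counts as a trajectory when both the position
    and the size judgment are jitter or change (i.e. neither is invalid).
    """
    def judge(delta, jitter, change):
        # two-threshold classification: jitter < change < invalid
        return "jitter" if delta < jitter else "change" if delta < change else "invalid"

    for cur in current:
        for hist in history:
            dx = cur["center"][0] - hist["center"][0]
            dy = cur["center"][1] - hist["center"][1]
            pos = judge((dx * dx + dy * dy) ** 0.5, *pos_thresholds)
            size = judge(abs(cur["area"] - hist["area"]), *area_thresholds)
            if pos != "invalid" and size != "invalid":
                return True   # combined judgment: a trajectory exists
    return False              # every pair scanned, no trajectory found
```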
Optionally, the first decision sub-module 4642 is configured to calculate the center-position displacement between the current target area feature information and the historical target area feature information: when the center-position displacement is less than the position jitter threshold, the judgment is jitter; otherwise, when the center-position displacement is less than the position change threshold, the judgment is change; otherwise, the judgment is invalid.
Optionally, the second decision sub-module 4643 is configured to calculate the size difference between the current target area feature information and the historical target area feature information: when the size difference is less than the area jitter threshold, the judgment is jitter; otherwise, when the size difference is less than the area change threshold, the judgment is change; otherwise, the judgment is invalid.
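Both decision sub-modules apply the same two-threshold test (below the jitter threshold: jitter; below the change threshold: change; otherwise invalid). An illustrative Python sketch with arbitrarily chosen threshold values:

```python
JITTER, CHANGE, INVALID = "jitter", "change", "invalid"

def two_threshold_judge(delta, jitter_threshold, change_threshold):
    """Classify a displacement or size difference as jitter, change or invalid."""
    if delta < jitter_threshold:
        return JITTER
    if delta < change_threshold:
        return CHANGE
    return INVALID

def judge_position(cur_center, hist_center, pos_jitter, pos_change):
    # center-position displacement between current and historical regions
    dx = cur_center[0] - hist_center[0]
    dy = cur_center[1] - hist_center[1]
    displacement = (dx * dx + dy * dy) ** 0.5
    return two_threshold_judge(displacement, pos_jitter, pos_change)

def judge_size(cur_area, hist_area, area_jitter, area_change):
    # size difference between current and historical regions
    return two_threshold_judge(abs(cur_area - hist_area), area_jitter, area_change)

# jitter and change both suggest the same target; invalid breaks the trajectory
assert judge_position((10, 10), (11, 11), pos_jitter=2, pos_change=8) == JITTER
assert judge_size(100, 140, area_jitter=20, area_change=60) == CHANGE
```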
Those of ordinary skill in the art will appreciate that all or part of the flows in the above embodiment methods can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (12)
- 1. A video summary generation method, characterized by comprising:
establishing a summary handle that preserves the overall state of the video summary, and using a frame-sampling method to obtain the output result, the target area feature information and the area feature information quantity of the recognition analysis of the sampled video frames;
judging whether the output result is true;
when the output result is true, filtering the target area feature information according to a target area feature information noise threshold, updating the output result, and filtering the output result according to a robustness threshold for whether each target appears;
judging whether the historical result in the summary handle and the output result are true;
updating the target area feature information, the area feature information quantity and the summary handle according to the judgment results, and generating a function return value;
when the function return value is true, encoding the extracted frames into a video summary.
- 2. The method according to claim 1, characterized in that filtering the target area feature information according to the target area feature information noise threshold and updating the output result comprises:
if the target area feature information is all noise, setting the output result to false; otherwise, keeping the output result true and filtering out the noise area feature information.
- 3. The method according to claim 2, characterized in that updating the target area feature information, the area feature information quantity and the summary handle according to the judgment results and generating the function return value comprises:
if the historical result and the output result are respectively false and true, filling the target area feature information and the area feature information quantity into the summary handle, setting the historical result in the summary handle to true, and generating a function return value of false;
if the historical result and the output result are respectively true and false, setting the historical area feature information quantity in the summary handle to 0, setting the historical result to false, and generating a function return value of false;
if the historical result and the output result are both false, generating a function return value of false;
if the historical result and the output result are both true, performing a fitting scan between the target area feature information and the historical area feature information in the summary handle, the historical result remaining true; if the fitting scan result is true, generating a function return value of true, otherwise generating a function return value of false.
- 4. The method according to claim 3, characterized in that performing a fitting scan between the target area feature information and the historical area feature information in the summary handle comprises:
scanning the current target area feature information and the historical target area feature information, the inputs to each scan being: the not-yet-fitted historical matrix feature information, the not-yet-fitted current matrix feature information, a position threshold, and a size threshold;
judging the target area from the position perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
performing a fitting judgment on the target area from the size perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
determining whether a trajectory exists by combining the position judgment result and the size judgment result;
when there is no unscanned historical target area feature information or current target area feature information left, if the judgment result is that a trajectory exists, the scan result is true; otherwise, the scan result is false.
- 5. The method according to claim 4, characterized in that judging the target area from the position perspective based on the historical area feature information and the current area feature information comprises:
calculating the center-position displacement between the current target area feature information and the historical target area feature information; when the center-position displacement is less than the position jitter threshold, judging the result to be jitter; otherwise, when the center-position displacement is less than the position change threshold, judging the result to be change; otherwise, judging the result to be invalid.
- 6. The method according to claim 4, characterized in that performing a fitting judgment on the target area from the size perspective based on the historical area feature information and the current area feature information comprises:
calculating the size difference between the current target area feature information and the historical target area feature information; when the size difference is less than the area jitter threshold, judging the result to be jitter; otherwise, when the size difference is less than the area change threshold, judging the result to be change; otherwise, judging the result to be invalid.
- 7. A video summary generation apparatus, characterized by comprising:
an acquisition unit, configured to establish a summary handle that preserves the overall state of the video summary, and to use a frame-sampling method to obtain the output result, the target area feature information and the area feature information quantity of the recognition analysis of the sampled video frames;
a first judgment unit, configured to judge whether the output result is true;
a first filtering unit, configured to, when the output result is true, filter the target area feature information according to a target area feature information noise threshold and update the output result;
a second filtering unit, configured to filter the output result according to a robustness threshold for whether each target appears;
a second judgment unit, configured to judge whether the historical result in the summary handle and the output result are true;
a generation unit, configured to update the target area feature information, the area feature information quantity and the summary handle according to the judgment results and to generate a function return value;
an encoding unit, configured to, when the function return value is true, encode the extracted frames into a video summary.
- 8. The apparatus according to claim 7, characterized in that the first filtering unit is configured to set the output result to false when the target area feature information is all noise, and otherwise to keep the output result true and filter out the noise area feature information.
- 9. The apparatus according to claim 8, characterized in that the generation unit includes:
a first generation module, configured to, when the historical result and the output result are respectively false and true, fill the target area feature information and the area feature information quantity into the summary handle, set the historical result in the summary handle to true, and generate a function return value of false;
a second generation module, configured to, when the historical result and the output result are respectively true and false, set the historical area feature information quantity in the summary handle to 0, set the historical result to false, and generate a function return value of false;
a third generation module, configured to, when the historical result and the output result are both false, generate a function return value of false;
a fourth generation module, configured to, when the historical result and the output result are both true, perform a fitting scan between the target area feature information and the historical area feature information in the summary handle, the historical result remaining true; if the fitting scan result is true, the generated function return value is true, otherwise the generated function return value is false.
- 10. The apparatus according to claim 9, characterized in that the fourth generation module includes:
a scanning submodule, configured to scan the current target area feature information and the historical target area feature information, the inputs to each scan being: the not-yet-fitted historical matrix feature information, the not-yet-fitted current matrix feature information, a position threshold, and a size threshold;
a first decision sub-module, configured to judge the target area from the position perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
a second decision sub-module, configured to perform a fitting judgment on the target area from the size perspective based on the historical area feature information and the current area feature information, yielding one of three results: jitter, change, or invalid;
a combined judgment submodule, configured to determine whether a trajectory exists by combining the position judgment result and the size judgment result;
a fitting scan result generation submodule, configured to, when there is no unscanned historical or current feature information region left, set the scan result to true if the judgment result is that a trajectory exists, and to false if the judgment result is that no trajectory exists.
- 11. The apparatus according to claim 10, characterized in that the first decision sub-module is configured to calculate the center-position displacement between the current target area feature information and the historical target area feature information: when the center-position displacement is less than the position jitter threshold, the judgment is jitter; otherwise, when the center-position displacement is less than the position change threshold, the judgment is change; otherwise, the judgment is invalid.
- 12. The apparatus according to claim 10, characterized in that the second decision sub-module is configured to calculate the size difference between the current target area feature information and the historical target area feature information: when the size difference is less than the area jitter threshold, the judgment is jitter; otherwise, when the size difference is less than the area change threshold, the judgment is change; otherwise, the judgment is invalid.
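For orientation only, the flow of claim 1 can be paraphrased end to end in Python. Every name and signature here is an assumption for illustration (`analyze`, `fit_scan`, `encode`, the `score` field), not the patented implementation; the per-target robustness filtering is omitted for brevity:

```python
def generate_summary(frames, analyze, fit_scan, encode, noise_threshold):
    """Illustrative paraphrase of the claim-1 flow (not the patented code).

    analyze(frame) -> (output_result, regions, count); fit_scan compares
    current regions with those stored in the handle; encode turns the kept
    frames into the summary. All signatures are assumed stand-ins.
    """
    handle = {"history": False, "regions": [], "count": 0}  # summary handle
    kept = []
    for frame in frames:                                    # sampled frames
        ok, regions, count = analyze(frame)
        if ok:
            regions = [r for r in regions if r["score"] >= noise_threshold]
            ok = bool(regions)                              # noise filtering
        if not handle["history"] and ok:                    # start a segment
            handle.update(history=True, regions=regions, count=count)
            continue
        if handle["history"] and not ok:                    # close the segment
            handle.update(history=False, count=0)
            continue
        if handle["history"] and ok and fit_scan(regions, handle["regions"]):
            kept.append(frame)                              # return value true
    return encode(kept) if kept else None
```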
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610425059.4A CN107493520A (en) | 2016-06-13 | 2016-06-13 | A kind of video abstraction generating method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610425059.4A CN107493520A (en) | 2016-06-13 | 2016-06-13 | A kind of video abstraction generating method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107493520A true CN107493520A (en) | 2017-12-19 |
Family
ID=60642058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610425059.4A Pending CN107493520A (en) | 2016-06-13 | 2016-06-13 | A kind of video abstraction generating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107493520A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103259964A (en) * | 2012-11-21 | 2013-08-21 | 上海申瑞电网控制系统有限公司 | Image signal filtering method of video dynamic target capture |
CN103929685A (en) * | 2014-04-15 | 2014-07-16 | 中国华戎控股有限公司 | Video abstract generating and indexing method |
CN104424638A (en) * | 2013-08-27 | 2015-03-18 | 深圳市安芯数字发展有限公司 | Target tracking method based on shielding situation |
KR20150084567A (en) * | 2014-01-14 | 2015-07-22 | 한화테크윈 주식회사 | System and method for browsing summary image |
CN105184258A (en) * | 2015-09-09 | 2015-12-23 | 苏州科达科技股份有限公司 | Target tracking method and system and staff behavior analyzing method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jiang et al. | Scene change detection techniques for video database systems | |
US20130293776A1 (en) | System and method for extracting text captions from video and generating video summaries | |
CN110751224A (en) | Training method of video classification model, video classification method, device and equipment | |
CN101389029B (en) | Method and apparatus for video image encoding and retrieval | |
CN111598026A (en) | Action recognition method, device, equipment and storage medium | |
CN109922334B (en) | Video quality identification method and system | |
CN111783712A (en) | Video processing method, device, equipment and medium | |
CN109195011B (en) | Video processing method, device, equipment and storage medium | |
CN112434608A (en) | Human behavior identification method and system based on double-current combined network | |
CN113111823A (en) | Abnormal behavior detection method and related device for building construction site | |
CN112818955A (en) | Image segmentation method and device, computer equipment and storage medium | |
CN116363554A (en) | Method, system, medium, equipment and terminal for extracting key frames of surveillance video | |
CN109660762A (en) | Size figure correlating method and device in intelligent candid device | |
CN105245817A (en) | Video playback method and video playback device | |
Fu et al. | Texture segmentation based video compression using convolutional neural networks | |
CN113971831A (en) | Dynamically updated face recognition method and device and electronic equipment | |
CN107493520A (en) | A kind of video abstraction generating method and device | |
CN109299132B (en) | SQL data processing method and system and electronic equipment | |
CN117112814A (en) | False media content mining and identification system and identification method thereof | |
JPH05197810A (en) | Command input processing method by image | |
CN107368532A (en) | A kind of user agent's field information processing method and system | |
KR102504321B1 (en) | Apparatus and method for online action detection | |
CN114626994A (en) | Image processing method, video processing method, computer equipment and storage medium | |
KR100772788B1 (en) | Motion activity description method and apparatus for video | |
CN112929662B (en) | Coding method for solving object overlapping problem in code stream structured image coding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20171219 |