CN104063883A - Surveillance video abstract generating method based on combination of object and key frames - Google Patents

Surveillance video abstract generating method based on combination of object and key frames

Info

Publication number
CN104063883A
CN104063883A
Authority
CN
China
Prior art keywords
frame
video
picture
moving objects
similarity
Prior art date
Legal status
Granted
Application number
CN201410321281.0A
Other languages
Chinese (zh)
Other versions
CN104063883B (en)
Inventor
裘加林
陈建群
王亚沛
张标标
李仁旺
Current Assignee
HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Original Assignee
HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Priority date
Filing date
Publication date
Application filed by HANGZHOU YINJIANG ZHIHUI MEDICAL GROUP CO Ltd
Priority to CN201410321281.0A
Publication of CN104063883A
Application granted
Publication of CN104063883B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of video image processing, in particular to a surveillance video abstract generating method based on a combination of objects and key frames. The method comprises the steps of extracting motion fragments, building a mixed Gaussian background model, detecting moving objects by background subtraction, tracking moving objects with Kalman prediction, extracting key frames after image frames are accurately clustered through preliminary extraction and further classification, extracting the moving objects, and synthesizing the abstract. Its advantages are: an object-based main algorithm flow for video abstraction is adopted and an improved key-frame extraction method is integrated, with key frames extracted after accurate clustering of image frames through preliminary extraction and further classification, so that redundant video information is reduced as far as possible without omitting key information, and the degree of video compression is good; meanwhile, a reasonable video synthesis procedure is provided so that a user can read the surveillance content of a video in a short time and locate useful information quickly and effectively.

Description

A surveillance video abstract generation method based on combining objects and key frames
Technical field
The present invention relates to the field of video image processing, and in particular to a surveillance video abstract generation method based on combining objects and key frames.
Background technology
With the development of digital video technology, ever more surveillance equipment is deployed throughout society. These devices monitor in real time and collect massive amounts of video every day, so the requirements on video analysis and processing keep rising. Ordinary surveillance equipment requires staff to watch the video, but quickly browsing massive video data to accurately and rapidly locate the events of a given time period is an extremely time- and labor-consuming job. Moreover, operators often have to attend to many monitors at once, cannot follow every detail in real time, and easily miss events through lapses of attention.
To help operators browse video data quickly and query video information conveniently, video abstraction methods based on content analysis have been proposed. A video abstract, also called video condensation, is a simplified summary of video content: moving targets are analyzed and extracted, the trajectory of each target is analyzed, different targets are spliced into a common background scene, and they are combined in some way into new video content. Existing video abstraction research is mainly based either on key frames alone or on objects alone. Key-frame-based methods combine a sequence of key frames chosen from the original video into a shorter video, but they cannot fully express the video semantics, lose a large amount of video information, and cannot capture the video as a whole. Object-based methods can depict the trajectories of moving objects, but their degree of video compression is weak.
Summary of the invention
The present invention overcomes the above shortcomings. Its object is to provide a surveillance video abstract generation method based on combining objects and key frames, which solves the problems of incomplete video semantics and weak compression in existing video abstraction.
The present invention achieves the above object through the following technical scheme: a surveillance video abstract generation method based on combining objects and key frames, comprising the following steps:
1) comparing every two adjacent image frames to obtain a change value, comparing the change value with a threshold, keeping the images whose change value exceeds the threshold as motion fragments, and composing the motion fragments into a new video;
2) dividing the new video evenly into several segments by video time, generating a background model from the image frames of each segment with a mixed Gaussian background model, and storing each background model in a background database with its timestamp;
3) at any time t, comparing the current frame $I_t$ with the background model $B_t$ to obtain the difference value at each point (x, y), comparing the difference value with a threshold T, and taking the parts whose difference exceeds T as moving objects and the rest as background;
4) tracking the moving objects with a Kalman filtering algorithm and storing the trajectory information of all moving objects in an object database;
5) marking each detected moving object, comparing similarity using the object's color features, grouping image frames of high similarity into classes, computing the mean of each class of image frames, and choosing the frame with the smallest difference from the mean as the key frame;
6) applying the GraphCut algorithm to the marked moving-object regions to extract the moving objects, taking out and copying the background images of the different time periods from the background database, and placing the extracted moving objects in chronological order at their corresponding positions on the background images to form the video abstract.
Preferably, in step 5), the first frame is taken as the reference frame with color histogram $V_1$, and each subsequent frame has color histogram $V_i$; the similarity between $V_i$ and $V_1$ is $S_i$. Each frame is divided into M×N blocks, and the computation is:

$$S_i = \sum_{j=1}^{M \times N} S_{ij} M_j \,/\, (M \times N)$$

$$S_{ij} = |V_{ij} - V_{1j}|$$

where $S_{ij}$ is the similarity value of block j of frame i and $M_j$ is the weight factor of block j.

Image frames are classified by these formulas: if $S_i$ is at most the threshold α, the frame is grouped with $V_1$ into the first class $L_1$; if $\alpha < S_i \le \alpha_0$, it is classified as a pending frame; if $S_i > \alpha_0$, it is classified into the second class $L_2$. The first frame of $L_2$ then serves as the reference frame for similarity comparison with the following frames, until all image frames have been compared.
Preferably, a worker thread is created for the pending frames, and color similarity is compared with channel separation, specifically comprising the following steps:
For the input reference frame image and pending frame image, the cvSplit function separates the RGB-format image into the images bImg, gImg and rImg for the B, G and R channels; the three color channels of the reference frame and the pending frame are compared for similarity individually; if the similarities of all three channels are within their thresholds, the pending frame and the reference frame belong to the same class, otherwise the pending frame is classified into the next class.
Preferably, each detected moving object is marked with its minimum bounding rectangle, and this rectangle serves both as the similarity comparison region and as the moving-object extraction region.
Preferably, step 6) specifically comprises the following steps:
6.1) taking out the background image frame obtained for the first time period;
6.2) placing all moving objects extracted in this time period at their corresponding positions on the background image in chronological order and numbering them;
6.3) rendering overlapping object parts within a frame translucently;
6.4) stopping object placement once the number of objects placed in a frame exceeds a set threshold;
6.5) copying the background image frame and continuing to place objects until all are placed;
6.6) taking out the background image frame of the next time period and the objects extracted in that period, and repeating steps 6.2)-6.5);
6.7) combining all synthesized image frames into the final video abstract.
The beneficial effects of the invention are: the method adopts an object-based main algorithm flow for video abstraction and incorporates an improved key-frame extraction method, in which key frames are extracted after image frames are accurately clustered through preliminary extraction and further classification; without omitting key information, redundant video information is reduced as much as possible, giving a good degree of video compression. A reasonable video synthesis procedure is also proposed, so that a user can read the surveillance content of a video in a short time and locate useful information quickly and effectively.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the method of the invention;
Fig. 2 is a flow chart of the key-frame extraction steps;
Fig. 3 shows the block-weight partition of an image frame in the embodiment of the invention.
Embodiment
The invention is further described below in conjunction with specific embodiments, but the protection scope of the invention is not limited to them:
Embodiment 1: as shown in Fig. 1, a surveillance video abstract generation method based on combining objects and key frames comprises the following steps:
(1) Extracting motion fragments:
Consecutive frames are compared pairwise. If the change between two frames exceeds a threshold, a moving object is considered to have appeared and the frames are extracted as a motion fragment; otherwise they are discarded as non-motion frames.
Taking an image frame of size m × n as an example, let $G_{ij}$ be the gray value of each pixel; the pixel matrix of frame K is:

$$G_K = \begin{pmatrix} G^K_{11} & G^K_{12} & \cdots & G^K_{1n} \\ G^K_{21} & G^K_{22} & \cdots & G^K_{2n} \\ \vdots & \vdots & & \vdots \\ G^K_{m1} & G^K_{m2} & \cdots & G^K_{mn} \end{pmatrix}$$

Let $A = (1\;1\;\cdots\;1)$ be a row vector of m ones and $B = (1\;1\;\cdots\;1)^T$ a column vector of n ones. Then:

$$C_K = A \times G_K = \left( \sum_{i=1}^{m} G^K_{i1},\; \sum_{i=1}^{m} G^K_{i2},\; \cdots,\; \sum_{i=1}^{m} G^K_{in} \right) \quad (1)$$

$$R_K = G_K \times B = \left( \sum_{j=1}^{n} G^K_{1j},\; \sum_{j=1}^{n} G^K_{2j},\; \cdots,\; \sum_{j=1}^{n} G^K_{mj} \right)^T \quad (2)$$

Likewise, $C_{K+1}$ and $R_{K+1}$ for frame K+1 are:

$$C_{K+1} = A \times G_{K+1} = \left( \sum_{i=1}^{m} G^{K+1}_{i1},\; \cdots,\; \sum_{i=1}^{m} G^{K+1}_{in} \right) \quad (3)$$

$$R_{K+1} = G_{K+1} \times B = \left( \sum_{j=1}^{n} G^{K+1}_{1j},\; \cdots,\; \sum_{j=1}^{n} G^{K+1}_{mj} \right)^T \quad (4)$$

Subtracting the corresponding entries of frames K and K+1 and taking absolute values gives:

$$C_d = |C_{K+1} - C_K| = \left( \left| \sum_{i=1}^{m} G^{K+1}_{i1} - \sum_{i=1}^{m} G^K_{i1} \right|,\; \cdots,\; \left| \sum_{i=1}^{m} G^{K+1}_{in} - \sum_{i=1}^{m} G^K_{in} \right| \right) \quad (5)$$

$$R_d = |R_{K+1} - R_K| = \left( \left| \sum_{j=1}^{n} G^{K+1}_{1j} - \sum_{j=1}^{n} G^K_{1j} \right|,\; \cdots,\; \left| \sum_{j=1}^{n} G^{K+1}_{mj} - \sum_{j=1}^{n} G^K_{mj} \right| \right)^T \quad (6)$$

Let

$$\Sigma_C = C_d \times B, \qquad \Sigma_R = A \times R_d$$

This yields two scalars, $\Sigma_C$ and $\Sigma_R$, representing the aggregate column and row contrasts, which are compared with thresholds $T_C$ and $T_R$. If

$$\Sigma_C > T_C \;\|\; \Sigma_R > T_R \quad (7)$$

that is, if either the row or the column contrast exceeds its threshold, the frame is judged part of a motion fragment; otherwise it is a non-motion frame and is discarded to reduce redundant video.
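As a concrete illustration, the row/column projection test of Eqs. (1)-(7) can be sketched in NumPy as follows; the threshold values are illustrative choices, not taken from the patent:

```python
import numpy as np

def is_motion_frame(prev, curr, t_col=50.0, t_row=50.0):
    """Decide whether frame `curr` belongs to a motion fragment.

    Row and column grayscale projections of two consecutive frames
    are compared; the frame is kept when either aggregate difference
    exceeds its threshold, matching Eq. (7).
    """
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # C_d: per-column |difference of column sums| (Eq. 5)
    c_d = np.abs(curr.sum(axis=0) - prev.sum(axis=0))
    # R_d: per-row |difference of row sums| (Eq. 6)
    r_d = np.abs(curr.sum(axis=1) - prev.sum(axis=1))
    sigma_c = c_d.sum()   # Sigma_C = C_d x B
    sigma_r = r_d.sum()   # Sigma_R = A x R_d
    return sigma_c > t_col or sigma_r > t_row
```

Two identical frames yield zero projections and are discarded, while a frame in which a bright block has appeared trips either threshold.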
(2) Mixed Gaussian background modeling:

K Gaussian models are maintained for each pixel to reflect the real background as it changes over time; each new image frame is matched against the models and the models are continuously updated. The mixed Gaussian model for each pixel is:

$$P(x_t) = \sum_{i=1}^{K} w_{i,t} \cdot \eta(x_t, u_{i,t}, \Sigma_{i,t}) \quad (8)$$

$$\eta(x_t, u_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2} |\Sigma_{i,t}|^{1/2}} \, e^{-\frac{1}{2}(x_t - u_{i,t})^T \Sigma_{i,t}^{-1} (x_t - u_{i,t})} \quad (9)$$

where $w_{i,t}$ is the weight of the i-th Gaussian model, $\eta(x_t, u_{i,t}, \Sigma_{i,t})$ is its probability density function, and $\Sigma_{i,t}$ is its covariance matrix.

The weight $w_{i,t}$ of the i-th Gaussian model at time t is updated as:

$$w_{i,t} = (1-\alpha) w_{i,t-1} + \alpha M_{i,t} \quad (10)$$

where α is the learning rate and $M_{i,t}$ is the match indicator: 1 for the matched model and 0 for the rest. Unmatched models remain unchanged, while the matched model is updated as:

$$u_t = (1-\rho) u_{t-1} + \rho x_t \quad (11)$$

$$\sigma_t^2 = (1-\rho) \sigma_{t-1}^2 + \rho (x_t - u_t)^T (x_t - u_t) \quad (12)$$

$$\rho = \alpha \, \eta(x_t \mid u_k, \sigma_k) \quad (13)$$

All Gaussian distributions are ranked by priority, and the top N distributions by weight generate the background model. N is selected as:

$$N = \arg\min_n \left( \sum_{k=1}^{n} w_k > T \right) \quad (14)$$

where T is a preset threshold.

Background models are extracted periodically: the video is divided uniformly into N time periods, a background model is generated from the image frames of each period using the mixed Gaussian method, and each model is stored in the background database with its timestamp.
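The per-pixel update of Eqs. (10)-(13) can be sketched as follows for a single grayscale pixel. This is a simplified illustration: ρ is approximated by α rather than computed from the density of Eq. (13), and the matching rule (2.5 standard deviations), learning rate, and replacement variance are assumed values, not from the patent:

```python
import numpy as np

def gmm_update(x, w, mu, var, alpha=0.05, match_sigmas=2.5):
    """One mixture-of-Gaussians update step for a single grayscale pixel.

    w, mu, var are length-K arrays of component weights, means and
    variances. The first component whose mean lies within
    `match_sigmas` standard deviations of x is matched; weights then
    follow w_t = (1-alpha) w_{t-1} + alpha * M (Eq. 10).
    """
    k = len(w)
    matched = None
    for i in range(k):
        if abs(x - mu[i]) <= match_sigmas * np.sqrt(var[i]):
            matched = i
            break
    m = np.zeros(k)
    if matched is None:
        # no match: replace the least-probable component with a new one
        j = int(np.argmin(w))
        mu[j], var[j] = x, 100.0
        matched = j
    m[matched] = 1.0
    w[:] = (1 - alpha) * w + alpha * m          # Eq. (10)
    rho = alpha                                  # simplification of Eq. (13)
    mu[matched] = (1 - rho) * mu[matched] + rho * x            # Eq. (11)
    var[matched] = (1 - rho) * var[matched] + rho * (x - mu[matched]) ** 2  # Eq. (12)
    w /= w.sum()
    return w, mu, var
```

Repeatedly observing values near one component's mean increases that component's weight, so it rises in the priority ranking of Eq. (14).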
(3) Moving-object detection by background subtraction:

From the background modeling above, the background model $B_t$ at any time t is available. The grayscale difference between the current frame $I_t$ and the background image at point (x, y) is:

$$\Delta d_t(x, y) = |I_t(x, y) - B_t(x, y)| \quad (15)$$

where $\Delta d_t(x, y)$, $I_t(x, y)$ and $B_t(x, y)$ denote the gray values of the difference image, the current frame and the background image at point (x, y) at time t. The difference image is compared with a threshold T: parts whose difference exceeds T are classified as moving objects, the rest as background.

$$d_t(x, y) = \begin{cases} 0, & \Delta d_t(x, y) < T \\ 1, & \Delta d_t(x, y) \ge T \end{cases} \quad (16)$$
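The thresholding of Eqs. (15)-(16) is a one-liner in NumPy; the threshold value here is an illustrative choice:

```python
import numpy as np

def foreground_mask(frame, background, t=30):
    """Binary foreground mask per Eqs. (15)-(16): pixels whose absolute
    grayscale difference from the background model is at least the
    threshold T are marked 1 (moving object), the rest 0 (background)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff >= t).astype(np.uint8)
```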
(4) Moving-object tracking based on Kalman prediction:

The Kalman filter first predicts where each moving object will appear in the next frame, giving the center coordinates of a predicted object rectangle. The Euclidean distance between this prediction and the center of each actually detected object rectangle is then computed. If the distance is below a threshold, the match succeeds and the object is updated; otherwise the original object is considered to have disappeared and the detection is tracked as a new object. Every tracked moving object is stored in the object database with its timestamp.

The Kalman filtering steps are as follows.

Prediction of the state vector:

$$\hat{X}(k, k-1) = A(k, k-1) \, \hat{X}(k-1) \quad (17)$$

Prediction of the error covariance:

$$P(k, k-1) = A(k, k-1) \, P(k-1) \, A^T(k, k-1) + Q(k-1) \quad (18)$$

Kalman gain:

$$K(k) = P(k, k-1) \, H^T(k) \left( H(k) \, P(k, k-1) \, H^T(k) + R(k) \right)^{-1} \quad (19)$$

Correction of the state vector:

$$\hat{X}(k) = \hat{X}(k, k-1) + K(k) \left( Z(k) - H(k) \, \hat{X}(k, k-1) \right) \quad (20)$$

Correction of the error covariance matrix:

$$P(k) = P(k, k-1) - K(k) \, H(k) \, P(k, k-1) \quad (21)$$

Let $\hat{X}_i(\hat{x}_{i,n+1}, \hat{y}_{i,n+1})$ be the center coordinates of the i-th object predicted by the Kalman filter, and $X_j(x_{j,n+1}, y_{j,n+1})$ the center coordinates of the j-th detected object. Their Euclidean distance is:

$$D_{i,j} = \sqrt{ (\hat{x}_{i,n+1} - x_{j,n+1})^2 + (\hat{y}_{i,n+1} - y_{j,n+1})^2 } \quad (22)$$

The binarized matching decision is:

$$X_j = \begin{cases} 1, & D_{i,j} \le T \\ 0, & D_{i,j} > T \end{cases} \quad (23)$$
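The prediction and gating steps (Eqs. 17-18 and 22-23) can be sketched as below. The constant-velocity state model, the gate radius, and the matrix values are illustrative assumptions; the patent does not fix them:

```python
import numpy as np

def kalman_predict_and_gate(x, P, A, Q, detection, H, gate=20.0):
    """Kalman prediction (Eqs. 17-18) followed by the centroid gating
    test of Eqs. (22)-(23). State x = [cx, cy, vx, vy]."""
    x_pred = A @ x                       # Eq. (17)
    P_pred = A @ P @ A.T + Q             # Eq. (18)
    z_pred = H @ x_pred                  # predicted rectangle centre
    d = np.linalg.norm(z_pred - detection)   # Eq. (22)
    return x_pred, P_pred, d <= gate     # Eq. (23): match if within gate

dt = 1.0
# constant-velocity transition and centroid observation matrices
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)
```

A detection near the predicted centroid is matched and updates the track; a far-away detection fails the gate and starts a new track.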
(5) Key-frame extraction, as shown in Fig. 2:

The invention adopts an improved key-frame extraction method: a weighted comparison in the system's main thread performs a preliminary extraction, while a worker thread, without affecting the real-time performance of the system, refines the result by comparing the color channels separately.
1) Preliminary extraction by weighted comparison:

Each detected moving object is marked in every frame with its minimum bounding rectangle; this rectangle serves as the matching region, and the pixels inside it are compared across image frames for similarity.

Similarity is measured with the object's color features: the color histogram V represents the color feature of an image frame, and S (short for "similar") denotes the similarity between objects.

Taking the first frame as reference, with color histogram $V_1$, the color histogram $V_i$ of each subsequent frame is compared with $V_1$ to give a similarity $S_i$. If $S_i$ is at most the threshold α, the frame is grouped with $V_1$ into the first class $L_1$; if $\alpha < S_i \le \alpha_0$, it is set aside as a pending frame; if $S_i > \alpha_0$, it starts the second class $L_2$. The first frame of $L_2$ then becomes the new reference frame for comparison with the following frames, and so on.

Because different pixels inside an object's rectangle matter differently for similarity comparison, the rectangle is divided into 16 blocks by a 4×4 grid, as shown in Fig. 3(b). Each block is compared for similarity first, and the blocks then contribute to the frame-level similarity according to the weight matrix of Fig. 3(a).

With $S_{ij}$ the similarity of block j of frame i, the similarity of each block region of frame i with the corresponding region of frame 1 is computed; applying the block weights $M_j$ then gives the similarity $S_i$ between $V_i$ and $V_1$:

$$S_{ij} = |V_{ij} - V_{1j}| \quad (24)$$

$$S_i = \sum_{j=1}^{16} S_{ij} M_j \,/\, 36 \quad (25)$$
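The weighted block comparison of Eqs. (24)-(25) can be sketched as follows. The weight matrix of Fig. 3(a) is not reproduced in the text, so the layout below (corners 1, edges 2, centre 4, summing to 36 to match the divisor of Eq. (25)) is an assumption:

```python
import numpy as np

# Assumed weight layout for the 4x4 block grid of Fig. 3(a):
# corner blocks weight 1, edge blocks weight 2, centre blocks weight 4
# (weights sum to 36, matching the divisor in Eq. (25)).
WEIGHTS = np.array([[1, 2, 2, 1],
                    [2, 4, 4, 2],
                    [2, 4, 4, 2],
                    [1, 2, 2, 1]], float)

def block_similarity(ref, frame, bins=8):
    """Weighted histogram-difference similarity of Eqs. (24)-(25).

    The object rectangle (grayscale, for simplicity) is split into a
    4x4 grid; per-block histograms are compared with the reference
    frame and the absolute differences are combined with the block
    weights. Smaller S_i means more similar frames.
    """
    h, w = ref.shape
    bh, bw = h // 4, w // 4
    s = 0.0
    for r in range(4):
        for c in range(4):
            rb = ref[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            fb = frame[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            hr, _ = np.histogram(rb, bins=bins, range=(0, 256), density=True)
            hf, _ = np.histogram(fb, bins=bins, range=(0, 256), density=True)
            s_ij = np.abs(hf - hr).sum()          # Eq. (24)
            s += s_ij * WEIGHTS[r, c]
    return s / 36.0                               # Eq. (25)
```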
This process completes the preliminary classification of similar frames. For frames classified as pending, a worker thread is created that separates the color channels and compares each channel's similarity individually, giving a more precise pixel-level comparison. The worker thread exists so that the finer discrimination does not affect the real-time performance of the main process: it only processes the pending frames handed over after each similarity comparison in the main thread, together with their reference frame.
2) Precise extraction by channel separation:

The worker thread proceeds as follows. For each input reference frame image and pending frame image, the cvSplit function in OpenCV separates the RGB-format image into the images bImg, gImg and rImg for the B, G and R channels. The similarities of the three color channels are then compared individually: if all three channel similarities are within their thresholds, the pending frame is classified with the reference class; otherwise it starts the next class.

Here $S_{iR}$, $S_{iG}$ and $S_{iB}$ denote the similarities of the R, G and B channels respectively, and $\alpha_R$, $\alpha_G$ and $\alpha_B$ the similarity thresholds set for the three channels.
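The worker thread's per-channel test can be sketched as below. The cvSplit call is emulated here with NumPy slicing, and the histogram comparison and threshold values are illustrative assumptions:

```python
import numpy as np

def channel_split_match(ref_bgr, cand_bgr, thresholds=(0.1, 0.1, 0.1), bins=16):
    """Per-channel similarity test of the worker thread.

    The BGR image is separated into its B, G and R planes (emulating
    cvSplit), a normalized-histogram difference is computed per plane,
    and the candidate joins the reference class only if every channel
    passes its threshold (standing in for alpha_B, alpha_G, alpha_R).
    """
    for ch in range(3):
        hr, _ = np.histogram(ref_bgr[..., ch], bins=bins, range=(0, 256), density=True)
        hc, _ = np.histogram(cand_bgr[..., ch], bins=bins, range=(0, 256), density=True)
        if np.abs(hr - hc).sum() > thresholds[ch]:
            return False   # one channel outside its threshold: next class
    return True            # all three channels within threshold: same class
```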
For the image frames $V_1 \ldots V_i$ classified into one class, the mean of all frames is taken, the frame with the smallest difference from the mean is kept as the key frame, and the remaining frames are deleted to reduce image redundancy:

$$\bar{V} = \sum_{j=1}^{i} V_j \,/\, i \quad (28)$$
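The key-frame choice around Eq. (28) can be sketched as follows; the L1 distance used to measure "difference from the mean" is an assumption, since the patent does not specify the metric:

```python
import numpy as np

def pick_key_frame(frames):
    """Key-frame choice of Eq. (28): within one cluster, compute the
    mean image (V-bar) and keep the index of the frame closest to it
    (L1 distance here); the remaining frames are discarded as
    redundant."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    mean = stack.mean(axis=0)                    # V-bar, Eq. (28)
    dists = [np.abs(f - mean).sum() for f in stack]
    return int(np.argmin(dists))
```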
A 58-minute segment of surveillance video was chosen for testing; the results are shown in Table 1:
Table 1
(6) Moving-object extraction and abstract synthesis:

The moving objects have been foreground-detected and marked above, but the detected objects are not yet segmented from the background finely enough to serve as the objects to be lifted out. To reduce computation, the detected object rectangle is taken as the working region, and the GraphCut algorithm is applied within it to extract the moving object; each extracted object is stored in the moving-object database with its timestamp.

The background images of the different time periods extracted above are then taken out and copied, and the extracted moving objects are placed in chronological order at their corresponding positions on the background images, finally forming the video abstract. The concrete steps are as follows:
1) Take out the background image frame obtained for the first time period;
2) Place all moving objects extracted in this time period at their corresponding positions on the background image in chronological order, and number them;
3) Render overlapping object parts within a frame translucently, so that the visual effect is not impaired;
4) Stop placing objects once the number of objects in a frame exceeds a set threshold;
5) Copy the background image frame and continue placing objects until all are placed;
6) Take out the background image frame of the next time period together with the objects extracted in that period and place them in the same way, until all are placed;
7) Combine all synthesized image frames into the final video abstract.
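The placement loop of steps 2)-5) can be sketched as follows. This is a grayscale sketch under assumed conventions (object patches given as (patch, position) pairs, alpha blending for overlaps, an illustrative per-frame object limit); the patent fixes none of these details:

```python
import numpy as np

def paste_objects(background, objects, max_per_frame=3, alpha=0.5):
    """Sketch of the synthesis loop: stamp extracted object patches
    onto copies of the background in chronological order, blending
    translucently where patches overlap (step 3), and start a fresh
    background copy once `max_per_frame` objects are placed (steps
    4-5). Each object is (patch, (row, col)), grayscale."""
    frames = []
    canvas = background.astype(np.float64).copy()
    occupied = np.zeros(background.shape[:2], bool)
    count = 0
    for patch, (r, c) in objects:
        h, w = patch.shape[:2]
        region = canvas[r:r+h, c:c+w]
        overlap = occupied[r:r+h, c:c+w]
        # translucent blend where an earlier object already sits
        blended = np.where(overlap, alpha * patch + (1 - alpha) * region, patch)
        canvas[r:r+h, c:c+w] = blended
        occupied[r:r+h, c:c+w] = True
        count += 1
        if count >= max_per_frame:           # step 4: this frame is full
            frames.append(canvas)
            canvas = background.astype(np.float64).copy()
            occupied[:] = False
            count = 0
    if count:                                # flush the partially filled frame
        frames.append(canvas)
    return frames
```

Lowering `max_per_frame` spreads the same objects over more synthesized frames, trading abstract length against visual clutter.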
The above describes specific embodiments of the invention and the technical principles employed. Changes made according to the inventive concept, whose resulting function does not exceed the spirit of the specification and drawings, also belong to the protection scope of the invention.

Claims (5)

1. A surveillance video abstract generation method based on combining objects and key frames, characterized by comprising the following steps:
1) comparing every two adjacent image frames to obtain a change value, comparing the change value with a threshold, keeping the images whose change value exceeds the threshold as motion fragments, and composing the motion fragments into a new video;
2) dividing the new video evenly into several segments by video time, generating a background model from the image frames of each segment with a mixed Gaussian background model, and storing each background model in a background database with its timestamp;
3) at any time t, comparing the current frame $I_t$ with the background model $B_t$ to obtain the difference value at each point (x, y), comparing the difference value with a threshold T, and taking the parts whose difference exceeds T as moving objects and the rest as background;
4) tracking the moving objects with a Kalman filtering algorithm and storing the trajectory information of all moving objects in an object database;
5) marking each detected moving object, comparing similarity using the object's color features, grouping image frames of high similarity into classes, computing the mean of each class, and choosing the frame with the smallest difference from the mean as the key frame;
6) applying the GraphCut algorithm to the marked moving-object regions to extract the moving objects, taking out and copying the background images of the different time periods from the background database, and placing the extracted moving objects in chronological order at their corresponding positions on the background images to form the video abstract.
2. The surveillance video abstract generation method based on combining objects and key frames according to claim 1, characterized in that in step 5) the first frame is taken as the reference frame with color histogram $V_1$, the color histogram of each subsequent frame is $V_i$, and the similarity between $V_i$ and $V_1$ is $S_i$; each frame is divided into M×N blocks, and the computation is:

$$S_i = \sum_{j=1}^{M \times N} S_{ij} M_j \,/\, (M \times N)$$

$$S_{ij} = |V_{ij} - V_{1j}|$$

where $S_{ij}$ is the similarity value of block j of frame i and $M_j$ is the weight factor of block j;
the image frames are classified by these formulas: if $S_i$ is at most the threshold α, the frame is grouped with $V_1$ into the first class $L_1$; if $\alpha < S_i \le \alpha_0$, it is classified as a pending frame; if $S_i > \alpha_0$, it is classified into the second class $L_2$; the first frame of $L_2$ then serves as the reference frame for similarity comparison with the following frames, until the comparison of all image frames is complete.
3. The surveillance video abstract generation method based on combining objects and key frames according to claim 2, characterized in that a worker thread is created for the pending frames and color similarity is compared with channel separation, specifically comprising the following steps:
for the input reference frame image and pending frame image, the cvSplit function separates the RGB-format image into the images bImg, gImg and rImg for the B, G and R channels; the three color channels of the reference frame and the pending frame are compared for similarity individually; if the similarities of all three channels are within their thresholds, the pending frame and the reference frame belong to the same class, otherwise the pending frame is classified into the next class.
4. The surveillance video abstract generation method based on combining objects and key frames according to claim 1, characterized in that each detected moving object is marked with its minimum bounding rectangle, and this rectangle serves both as the similarity comparison region and as the moving-object extraction region.
5. The surveillance video abstract generation method based on combining objects and key frames according to claim 1, characterized in that step 6) specifically comprises the following steps:
6.1) taking out the background image frame obtained for the first time period;
6.2) placing all moving objects extracted in this time period at their corresponding positions on the background image in chronological order and numbering them;
6.3) rendering overlapping object parts within a frame translucently;
6.4) stopping object placement once the number of objects placed in a frame exceeds a set threshold;
6.5) copying the background image frame and continuing to place objects until all are placed;
6.6) taking out the background image frame of the next time period and the objects extracted in that period, and repeating steps 6.2)-6.5);
6.7) combining all synthesized image frames into the final video abstract.
CN201410321281.0A, filed 2014-07-07: A surveillance video abstract generation method based on combining objects and key frames. Status: Active, granted as CN104063883B.

Priority Applications (1)

CN201410321281.0A, priority and filing date 2014-07-07

Publications (2)

CN104063883A (application), published 2014-09-24
CN104063883B (grant), published 2018-03-16

Family

ID=51551575

Country Status (1)

CN (1) CN104063883B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for pedestrian flow analysis and crowd accumulation detection in surveillance video streams
CN101996307A (en) * 2009-08-10 2011-03-30 上海理视微电子有限公司 Intelligent video human body identification method
CN102222214A (en) * 2011-05-09 2011-10-19 苏州易斯康信息科技有限公司 Fast object recognition algorithm
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on moving target detection and tracking

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284158A (en) * 2014-10-23 2015-01-14 南京信必达智能技术有限公司 Event-oriented intelligent camera monitoring method
CN105611430A (en) * 2014-11-14 2016-05-25 索尼公司 Method and system for processing video content
CN105611430B (en) * 2014-11-14 2018-10-12 索尼公司 Method and system for handling video content
CN104394379A (en) * 2014-12-05 2015-03-04 北京厚吉科技有限公司 Fast previewing system and fast viewing method of surveillance video
CN104394379B (en) * 2014-12-05 2017-10-13 北京厚吉科技有限公司 Monitoring video rapid preview system and rapid preview method
CN104469547B (en) * 2014-12-10 2017-06-06 西安理工大学 Video abstraction generation method based on arborescence moving target trajectory
CN104469547A (en) * 2014-12-10 2015-03-25 西安理工大学 Video abstraction generation method based on arborescence moving target trajectory
CN104867161B (en) * 2015-05-14 2018-03-23 国家电网公司 Video processing method and device
CN104867161A (en) * 2015-05-14 2015-08-26 国家电网公司 Video-processing method and device
CN104980707B (en) * 2015-06-25 2019-03-08 浙江立元通信技术股份有限公司 Intelligent video patrol system
CN104954893A (en) * 2015-06-25 2015-09-30 西安理工大学 Falsely-detected target chain deleting method for video abstract generation
CN104954893B (en) * 2015-06-25 2017-11-28 西安理工大学 Falsely-detected target chain deleting method for video abstract generation
CN104980707A (en) * 2015-06-25 2015-10-14 浙江立元通信技术股份有限公司 Intelligent video patrol system
CN105451023B (en) * 2015-11-20 2018-10-02 南京杰迈视讯科技有限公司 Motion sensing video storage system and method
CN106991111A (en) * 2015-11-20 2017-07-28 晶睿通讯股份有限公司 System and method for video file playback with pre-screening capability and computer program product
CN105451023A (en) * 2015-11-20 2016-03-30 南京杰迈视讯科技有限公司 Motion sensing video storage system and method
CN105469425A (en) * 2015-11-24 2016-04-06 上海君是信息科技有限公司 Video condensation method
CN107018352A (en) * 2016-01-27 2017-08-04 北京中科晶上科技有限公司 Video image processing method, apparatus and system
CN107360386A (en) * 2016-05-09 2017-11-17 杭州登虹科技有限公司 Method for reducing multimedia file size
CN106534653A (en) * 2016-12-30 2017-03-22 哈尔滨理工大学 Network imaging device for detecting indoor moving target and detection method thereof
CN107223344A (en) * 2017-01-24 2017-09-29 深圳大学 Static video abstract generation method and device
CN107133580A (en) * 2017-04-24 2017-09-05 杭州空灵智能科技有限公司 Synthesis method for 3D printing surveillance video
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium
CN108446373A (en) * 2018-03-16 2018-08-24 五八有限公司 Animation playing method, device, equipment and storage medium
CN108848422A (en) * 2018-04-19 2018-11-20 清华大学 Video abstract generation method based on target detection
CN108848422B (en) * 2018-04-19 2020-06-02 清华大学 Video abstract generation method based on target detection
CN109145771B (en) * 2018-08-01 2020-11-20 武汉普利商用机器有限公司 Face snapshot method and device
CN109145771A (en) * 2018-08-01 2019-01-04 武汉普利商用机器有限公司 Face snapshot method and device
CN110798656A (en) * 2018-08-03 2020-02-14 中国移动通信有限公司研究院 Method, device, medium and equipment for processing monitoring video file
CN111385672B (en) * 2018-12-31 2022-01-04 杭州登虹科技有限公司 Video concentration method based on event
CN111385672A (en) * 2018-12-31 2020-07-07 杭州登虹科技有限公司 Event-based video concentration technology
CN111510724A (en) * 2019-01-31 2020-08-07 北京小犀智能科技中心(有限合伙) Equivalent video compression storage method and system based on image feature extraction
CN111754512A (en) * 2020-07-17 2020-10-09 成都盛锴科技有限公司 Pantograph state information acquisition method and system
CN112333537A (en) * 2020-07-27 2021-02-05 深圳Tcl新技术有限公司 Video integration method and device and computer readable storage medium
CN112333537B (en) * 2020-07-27 2023-12-05 深圳Tcl新技术有限公司 Video integration method, device and computer readable storage medium
CN113873200A (en) * 2021-09-26 2021-12-31 珠海研果科技有限公司 Image identification method and system
CN113873200B (en) * 2021-09-26 2024-02-02 珠海研果科技有限公司 Image identification method and system
CN114245033A (en) * 2021-11-03 2022-03-25 浙江大华技术股份有限公司 Video synthesis method and device
CN114422848A (en) * 2022-01-19 2022-04-29 腾讯科技(深圳)有限公司 Video segmentation method and device, electronic equipment and storage medium
CN114697761A (en) * 2022-04-07 2022-07-01 脸萌有限公司 Processing method, processing device, terminal equipment and medium
US11676385B1 (en) 2022-04-07 2023-06-13 Lemon Inc. Processing method and apparatus, terminal device and medium
CN114697761B (en) * 2022-04-07 2024-02-13 脸萌有限公司 Processing method, processing device, terminal equipment and medium

Also Published As

Publication number Publication date
CN104063883B (en) 2018-03-16

Similar Documents

Publication Publication Date Title
CN104063883A (en) Surveillance video abstract generating method based on combination of object and key frames
CN101778260B (en) Method and system for monitoring and managing videos on basis of structured description
CN103546667B (en) Automatic news segmentation method for massive broadcast television supervision
CN110267061B (en) News splitting method and system
Avgerinakis et al. Recognition of activities of daily living for smart home environments
CN110084165A (en) Intelligent recognition and early warning method for abnormal events in open power scenes based on edge computing
Yang et al. Spatiotemporal trident networks: detection and localization of object removal tampering in video passive forensics
CN102496001A (en) Method and system for automatic object detection in video surveillance
CN104463232A (en) Dense crowd counting method based on HOG features and color histogram features
CN106033548B (en) Crowd anomaly detection method based on improved dictionary learning
CN103530638A (en) Method for matching pedestrians under multiple cameras
CN102222104A (en) Method for intelligently extracting video abstracts based on spatiotemporal fusion
CN102810208B (en) Criminal investigation video pre-filtering method based on travel direction detection
CN112990122B (en) Complex behavior identification method based on video basic unit analysis
CN104156729B (en) Classroom people counting method
CN103336955A (en) Method, device and client for generating character activity trajectories in video
CN106960176A (en) Pedestrian gender identification method based on extreme learning machine and color feature fusion
CN106792005A (en) Content detection algorithm based on combined audio and video
CN102663777A (en) Target tracking method and system based on multi-view video
CN104504733A (en) Video abstraction method and system based on moving target detection
CN106503170A (en) Image library construction method based on blocking dimension
CN106447695A (en) Method and device for judging same object in multi-object tracking
CN106572387A (en) Video sequence alignment method and system
CN106127813B (en) Surveillance video motion segment segmentation method based on visual energy perception
CN103020094B (en) Video playback count statistics method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant