CN103810711A - Keyframe extracting method and system for monitoring system videos - Google Patents

Keyframe extracting method and system for monitoring system videos

Info

Publication number
CN103810711A
Authority
CN
China
Prior art keywords
key frame
video
pixel
frame
keyframe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410074061.2A
Other languages
Chinese (zh)
Inventor
王书栋
耿静
曹仰杰
郝伟伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHENGZHOU RIXING ELECTRONICS Co Ltd
Original Assignee
ZHENGZHOU RIXING ELECTRONICS Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHENGZHOU RIXING ELECTRONICS Co Ltd filed Critical ZHENGZHOU RIXING ELECTRONICS Co Ltd
Priority to CN201410074061.2A priority Critical patent/CN103810711A/en
Publication of CN103810711A publication Critical patent/CN103810711A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a keyframe extraction method and system for surveillance system videos. The method comprises the following steps: (a) using a background subtraction method to extract the keyframe sequence containing moving objects in the video; (b) computing, based on a joint histogram, the similarity of every two adjacent frames in the keyframe sequence, extracting the key frames whose similarity is less than a threshold, and adding them to a key frame set; (c) judging whether the index interval between every two adjacent key frames in the key frame set is less than a given interval and, if so, deleting the key frame with the smaller information entropy and updating the key frame set. With this keyframe extraction method and system for surveillance system videos, the key frames containing moving objects can be extracted from massive surveillance video with a simple algorithm, the redundant data in the keyframe sequence are markedly reduced, and the storage volume of the video data is markedly reduced.

Description

Keyframe extraction method for surveillance system video and system thereof
Technical field
The present invention relates to the field of intelligent video surveillance, and in particular to a keyframe extraction method for surveillance system video and a system thereof.
Background technology
With the development of digital video processing technology and the rise of public safety awareness, surveillance equipment is widely used in all trades and professions. This produces massive volumes of surveillance video data, making operations such as storing, retrieving and browsing the video complex and time-consuming. It is therefore of great significance for present-day surveillance video to store and browse the useful information within this massive monitoring data quickly and effectively. For fast browsing and efficient use of the monitoring data, keyframe extraction technology is particularly important.
A key frame is a limited subset of video frames that can represent the main content of a video sequence. In recent years, keyframe extraction techniques for different application purposes have begun to develop. Shot-cluster boundaries have been chosen using the relative entropy (Kullback-Leibler divergence, KLD) between generalized Gaussian density feature vectors, with key frames then extracted according to similarity and diversity criteria. Key frames have also been extracted from saliency map (AVI) descriptions based on a visual attention model, by detecting shot boundaries and extracting key frames within shots, and by "enhanced three-dimensional key frame" methods that condense the salient content information of surveillance video segments. However, the above keyframe extraction methods suffer from complex algorithms and a large amount of computation, and they all extract key frames by processing every frame sequence in the video. Real surveillance video may contain a large number of pure background frames, so these methods cannot specifically extract the video segments containing only moving objects that people want to inspect, and keyframe extraction is difficult.
Therefore, a keyframe extraction method and device for surveillance system video are needed that can extract, with a simple algorithm, only the key frames containing moving objects from massive surveillance video.
Summary of the invention
The object of the present invention is to address the above problems of the prior art by proposing a keyframe extraction method based on moving object detection for video surveillance, and a device thereof, so as to overcome the defects of the prior art.
According to one aspect of the present invention, a keyframe extraction method for surveillance system video is provided, the method comprising the following steps: a) extracting, with a background subtraction method, the keyframe sequence containing moving objects in the video; b) computing the similarity of every two adjacent frames in the keyframe sequence based on a joint histogram, extracting the key frames whose similarity is less than a threshold, and adding them to a key frame set; c) judging whether the index interval between two adjacent key frames in the key frame set is less than a given interval and, if so, deleting the key frame with the smaller information entropy and updating the key frame set.
Preferably, in step a, the background subtraction method builds the background model with a Gaussian mixture model containing two Gaussian components.
Preferably, in step a, whether a frame contains a moving object is detected by detecting changed pixels.
Preferably, in step a, in the Gaussian mixture background model, if the current intensity value of a pixel is x_t, the probability that the pixel belongs to the two background models is computed by formula (1):

P(x_t) = η_1(x_t, μ_{1,t}, Σ_{1,t}) + η_2(x_t, μ_{2,t}, Σ_{2,t})   (1)

where η_1 and η_2 denote the two Gaussian model functions, and Σ_{i,t} and μ_{i,t} are respectively the covariance and the mean of the two Gaussian models at time t.
Preferably, in step a, a neighbourhood pixel model consistency check is also performed when detecting the changed pixels.
Preferably, the changed pixels are detected as follows: the intensity value of each pixel of the current frame is differenced against the means of its two corresponding background pixel models, and the pixel is judged to be a changed pixel if the differences are greater than a set threshold.
Preferably, in the Gaussian mixture model H_k, whether a pixel of the current frame is a changed pixel is computed by formula (2):

|x_t − μ_{i,t}| > λ·σ_{i,t}, for both background models i = 1, 2   (2)

where λ is a constant coefficient.
Preferably, whether a pixel of the current frame is a changed pixel is computed by formula (3), which additionally requires the condition of formula (2) to hold for the pixels in the neighbourhood N(x, y) of the pixel:

|x_t(s) − μ_{i,t}(s)| > λ·σ_{i,t}(s), for i = 1, 2, at (x, y) and at every s in N(x, y)   (3)
Preferably, in step a, the many small pixel blocks that appear in the foreground because of background change are eliminated by size filtering, reducing errors of the surveillance system background model.
Preferably, step b further comprises the steps of: b1) taking the first key frame in the keyframe sequence as the current key frame; b2) putting the current key frame into the key frame set; b3) taking the next key frame from the keyframe sequence as the comparison key frame; b4) computing the similarity between the current key frame and the comparison key frame; b5) judging whether the similarity is less than a second threshold; if so, going to step b6, otherwise going to step b7; b6) taking the comparison key frame as the current key frame and adding it to the key frame set; b7) checking whether the comparison key frame is the last frame of the keyframe sequence; if so, the procedure ends and the key frame set is the preliminarily refined keyframe sequence; otherwise, returning to step b3.
Preferably, in step b4, the method for computing the similarity between the key frames is a keyframe extraction method based on a joint histogram, which judges the degree of similarity of the images according to the symmetry of the joint histogram.
Preferably, for two images both of size M × N, the joint probability of the corresponding pixel value pair (i, j) is expressed as formula (4):

P(i, j) = N_{i,j} / (M·N)   (4)

where N_{i,j} is the number of positions at which the first image has grey value i and the second image has grey value j, and i and j range over the grey levels of the images.
Preferably, the symmetry of the joint histogram is defined by formula (5) [formula image not reproduced in the source], in which one weight applies to the entries on the diagonal of the joint histogram, in this case a positive constant less than 1, and other weights apply to the entries k positions away from the diagonal, where k is an integer.
Preferably, in step c, the image information entropy of a key frame in the keyframe sequence is computed by formula (6):

H = −Σ_{i=0}^{L−1} p_i · log p_i   (6)

where L is the number of grey levels of the image, f(x, y) is the grey value of pixel (x, y), and p_i is the probability of occurrence of grey level i.
Preferably, when the image of a key frame in the keyframe sequence is a colour image, the luminance component is used instead of the grey levels to compute the image information entropy.
Preferably, the interval in step c is 15-25.
Preferably, the interval is 20.
According to a further aspect of the present invention, a surveillance video system using the keyframe extraction method is provided. The system comprises an acquisition module, a compression module, a moving object detection module, a keyframe extraction module and a display module. The acquisition module is used to capture video; the compression module compresses the video captured by the acquisition module; the moving object detection module performs moving object detection on the video compressed by the compression module and uses a background subtraction method to extract the keyframe sequence containing moving objects in the video; the keyframe extraction module extracts key frames from the video sequence containing moving objects output by the moving object detection module, the extraction comprising the following two steps: a) computing the similarity of every two adjacent frames in the keyframe sequence based on a joint histogram, extracting the key frames whose similarity is less than a threshold, and adding them to a key frame set; and b) judging whether the index interval between two adjacent key frames in the key frame set is less than a given interval and, if so, deleting the key frame with the smaller information entropy and updating the key frame set. The display module is used to display the captured video output by the acquisition module, the compressed video output by the compression module, the intrusion alarm video output by the moving object detection module, and the key frame video output by the keyframe extraction module.
With the keyframe extraction method for video of the present invention, only the key frames containing moving objects are extracted from massive surveillance video with a simple algorithm, the redundant data in the keyframe sequence can be significantly reduced, and the storage volume of the video data is thereby significantly reduced.
Accompanying drawing explanation
With reference to the accompanying drawings, further objects, functions and advantages of the present invention will be illustrated by the following description of embodiments of the present invention, in which:
Fig. 1 schematically shows the flowchart of the keyframe extraction method for video of the present invention.
Fig. 2 schematically shows the flowchart of the preliminary refinement of the keyframe sequence.
Fig. 3 schematically shows the structure of a surveillance video system using the keyframe extraction method according to the present invention.
Embodiment
The objects and functions of the present invention, and the methods and devices for achieving them, will be illustrated by reference to exemplary embodiments. However, the present invention is not limited to the exemplary embodiments disclosed below; it can be realized in different forms. The essence of the description is only to help those skilled in the art comprehensively understand the specific details of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the drawings, identical reference numerals denote identical or similar parts, or identical or similar steps.
Fig. 1 schematically shows the flowchart of the keyframe extraction method for video of the present invention. As shown in Fig. 1:
Step 110: a background subtraction method is used to extract the keyframe sequence containing moving objects in the video. The background subtraction method detects a video segment containing moving objects and determines the start frame and the end frame of the segment: the start frame is the frame in which a moving object is first detected, and the end frame is the frame immediately before the frame in which the moving object disappears. The keyframe sequence of this video segment containing moving objects (from the start frame to the end frame) is thus obtained. Background subtraction, also called background differencing, detects moving targets by comparing the current frame against a background model.
Preferably, a simplified Gaussian mixture background model is used. This simplified model fixes the number of background components at two and adds a neighbourhood pixel model consistency check, so that the background model has good speed and robustness when detecting changed pixels. Whether a frame contains a moving object is detected through the detection of changed pixels.
In the simplified Gaussian mixture background model, if the current intensity value of a pixel is x_t, the probability that the pixel belongs to the two background models is computed by formula (1):

P(x_t) = η_1(x_t, μ_{1,t}, Σ_{1,t}) + η_2(x_t, μ_{2,t}, Σ_{2,t})   (1)

where η_1 and η_2 denote the two Gaussian model functions, and Σ_{i,t} and μ_{i,t} are respectively the covariance and the mean of the two Gaussian models at time t.
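The following NumPy sketch illustrates one reading of formula (1): both component densities are evaluated at the current intensity and summed. Since the formula images are not reproduced in the source, the exact form (in particular the absence of mixture weights) is an assumption, and the function names are illustrative only.

```python
import numpy as np

def gaussian_density(x, mean, var):
    """Scalar Gaussian density eta(x; mean, var), evaluated element-wise."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def two_gaussian_background_prob(frame, means, variances):
    """Probability that each pixel intensity x_t of `frame` belongs to the
    two-component background model, read here as the plain sum of the two
    densities (an assumed reading of formula (1)).

    frame:     H x W array of grey intensities x_t
    means:     2 x H x W array holding mu_{1,t} and mu_{2,t}
    variances: 2 x H x W array holding the (scalar) variances of the two models
    """
    x = frame.astype(np.float64)
    return (gaussian_density(x, means[0], variances[0])
            + gaussian_density(x, means[1], variances[1]))
```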
With the background subtraction method, the intensity value of each pixel of every frame (the current frame to be processed) is differenced against the corresponding background-frame pixel intensity value, and pixels whose result is greater than a set threshold (hereinafter, the first threshold) are judged to be changed pixels. In the above two-Gaussian background model H_k, whether the current pixel is a changed pixel is computed by formula (2):

|x_t − μ_{i,t}| > λ·σ_{i,t}, for both background models i = 1, 2   (2)

where λ is a constant coefficient.
To eliminate spurious changed pixels caused by clutter in the background, when judging whether the current pixel is a changed pixel, not only the pixel (x, y) in the background is checked but its neighbourhood pixels are checked at the same time: the intensity value of each pixel of the current frame is differenced against the means of its two corresponding background pixel models, and the pixel is considered a changed pixel only when the difference results of the current pixel and of the pixels in its neighbourhood all exceed the set threshold. Therefore, with N(x, y) denoting the neighbourhood pixel coordinates of point (x, y), the changed-pixel detection formula is revised to formula (3):

|x_t(s) − μ_{i,t}(s)| > λ·σ_{i,t}(s), for i = 1, 2, at (x, y) and at every s in N(x, y)   (3)
Preferably, the many small pixel blocks that appear in the foreground because of background change (for example, the swinging of plant leaves in the background) are eliminated by size filtering, which reduces errors of the surveillance system background model.
With the above formulas it can be judged whether the current frame contains a moving object. A run of consecutive frames containing moving objects is called a motion fragment; for each such video segment a frame similarity formula can be used to find one representative frame as its key frame, while the start frame and the end frame correspond to the start and end frames of the segment. The keyframe sequence can thus be constructed.
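The changed-pixel test, the neighbourhood consistency check, the size filtering and the grouping of flagged frames into motion fragments can be sketched as follows. The threshold form lam * sigma, the 3x3 neighbourhood, the minimum blob area and the helper names are assumptions made for illustration; OpenCV's erosion and connected-component routines stand in for the neighbourhood check and the size filter.

```python
import numpy as np
import cv2

def changed_pixel_mask(frame, means, stds, lam=2.5, min_area=50):
    """Changed-pixel detection for one frame under the two-Gaussian model.

    A pixel is marked as changed when its difference to the mean of *both*
    background components exceeds lam * sigma (an assumed reading of
    formulas (2)/(3)); a 3x3 neighbourhood-consistency check (erosion) and a
    connected-component area filter then suppress spurious responses.
    """
    f = frame.astype(np.float64)
    changed = np.ones(f.shape, dtype=bool)
    for i in range(2):                       # assumed form of formula (2)
        changed &= np.abs(f - means[i]) > lam * stds[i]
    # assumed form of formula (3): the condition must also hold over N(x, y)
    changed = cv2.erode(changed.astype(np.uint8), np.ones((3, 3), np.uint8))
    # size filtering: drop small foreground blobs caused by background change
    n, labels, stats, _ = cv2.connectedComponentsWithStats(changed, connectivity=8)
    keep = np.zeros_like(changed)
    for lbl in range(1, n):
        if stats[lbl, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == lbl] = 1
    return keep.astype(bool)

def motion_segments(flags):
    """Group consecutive frames flagged as containing moving objects into
    (start_frame, end_frame) motion fragments, as in step 110."""
    segments, start = [], None
    for t, has_motion in enumerate(flags):
        if has_motion and start is None:
            start = t
        elif not has_motion and start is not None:
            segments.append((start, t - 1))  # end frame = frame before disappearance
            start = None
    if start is not None:
        segments.append((start, len(flags) - 1))
    return segments
```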
Step 120: the keyframe sequence is preliminarily refined by computing the similarity of adjacent frames. For the keyframe sequence obtained in step 110, the degree of similarity between adjacent frames is computed, the key frames whose similarity is less than a threshold (hereinafter, the second threshold) are extracted and added to a key frame set, and this key frame set is the preliminarily refined keyframe sequence. Fig. 2 schematically shows the flowchart of the preliminary refinement of the keyframe sequence; as shown in Fig. 2, the steps of the preliminary refinement are as follows:
Step 1201: the first key frame of the keyframe sequence is assigned as the current key frame.
Step 1202: the current key frame is put into the key frame set.
Step 1203: the next key frame is taken from the keyframe sequence as the comparison key frame.
Step 1204: the similarity between the current key frame and the key frame taken from the keyframe sequence is computed; this key frame is the one compared against the current key frame (called the comparison key frame in the drawing).
Step 1205: it is judged whether the similarity is less than the second threshold; if so, step 1206 is entered, otherwise step 1207 is entered. A larger similarity value indicates that the two frames are more similar.
Step 1206: the comparison key frame becomes the new current key frame and is added to the key frame set.
Step 1207: it is checked whether the comparison key frame is the last frame of the keyframe sequence; if so, the procedure ends and the key frame set is the preliminarily refined keyframe sequence; otherwise, the procedure returns to step 1203.
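A minimal sketch of this refinement loop is given below. The function and parameter names are illustrative; `similarity` stands for the joint-histogram similarity described next, with larger values meaning more similar frames.

```python
def refine_keyframes(keyframes, similarity, second_threshold):
    """Preliminary refinement of a keyframe sequence (steps 1201-1207)."""
    if not keyframes:
        return []
    current = keyframes[0]           # step 1201: first key frame becomes current
    selected = [current]             # step 1202: put it into the key frame set
    for candidate in keyframes[1:]:  # step 1203: take the next comparison frame
        s = similarity(current, candidate)   # step 1204
        if s < second_threshold:             # step 1205: sufficiently different
            current = candidate              # step 1206: becomes the new current
            selected.append(candidate)
    return selected                  # step 1207: the preliminarily refined sequence
```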
Preferably, the method for computing the similarity between key frames in step 1204 is the keyframe extraction method based on a joint histogram, which judges the degree of similarity of the images according to the symmetry of the joint histogram.
Specifically, the joint histogram of two images of identical size represents the frequency with which pairs of grey values occur at corresponding positions of the two images. For two images both of size M × N, the joint probability of the corresponding pixel value pair (i, j) is expressed as formula (4):

P(i, j) = N_{i,j} / (M·N)   (4)

where N_{i,j} is the number of positions at which the first image has grey value i and the second image has grey value j, and i and j range over the grey levels of the images.
According to the above formula, evaluating P over all possible pixel value pairs (i, j) yields the joint histogram of the two images.
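A direct NumPy sketch of formula (4), assuming 8-bit grey images of equal size; the function name is illustrative.

```python
import numpy as np

def joint_histogram(f, g, levels=256):
    """Joint probability P(i, j): the fraction of positions at which the first
    image has grey value i and the second image has grey value j."""
    assert f.shape == g.shape, "the two images must have the same size M x N"
    hist = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(hist, (f.ravel().astype(np.intp), g.ravel().astype(np.intp)), 1.0)
    return hist / f.size
```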
The symmetry of the joint histogram is defined by formula (5) [formula image not reproduced in the source], in which one weight applies to the entries on the diagonal of the joint histogram, in this case a positive constant less than 1, and other weights apply to the entries k positions away from the diagonal, where k is an integer.
The symmetry value intuitively expresses the similarity between the two frames: the closer it is to 1, the more symmetric the joint histogram and the greater the similarity of the two images.
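Because the exact weighting of formula (5) is not reproduced, the sketch below uses one plausible choice, geometric down-weighting of off-diagonal entries, purely to illustrate how a symmetry-style similarity that approaches 1 for identical frames can be computed from the joint histogram; it is an assumption, not the patent's formula.

```python
import numpy as np

def joint_histogram_similarity(hist, w=0.5):
    """Similarity derived from a joint histogram: diagonal entries count fully,
    entries k levels off the diagonal are down-weighted by w**k, so the value
    approaches 1 as the mass concentrates on the diagonal (assumed weighting)."""
    i, j = np.indices(hist.shape)
    weights = np.power(float(w), np.abs(i - j))
    total = hist.sum()
    return float((weights * hist).sum() / total) if total > 0 else 0.0
```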
Step 130: when the index interval between two adjacent key frames in the key frame set K is less than a certain threshold (hereinafter, the third threshold), the key frame with the smaller information entropy of the two is deleted, refining the keyframe sequence again. According to the continuous nature of surveillance video, in a continuously changing video sequence the feature values of consecutive frames change gradually and the image information of adjacent frames changes little. Since a key frame is the representative of the main content of a video segment, and the entropy of an image reflects the amount of information it contains, the amount of information contained in a key frame should be relatively large. Therefore, to reduce data redundancy, for key frames in the key frame set whose interval is less than the third threshold, the key frame with the smaller information entropy is deleted and the key frame with the larger image information entropy is retained, thereby refining the keyframe sequence again.
Preferably, the image information entropy is computed by formula (6):

H = −Σ_{i=0}^{L−1} p_i · log p_i   (6)

where L is the number of grey levels of the image, f(x, y) is the grey value of pixel (x, y), and p_i is the probability of occurrence of grey level i. When the image to be processed is a colour image, the luminance component of the image is used instead of the grey levels to compute the image information entropy.
Preferably, the third threshold is 15-25, and more preferably 20.
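The entropy of formula (6) and the interval-based pruning of step 130 can be sketched as follows. The log base, the pruning helper and the `indices`/`frames` structures are illustrative assumptions; for colour key frames the luminance component would be passed in place of the grey image.

```python
import numpy as np

def image_entropy(img, levels=256):
    """Grey-level information entropy of an image, per formula (6)."""
    hist = np.bincount(img.ravel().astype(np.intp), minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def prune_close_keyframes(indices, frames, third_threshold=20):
    """Step 130: when two adjacent key frames are closer than the third
    threshold (15-25, preferably 20), keep only the one with the larger
    information entropy. `indices` are the frame numbers of the key frames in
    the set and `frames` maps a frame number to its image (assumed structures)."""
    kept = []
    for idx in sorted(indices):
        if kept and idx - kept[-1] < third_threshold:
            if image_entropy(frames[idx]) > image_entropy(frames[kept[-1]]):
                kept[-1] = idx   # previous key frame has lower entropy: replace it
            # otherwise the new, lower-entropy key frame is dropped
        else:
            kept.append(idx)
    return kept
```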
The present invention also provides a surveillance video system using the keyframe extraction method according to the present invention. Fig. 3 schematically shows the structure of a surveillance video system using the keyframe extraction method according to the present invention. As shown in Fig. 3:
The surveillance video system 300 using the keyframe extraction method according to the present invention comprises: an acquisition module 301, a compression module 302, a moving object detection module 303, a keyframe extraction module 304 and a display module 305.
The acquisition module 301 is used to capture video, for example video captured by equipment such as cameras.
The compression module 302 compresses the video captured by the acquisition module 301. The compression algorithm is, for example, H.26x, MPEG-x or another conventional video coding algorithm.
The moving object detection module 303 performs moving object detection on the video compressed by the compression module 302, for example with the algorithm described in step 110. When applied in the field of video surveillance, once a moving object is detected this module can judge that an intrusion has occurred, and the detected intrusion video can be sent to the display module for an alarm.
The keyframe extraction module 304 extracts key frames from the video sequence containing moving objects output by the moving object detection module 303, for example with the algorithms described in step 120 and step 130.
The display module 305 is used to display the captured video output by the acquisition module 301, the compressed video output by the compression module 302, the intrusion alarm video output by the moving object detection module 303, and the key frame video output by the keyframe extraction module 304.
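As a sketch of how the five modules of system 300 might be composed, the following class wires user-supplied callables along the module boundaries described above; the class, field and parameter names are assumptions for illustration, not an interface defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class MonitoringPipeline:
    """Illustrative wiring of modules 301-305 (assumed structure)."""
    compress: Callable[[Sequence], Sequence]                         # module 302
    detect_segments: Callable[[Sequence], List[Tuple[int, int]]]     # module 303 (step 110)
    extract_keyframes: Callable[[Sequence, List[Tuple[int, int]]], List[int]]  # module 304
    display: Callable[..., None]                                     # module 305

    def run(self, captured_frames: Sequence) -> List[int]:
        compressed = self.compress(captured_frames)               # module 302
        segments = self.detect_segments(compressed)               # module 303; also drives alarms
        keyframes = self.extract_keyframes(compressed, segments)  # module 304 (steps 120, 130)
        self.display(captured_frames, compressed, segments, keyframes)  # module 305
        return keyframes
```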
With the keyframe extraction method for video of the present invention, only the key frames containing moving objects are extracted from massive surveillance video with a simple algorithm, the redundant data in the keyframe sequence can be significantly reduced, and the storage volume of the video data is thereby significantly reduced.
In combination with the description and practice of the invention disclosed here, other embodiments of the present invention will readily occur to and be understood by those skilled in the art. The description and embodiments are to be considered exemplary only; the true scope and spirit of the present invention are defined by the claims.

Claims (18)

1. A keyframe extraction method for surveillance system video, the method comprising the steps of:
a) extracting, with a background subtraction method, the keyframe sequence containing moving objects in the video;
b) computing the similarity of every two adjacent frames in the keyframe sequence based on a joint histogram, extracting the key frames whose similarity is less than a threshold, and adding them to a key frame set;
c) judging whether the index interval between two adjacent key frames in the key frame set is less than a given interval and, if so, deleting the key frame with the smaller information entropy and updating the key frame set.
2. The method according to claim 1, characterized in that in step a the background subtraction method builds the background model with a Gaussian mixture model containing two Gaussian components.
3. The method according to claim 1, characterized in that in step a whether a frame contains a moving object is detected by detecting changed pixels.
4. The method according to claim 2, characterized in that in step a, in the Gaussian mixture background model, if the current intensity value of a pixel is x_t, the probability that the pixel belongs to the two background models is computed by formula (1):

P(x_t) = η_1(x_t, μ_{1,t}, Σ_{1,t}) + η_2(x_t, μ_{2,t}, Σ_{2,t})   (1)

where η_1 and η_2 denote the two Gaussian model functions, and Σ_{i,t} and μ_{i,t} are respectively the covariance and the mean of the two Gaussian models at time t.
5. The method according to claim 3, characterized in that in step a a neighbourhood pixel model consistency check is also performed when detecting the changed pixels.
6. The method according to claim 2, characterized in that the changed pixels are detected as follows: the intensity value of each pixel of the current frame is differenced against the means of its two corresponding background pixel models, and the pixel is judged to be a changed pixel if the differences are greater than a set threshold.
7. The method according to claim 6, characterized in that in the Gaussian mixture model H_k, whether a pixel of the current frame is a changed pixel is computed by formula (2):

|x_t − μ_{i,t}| > λ·σ_{i,t}, for both background models i = 1, 2   (2)

where λ is a constant coefficient.
8. The method according to claim 6, characterized in that whether a pixel of the current frame is a changed pixel is computed by formula (3), which additionally requires the condition of formula (2) to hold for the pixels in the neighbourhood N(x, y) of the pixel:

|x_t(s) − μ_{i,t}(s)| > λ·σ_{i,t}(s), for i = 1, 2, at (x, y) and at every s in N(x, y)   (3)
9. The method according to claim 1, characterized in that in step a the many small pixel blocks that appear in the foreground because of background change are eliminated by size filtering, reducing errors of the surveillance system background model.
10. The method according to claim 1, characterized in that step b further comprises the steps of:
b1) taking the first key frame in the keyframe sequence as the current key frame;
b2) putting the current key frame into the key frame set;
b3) taking the next key frame from the keyframe sequence as the comparison key frame;
b4) computing the similarity between the current key frame and the comparison key frame;
b5) judging whether the similarity is less than a second threshold; if so, going to step b6, otherwise going to step b7;
b6) taking the comparison key frame as the current key frame and adding it to the key frame set;
b7) checking whether the comparison key frame is the last frame of the keyframe sequence; if so, the procedure ends and the key frame set is the preliminarily refined keyframe sequence; otherwise, returning to step b3.
11. The method according to claim 10, characterized in that in step b4 the method for computing the similarity between the key frames is a keyframe extraction method based on a joint histogram, which judges the degree of similarity of the images according to the symmetry of the joint histogram.
12. The method according to claim 11, characterized in that, for two images both of size M × N, the joint probability of the corresponding pixel value pair (i, j) is expressed as formula (4):

P(i, j) = N_{i,j} / (M·N)   (4)

where N_{i,j} is the number of positions at which the first image has grey value i and the second image has grey value j, and i and j range over the grey levels of the images.
13. The method according to claim 12, characterized in that the symmetry of the joint histogram is defined by formula (5) [formula image not reproduced in the source], in which one weight applies to the entries on the diagonal of the joint histogram, in this case a positive constant less than 1, and other weights apply to the entries k positions away from the diagonal, where k is an integer.
14. The method according to claim 1, characterized in that in step c the image information entropy of a key frame in the keyframe sequence is computed by formula (6):

H = −Σ_{i=0}^{L−1} p_i · log p_i   (6)

where L is the number of grey levels of the image, f(x, y) is the grey value of pixel (x, y), and p_i is the probability of occurrence of grey level i.
15. The method according to claim 14, characterized in that, when the image of a key frame in the keyframe sequence is a colour image, the luminance component is used instead of the grey levels to compute the image information entropy.
16. The method according to claim 1, characterized in that the interval in step c is 15-25.
17. The method according to claim 16, characterized in that the interval is 20.
18. A surveillance video system using the keyframe extraction method, the system comprising an acquisition module, a compression module, a moving object detection module, a keyframe extraction module and a display module, characterized in that:
the acquisition module is used to capture video;
the compression module compresses the video captured by the acquisition module;
the moving object detection module performs moving object detection on the video compressed by the compression module and uses a background subtraction method to extract the keyframe sequence containing moving objects in the video;
the keyframe extraction module extracts key frames from the video sequence containing moving objects output by the moving object detection module, the extraction comprising the following two steps: a) computing the similarity of every two adjacent frames in the keyframe sequence based on a joint histogram, extracting the key frames whose similarity is less than a threshold, and adding them to a key frame set; and b) judging whether the index interval between two adjacent key frames in the key frame set is less than a given interval and, if so, deleting the key frame with the smaller information entropy and updating the key frame set;
the display module is used to display the captured video output by the acquisition module, the compressed video output by the compression module, the intrusion alarm video output by the moving object detection module, and the key frame video output by the keyframe extraction module.
CN201410074061.2A 2014-03-03 2014-03-03 Keyframe extracting method and system for monitoring system videos Pending CN103810711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410074061.2A CN103810711A (en) 2014-03-03 2014-03-03 Keyframe extracting method and system for monitoring system videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410074061.2A CN103810711A (en) 2014-03-03 2014-03-03 Keyframe extracting method and system for monitoring system videos

Publications (1)

Publication Number Publication Date
CN103810711A true CN103810711A (en) 2014-05-21

Family

ID=50707432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410074061.2A Pending CN103810711A (en) 2014-03-03 2014-03-03 Keyframe extracting method and system for monitoring system videos

Country Status (1)

Country Link
CN (1) CN103810711A (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284198A (en) * 2014-10-27 2015-01-14 李向伟 Video concentration method
CN104837031A (en) * 2015-04-08 2015-08-12 中国科学院信息工程研究所 Method for high-speed self-adaptive video keyframe extraction
CN104853060A (en) * 2015-04-14 2015-08-19 武汉基数星通信科技有限公司 High-definition video preprocessing method and system
CN104980707A (en) * 2015-06-25 2015-10-14 浙江立元通信技术股份有限公司 Intelligent video patrol system
CN105100776A (en) * 2015-08-24 2015-11-25 深圳凯澳斯科技有限公司 Stereoscopic video screenshot method and stereoscopic video screenshot apparatus
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN105516735A (en) * 2015-12-11 2016-04-20 小米科技有限责任公司 Representation frame acquisition method and representation frame acquisition apparatus
CN105701843A (en) * 2016-04-15 2016-06-22 张志华 Unattended parking lot monitoring system
CN106470323A (en) * 2015-08-14 2017-03-01 杭州海康威视系统技术有限公司 The storage method of video data and equipment
CN106503112A (en) * 2016-10-18 2017-03-15 大唐软件技术股份有限公司 Video retrieval method and device
CN106780429A (en) * 2016-11-16 2017-05-31 重庆金山医疗器械有限公司 The extraction method of key frame of the WCE video sequential redundant image datas based on perceptual color space and crucial angle point
CN106911943A (en) * 2017-02-21 2017-06-30 腾讯科技(深圳)有限公司 A kind of video display method and its device
CN106960211A (en) * 2016-01-11 2017-07-18 北京陌上花科技有限公司 Key frame acquisition methods and device
CN107346547A (en) * 2017-07-04 2017-11-14 易视腾科技股份有限公司 Real-time foreground extracting method and device based on monocular platform
CN107578011A (en) * 2017-09-05 2018-01-12 中国科学院寒区旱区环境与工程研究所 The decision method and device of key frame of video
CN107886560A (en) * 2017-11-09 2018-04-06 网易(杭州)网络有限公司 The processing method and processing device of animation resource
CN108171189A (en) * 2018-01-05 2018-06-15 广东小天才科技有限公司 A kind of method for video coding, video coding apparatus and electronic equipment
WO2018166288A1 (en) * 2017-03-15 2018-09-20 北京京东尚科信息技术有限公司 Information presentation method and device
WO2019007020A1 (en) * 2017-07-05 2019-01-10 优酷网络技术(北京)有限公司 Method and device for generating video summary
WO2019085941A1 (en) * 2017-10-31 2019-05-09 腾讯科技(深圳)有限公司 Key frame extraction method and apparatus, and storage medium
CN109816769A (en) * 2017-11-21 2019-05-28 深圳市优必选科技有限公司 Scene based on depth camera ground drawing generating method, device and equipment
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 The Human bodys' response method of multiple features fusion
CN110781711A (en) * 2019-01-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Target object identification method and device, electronic equipment and storage medium
CN110795595A (en) * 2019-09-10 2020-02-14 安徽南瑞继远电网技术有限公司 Video structured storage method, device, equipment and medium based on edge calculation
CN110944159A (en) * 2019-12-31 2020-03-31 联想(北京)有限公司 Information processing method, electronic equipment and information processing system
CN110996183A (en) * 2019-07-12 2020-04-10 北京达佳互联信息技术有限公司 Video abstract generation method, device, terminal and storage medium
CN111289848A (en) * 2020-01-13 2020-06-16 甘肃省安全生产科学研究院有限公司 Composite data filtering method applied to intelligent thermal partial discharge instrument based on safety production
CN111752520A (en) * 2020-06-28 2020-10-09 Oppo广东移动通信有限公司 Image display method, image display device, electronic equipment and computer readable storage medium
CN111836072A (en) * 2020-05-21 2020-10-27 北京嘀嘀无限科技发展有限公司 Video processing method, device, equipment and storage medium
CN112906818A (en) * 2021-03-17 2021-06-04 东南数字经济发展研究院 Method for reducing redundancy of video data set during artificial intelligence training
CN112989112A (en) * 2021-04-27 2021-06-18 北京世纪好未来教育科技有限公司 Online classroom content acquisition method and device
CN113553979A (en) * 2021-07-30 2021-10-26 国电汉川发电有限公司 Safety clothing detection method and system based on improved YOLO V5
CN113596556A (en) * 2021-07-02 2021-11-02 咪咕互动娱乐有限公司 Video transmission method, server and storage medium
CN113794815A (en) * 2021-08-25 2021-12-14 中科云谷科技有限公司 Method, device and controller for extracting video key frame
CN117112833A (en) * 2023-10-24 2023-11-24 北京智汇云舟科技有限公司 Video static frame filtering method and device based on storage space optimization

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008176653A (en) * 2007-01-19 2008-07-31 Omron Corp Monitoring device, method, and program

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008176653A (en) * 2007-01-19 2008-07-31 Omron Corp Monitoring device, method, and program

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
周兵 et al.: "一种适合于监控视频内容检索的关键帧提取新方法" [A new keyframe extraction method suitable for surveillance video content retrieval], 《郑州大学学报(工学版)》 [Journal of Zhengzhou University (Engineering Science)], vol. 34, no. 3, 31 May 2013 (2013-05-31), pages 102-105 *
王瑞: "智能视频监控系统的研究与开发" [Research and development of an intelligent video surveillance system], 《万方学位论文数据库》 [Wanfang dissertation database] *
郝伟伟: "适用于监控视频的关键帧提取" [Keyframe extraction suitable for surveillance video], 《万方学位论文数据库》 [Wanfang dissertation database], 8 October 2013 (2013-10-08), page 19 *

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284198A (en) * 2014-10-27 2015-01-14 李向伟 Video concentration method
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN104837031A (en) * 2015-04-08 2015-08-12 中国科学院信息工程研究所 Method for high-speed self-adaptive video keyframe extraction
CN104837031B (en) * 2015-04-08 2018-01-30 中国科学院信息工程研究所 A kind of method of high-speed adaptive extraction key frame of video
CN104853060A (en) * 2015-04-14 2015-08-19 武汉基数星通信科技有限公司 High-definition video preprocessing method and system
CN104980707A (en) * 2015-06-25 2015-10-14 浙江立元通信技术股份有限公司 Intelligent video patrol system
CN104980707B (en) * 2015-06-25 2019-03-08 浙江立元通信技术股份有限公司 A kind of intelligent video patrol system
CN106470323A (en) * 2015-08-14 2017-03-01 杭州海康威视系统技术有限公司 The storage method of video data and equipment
CN106470323B (en) * 2015-08-14 2019-08-16 杭州海康威视系统技术有限公司 The storage method and equipment of video data
CN105100776B (en) * 2015-08-24 2017-03-15 深圳凯澳斯科技有限公司 A kind of three-dimensional video-frequency screenshot method and device
CN105100776A (en) * 2015-08-24 2015-11-25 深圳凯澳斯科技有限公司 Stereoscopic video screenshot method and stereoscopic video screenshot apparatus
CN105516735A (en) * 2015-12-11 2016-04-20 小米科技有限责任公司 Representation frame acquisition method and representation frame acquisition apparatus
CN105516735B (en) * 2015-12-11 2019-03-22 小米科技有限责任公司 Represent frame acquisition methods and device
CN106960211A (en) * 2016-01-11 2017-07-18 北京陌上花科技有限公司 Key frame acquisition methods and device
CN106960211B (en) * 2016-01-11 2020-04-14 北京陌上花科技有限公司 Key frame acquisition method and device
CN105701843A (en) * 2016-04-15 2016-06-22 张志华 Unattended parking lot monitoring system
CN106503112A (en) * 2016-10-18 2017-03-15 大唐软件技术股份有限公司 Video retrieval method and device
CN106780429A (en) * 2016-11-16 2017-05-31 重庆金山医疗器械有限公司 The extraction method of key frame of the WCE video sequential redundant image datas based on perceptual color space and crucial angle point
CN106780429B (en) * 2016-11-16 2020-04-21 重庆金山医疗器械有限公司 Method for extracting key frame of WCE video time sequence redundant image data based on perception color space and key corner
CN106911943A (en) * 2017-02-21 2017-06-30 腾讯科技(深圳)有限公司 A kind of video display method and its device
CN106911943B (en) * 2017-02-21 2021-10-26 腾讯科技(深圳)有限公司 Video display method and device and storage medium
CN108629224A (en) * 2017-03-15 2018-10-09 北京京东尚科信息技术有限公司 Information demonstrating method and device
WO2018166288A1 (en) * 2017-03-15 2018-09-20 北京京东尚科信息技术有限公司 Information presentation method and device
CN108629224B (en) * 2017-03-15 2019-11-05 北京京东尚科信息技术有限公司 Information demonstrating method and device
CN107346547A (en) * 2017-07-04 2017-11-14 易视腾科技股份有限公司 Real-time foreground extracting method and device based on monocular platform
CN107346547B (en) * 2017-07-04 2020-09-04 易视腾科技股份有限公司 Monocular platform-based real-time foreground extraction method and device
WO2019007020A1 (en) * 2017-07-05 2019-01-10 优酷网络技术(北京)有限公司 Method and device for generating video summary
CN107578011A (en) * 2017-09-05 2018-01-12 中国科学院寒区旱区环境与工程研究所 The decision method and device of key frame of video
WO2019085941A1 (en) * 2017-10-31 2019-05-09 腾讯科技(深圳)有限公司 Key frame extraction method and apparatus, and storage medium
CN107886560B (en) * 2017-11-09 2021-05-25 网易(杭州)网络有限公司 Animation resource processing method and device
CN107886560A (en) * 2017-11-09 2018-04-06 网易(杭州)网络有限公司 The processing method and processing device of animation resource
CN109816769A (en) * 2017-11-21 2019-05-28 深圳市优必选科技有限公司 Scene based on depth camera ground drawing generating method, device and equipment
CN108171189A (en) * 2018-01-05 2018-06-15 广东小天才科技有限公司 A kind of method for video coding, video coding apparatus and electronic equipment
CN110781711A (en) * 2019-01-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Target object identification method and device, electronic equipment and storage medium
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 The Human bodys' response method of multiple features fusion
CN110996183A (en) * 2019-07-12 2020-04-10 北京达佳互联信息技术有限公司 Video abstract generation method, device, terminal and storage medium
CN110795595A (en) * 2019-09-10 2020-02-14 安徽南瑞继远电网技术有限公司 Video structured storage method, device, equipment and medium based on edge calculation
CN110795595B (en) * 2019-09-10 2024-03-05 安徽南瑞继远电网技术有限公司 Video structured storage method, device, equipment and medium based on edge calculation
CN110944159A (en) * 2019-12-31 2020-03-31 联想(北京)有限公司 Information processing method, electronic equipment and information processing system
CN111289848A (en) * 2020-01-13 2020-06-16 甘肃省安全生产科学研究院有限公司 Composite data filtering method applied to intelligent thermal partial discharge instrument based on safety production
CN111836072A (en) * 2020-05-21 2020-10-27 北京嘀嘀无限科技发展有限公司 Video processing method, device, equipment and storage medium
CN111752520A (en) * 2020-06-28 2020-10-09 Oppo广东移动通信有限公司 Image display method, image display device, electronic equipment and computer readable storage medium
CN112906818A (en) * 2021-03-17 2021-06-04 东南数字经济发展研究院 Method for reducing redundancy of video data set during artificial intelligence training
CN112989112B (en) * 2021-04-27 2021-09-07 北京世纪好未来教育科技有限公司 Online classroom content acquisition method and device
CN112989112A (en) * 2021-04-27 2021-06-18 北京世纪好未来教育科技有限公司 Online classroom content acquisition method and device
CN113596556A (en) * 2021-07-02 2021-11-02 咪咕互动娱乐有限公司 Video transmission method, server and storage medium
CN113596556B (en) * 2021-07-02 2023-07-21 咪咕互动娱乐有限公司 Video transmission method, server and storage medium
CN113553979A (en) * 2021-07-30 2021-10-26 国电汉川发电有限公司 Safety clothing detection method and system based on improved YOLO V5
CN113553979B (en) * 2021-07-30 2023-08-08 国电汉川发电有限公司 Safety clothing detection method and system based on improved YOLO V5
CN113794815A (en) * 2021-08-25 2021-12-14 中科云谷科技有限公司 Method, device and controller for extracting video key frame
CN117112833A (en) * 2023-10-24 2023-11-24 北京智汇云舟科技有限公司 Video static frame filtering method and device based on storage space optimization
CN117112833B (en) * 2023-10-24 2024-01-12 北京智汇云舟科技有限公司 Video static frame filtering method and device based on storage space optimization

Similar Documents

Publication Publication Date Title
CN103810711A (en) Keyframe extracting method and system for monitoring system videos
CN110235138B (en) System and method for appearance search
US7957557B2 (en) Tracking apparatus and tracking method
CN108734107B (en) Multi-target tracking method and system based on human face
KR101891225B1 (en) Method and apparatus for updating a background model
CN103729858B (en) A kind of video monitoring system is left over the detection method of article
US20100284670A1 (en) Method, system, and apparatus for extracting video abstract
CN104978567B (en) Vehicle checking method based on scene classification
US9953240B2 (en) Image processing system, image processing method, and recording medium for detecting a static object
US9904868B2 (en) Visual attention detector and visual attention detection method
CN110782433B (en) Dynamic information violent parabolic detection method and device based on time sequence and storage medium
CN111291633A (en) Real-time pedestrian re-identification method and device
CN108229346B (en) Video summarization using signed foreground extraction and fusion
CN112561951B (en) Motion and brightness detection method based on frame difference absolute error and SAD
CN103093198A (en) Crowd density monitoring method and device
Patil et al. Global abnormal events detection in surveillance video—A hierarchical approach
CN101877135B (en) Moving target detecting method based on background reconstruction
Ouyang et al. The comparison and analysis of extracting video key frame
CN112581489A (en) Video compression method, device and storage medium
CN108573217B (en) Compression tracking method combined with local structured information
Chen et al. An image restoration and detection method for picking robot based on convolutional auto-encoder
CN110889347A (en) Density traffic flow counting method and system based on space-time counting characteristics
Zhu et al. Detection and Recognition of Abnormal Running Behavior in Surveillance Video.
CN114694080A (en) Detection method, system and device for monitoring violent behavior and readable storage medium
KR102085034B1 (en) Method and Apparatus for Detecting Foregroud Image with Separating Foregroud and Background in Image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140521

WD01 Invention patent application deemed withdrawn after publication