CN102663452B - Suspicious act detecting method based on video analysis - Google Patents
Suspicious act detecting method based on video analysis
- Publication number: CN102663452B
- Application number: CN201210108381A
- Authority: CN (China)
- Prior art keywords: target, human body, frame, time, space
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Image Analysis (AREA)
Abstract
The invention relates to a suspicious-behavior detection method based on video analysis. The method comprises three steps: detecting human targets, modeling the trajectories of the human targets, and extracting and classifying the trajectory features of the human targets. By combining computer-assisted means with video-analysis technology, the method intelligently detects suspicious behavior in surveillance video so that it can be discovered and warned of in time, effectively reducing the threat that suspicious behavior poses to the monitored site. The method is also easy to install and convenient to use, and offers clear economic and social benefits.
Description
Technical field
The present invention relates to a suspicious-behavior detection method based on video analysis.
Background art
At present, the cameras widely deployed in banks, shops, parking lots and similar places usually allow a suspect to be found only by replaying recorded video after an abnormal event has occurred; they cannot raise an alert in real time. If the suspicious behavior of people in surveillance video could be detected intelligently, an alarm could be raised as the event occurs, avoiding loss of life and property.
Many suspicious-behavior detection methods already exist. Zhang Ruiyu et al., in "Intelligent monitoring algorithm based on running trace", proposed an abnormal-behavior recognition method based on motion trajectories: a background-subtraction method and a time-weighted frame-differencing method detect the moving human body, and a person's trajectory is tracked and recorded to judge whether the person is suspicious; however, the method mainly detects loitering, so its function is limited. Zhang Jin et al., in "Research on abnormal event detection in surveillance video", used trajectory extraction to detect and analyze loitering events, but that method can likewise only detect loitering. Zhou Weibai et al., in "Pedestrian abnormal-behavior recognition based on trajectory feature analysis", proposed a pedestrian-oriented video surveillance system that judges abnormal behavior from pedestrian motion-trajectory features, but the trajectory model is overly simple, and its false-alarm and missed-detection rates for complex behavior are high. Hu Weiming et al., in "A novel hierarchical self-organizing neural network method for trajectory pattern learning", used a neural network to model moving targets: the model is learned from a series of track points, and the target's direction and position at the next instant are predicted from the current track point and the model parameters, in order to detect vehicles moving in a suspicious direction or suspects in a parking lot; but because the method uses only the direction and position of the target along the trajectory, it is difficult for it to detect more complex suspicious behavior. Hu Zhilan et al., in "Abnormal behavior detection based on motion direction", proposed an anomaly-detection method that describes different actions by block motion directions and classifies abnormal behavior in real-time surveillance video with a support vector machine; its computational complexity is low and real-time monitoring is feasible, but detection is poor for complex backgrounds or occluded targets. Yin Yong et al., in "Abnormal behavior recognition based on improved Hu moments", proposed a recognition algorithm based on improved Hu moments that identifies six kinds of suspicious actions (jumping, sudden running, falling, squatting, waving, and carrying a foreign object), but the method requires rather fine human contours, which are difficult to extract in complex environments. In summary, some existing techniques are single-purpose and can only detect specific suspicious actions; some use models too simple to adapt to complex environments; and some lack real-time performance or accuracy, producing frequent false alarms and missed detections in practice.
Summary of the invention
In view of these deficiencies of the prior art, the present invention proposes a suspicious-behavior detection method based on video analysis that uses computer-assisted means and video-analysis technology to intelligently detect suspicious behavior in surveillance video, so that it can be discovered and warned of in time.
The proposed method comprises three main steps: human-target detection, trajectory modeling, and trajectory-feature extraction and classification. The flow is shown in Fig. 1 and detailed below:
One, human-target detection
In a surveillance scene, the targets of interest are moving human bodies. The present invention proposes a human-target detection method based on interval-frame differencing and contour matching, with the following steps:
Step1: In a typical real-time video acquisition system (frame rate 25 fps), the displacement of a target between two adjacent frames is very small, so the adjacent-frame differencing method has difficulty detecting the moving target. The present invention therefore detects moving targets with an interval-frame differencing method: three frames I0, It, I2t separated by an interval of t frames are chosen, and the frame-difference images E1 = |It - I0| and E2 = |I2t - It| are computed, where t is a positive integer measured in frames; the present invention takes t = 3 frames.
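The interval-frame differencing of Step1 can be sketched as follows (a minimal numpy sketch; the synthetic frame data and the moving one-pixel "target" are illustrative):

```python
import numpy as np

def frame_differences(frames, t=3):
    """Interval-frame differencing: from a frame sequence take I0, It, I2t
    and return the absolute difference images E1 = |It - I0|, E2 = |I2t - It|."""
    i0 = frames[0].astype(np.int32)
    it = frames[t].astype(np.int32)
    i2t = frames[2 * t].astype(np.int32)
    e1 = np.abs(it - i0).astype(np.uint8)
    e2 = np.abs(i2t - it).astype(np.uint8)
    return e1, e2

# Tiny synthetic example: a bright 1-pixel "target" moving right one pixel per frame.
frames = [np.zeros((4, 8), dtype=np.uint8) for _ in range(7)]
for f, img in enumerate(frames):
    img[2, f] = 255
e1, e2 = frame_differences(frames, t=3)
```

Because the target moved three pixels between the sampled frames, each difference image lights up at the old and the new position, which is exactly why the interval t is needed for slow targets.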
Step2: Determine the adaptive threshold T. Compute the mean of the frame-difference image and multiply it by a weighting coefficient to obtain the adaptive threshold: T = (beta / (M*N)) * sum of E(x, y), where M*N is the video image size and beta is the weighting coefficient; here beta = 10 is used.
Step3: Threshold segmentation yields the binary image MR; pixels whose value in MR is 1 are marked as motion target points.
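Steps 2 and 3 can be sketched together: the adaptive threshold is beta times the mean of the frame-difference image, and thresholding yields the binary mask MR. Fusing the two difference images by a pixel-wise maximum is an assumption, since the source does not state how E1 and E2 are combined:

```python
import numpy as np

def binarize(e1, e2, beta=10.0):
    """Adaptive thresholding: T = beta * mean(E), i.e. T = (beta / (M*N)) * sum(E).
    The two difference maps are fused by a pixel-wise maximum (an assumption),
    and pixels above T are marked 1 (moving target points) in the mask MR."""
    e = np.maximum(e1, e2)          # assumed fusion of E1 and E2
    t = beta * e.mean()             # adaptive threshold T
    return (e > t).astype(np.uint8)

e1 = np.zeros((10, 10)); e1[4, 4] = 200.0
e2 = np.zeros((10, 10)); e2[4, 5] = 180.0
mr = binarize(e1, e2, beta=10.0)
```

On this toy input the mean of the fused map is 3.8, so T = 38 and only the two bright pixels survive the segmentation.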
Step4: Object-block marking. In general, the targets in a binary frame image MR suffer from fragmentation and "holes", and there is also considerable noise. Therefore, a median filter with a 3 x 3 pixel window is first applied to smooth the object blocks and remove noise; then the opening operation of mathematical morphology is applied to fill the "holes" in the object blocks and merge adjacent blocks; finally, 8-adjacency connectivity search is used to find and mark the targets.
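The 8-adjacency connectivity search of Step4 can be sketched with a BFS flood fill (a minimal sketch; the median filtering and morphological opening that precede it are omitted here):

```python
import numpy as np
from collections import deque

def label_8(mask):
    """Label connected object blocks in a binary mask using 8-adjacency:
    each unvisited foreground pixel seeds a BFS over its 8 neighbors."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not labels[y, x]:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and mask[ny, nx] and not labels[ny, nx]:
                                labels[ny, nx] = count
                                q.append((ny, nx))
    return labels, count

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1, 1] = mask[2, 2] = 1   # diagonal neighbors: one block under 8-adjacency
mask[4, 4] = 1                # a separate block
labels, count = label_8(mask)
```

Note that the two diagonal pixels form a single block precisely because 8-adjacency (not 4-adjacency) is used.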
Step5: Human-target discrimination. Throughout the intelligent detection of suspicious behavior, the targets of interest are always human, and interfering targets such as animals and vehicles should be rejected as far as possible to reduce the system's false-alarm rate. The present invention therefore uses contour matching to reject interfering targets.
First, the contour of the target is detected: in the same binary frame image, a point (x, y) belongs to the contour if it satisfies two conditions [formulas omitted in the source].
Then the contour of the target is represented by normalized Fourier descriptors. For the n-th contour point with coordinates (x, y), let X[n] = x and Y[n] = y, and compute the Fourier descriptor [formula omitted in the source], where K is the total number of contour points. Because Fourier descriptors depend on the starting point of the curve and on the scale and orientation of the shape, they must be normalized [normalization formula omitted in the source]. The normalized Fourier descriptors are invariant to translation, rotation, and scale, and can therefore be used for contour matching.
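The normalized Fourier descriptor can be sketched in its standard form: the contour is read as the complex signal X[n] + jY[n], and dividing the FFT magnitudes by |a(1)| (dropping the low-order terms) yields invariance to translation, rotation, scale, and starting point. This standard construction is an assumption insofar as the patent's own formulas are omitted:

```python
import numpy as np

def fourier_descriptor(xs, ys):
    """Normalized Fourier descriptor of a closed contour.
    The magnitude spectrum discards rotation/starting-point phase; dropping
    a(0) removes translation; dividing by |a(1)| removes scale."""
    s = np.asarray(xs, float) + 1j * np.asarray(ys, float)
    a = np.fft.fft(s)
    mag = np.abs(a)
    return mag[2:] / mag[1]   # d(u) = |a(u)| / |a(1)| for u >= 2

# A circle and the same circle scaled and translated should give
# (numerically) identical descriptors.
ang = np.linspace(0, 2 * np.pi, 64, endpoint=False)
d1 = fourier_descriptor(np.cos(ang), np.sin(ang))
d2 = fourier_descriptor(3 * np.cos(ang) + 5, 3 * np.sin(ang) - 2)
diff = float(np.abs(d1 - d2).max())
```

The descriptor length here is K - 2 for a K-point contour; in practice a truncated prefix of the descriptor is often enough for matching.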
Finally, Euclidean distance is used for contour matching to judge the target's attribute. Suppose the Fourier descriptor of the target to be identified is d1(u) and the Fourier descriptor of the human template is d2(u); the shape difference d between the two is their Euclidean distance. A fixed threshold D is set; the present invention takes D = 0.02. If d < D, the target is considered human; otherwise it is considered an interfering target and is rejected.
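The discrimination step can be sketched as follows; treating the shape difference between the descriptors d1(u) and d2(u) as a plain Euclidean distance is an assumption, since the source omits the formula, and the descriptor values below are illustrative:

```python
import numpy as np

def shape_distance(d1, d2):
    """Shape difference between two (normalized) Fourier descriptors,
    taken here as the Euclidean distance (an assumption)."""
    return float(np.linalg.norm(np.asarray(d1, float) - np.asarray(d2, float)))

def is_human(d_target, d_human, D=0.02):
    """Keep the target as human when the shape difference stays below the
    fixed threshold D = 0.02; otherwise reject it as interference."""
    return shape_distance(d_target, d_human) < D

near = is_human([0.10, 0.20, 0.05], [0.10, 0.21, 0.05])   # distance 0.01
far = is_human([0.10, 0.20, 0.05], [0.50, 0.20, 0.05])    # distance 0.40
```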
Two, trajectory modeling
The trajectory information of a human target is one of the important bases for judging suspicious behavior, and building a reliable and stable trajectory model is the foundation of intelligent discrimination of suspicious human behavior. The present invention therefore proposes a trajectory modeling method based on a time-window quadruple. The quadruple of a track is denoted TR = {i, f, P(x, y), d(u)}, where i is the target sequence number, f is the video frame number, P(x, y) is the target centroid coordinate, and d(u) is the target contour descriptor.
For each human target in each video frame, the video frame number, target centroid, and target contour descriptor are recorded in turn. The contour descriptor was obtained in the previous step; the centroid can be computed as the mean coordinate of the pixels of the object block MR, whose size is W*H.
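The centroid computation can be sketched as the mean coordinate of the pixels marked 1 in the object block (a minimal numpy sketch):

```python
import numpy as np

def centroid(mr):
    """Centroid P(x, y) of a target block: the mean coordinate of all
    pixels marked 1 in the binary block MR."""
    ys, xs = np.nonzero(mr)
    return float(xs.mean()), float(ys.mean())

mr = np.zeros((5, 5), dtype=np.uint8)
mr[1:4, 2] = 1          # vertical bar at x = 2, y = 1..3
cx, cy = centroid(mr)
```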
The target sequence numbers are assigned as follows: in the first video frame, each human target is numbered in turn; for each human target appearing in subsequent frames, feature matching is first performed against each human target of the previous frame. If the match succeeds, the target receives the sequence number of the matching target in the previous frame; otherwise, it is given a new sequence number.
The feature matching of human targets uses a joint matching method that combines a spatial-domain constraint with frequency-domain and temporal features, as follows:
Step1: Spatial-domain constraint
Normally, even a running person cannot outpace the real-time sampling rate of the video, so in two adjacent frames the silhouettes of the same human target overlap. Targets that clearly cannot be the same person are therefore excluded by a spatial-domain constraint: an overlapping point (x, y) must satisfy, in the binary images of the two frames, the two conditions MR1(x, y) = 1 and MR0(x, y) = 1, where MR1 is the object block of the current frame and MR0 is the object block of the previous frame. If the two targets share an overlapping point, they may be the same target and matching continues to the next step; otherwise, the two targets are judged not to match and the matching process stops.
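The spatial-domain constraint reduces to testing whether the two binary object blocks share at least one pixel, which can be sketched as:

```python
import numpy as np

def may_match(mr0, mr1):
    """Spatial-domain constraint: two targets can be the same person only if
    their object blocks in consecutive frames overlap, i.e. some point (x, y)
    has MR0(x, y) = 1 and MR1(x, y) = 1."""
    return bool(np.any(np.logical_and(mr0 > 0, mr1 > 0)))

a = np.zeros((4, 4)); a[1, 1] = 1
b = np.zeros((4, 4)); b[1, 1] = 1   # overlaps a: matching may continue
c = np.zeros((4, 4)); c[3, 3] = 1   # disjoint from a: matching stops
```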
Step2: Frequency-domain feature matching
The Fourier-descriptor features already obtained are used for frequency-domain matching. Suppose the Fourier descriptor of the current-frame target is d1(u) and that of the previous-frame target is d2(u); as described above, the frequency-domain difference d between the targets is their shape difference. A fixed threshold D2 is set with D2 < D; the present invention takes D2 = 0.013. If d < D2, the two targets may be the same and matching continues to the next step; otherwise, the two targets are judged not to match and the matching process stops.
Step3: Temporal feature matching
Gradient-vector features are used for temporal matching. The gradient-vector feature of a target is obtained as follows: compute the gradient of each pixel with a gradient operator, yielding a gradient magnitude and a gradient direction [formulas omitted in the source]. The gradient-direction range [-pi/2, pi/2] is divided evenly into 9 intervals (denoted Area_k, 1 <= k <= 9), each pixel is characterized by a 9-dimensional gradient vector over these intervals, and the average gradient vector over the object block is taken as the target's gradient feature. Suppose the gradient feature of the current-frame target is V1 and that of the previous-frame target is V2; the temporal difference v between the targets is the difference between V1 and V2. A fixed threshold D3 is set; the present invention takes D3 = 0.14. If v < D3, the two targets are judged to be the same; otherwise, they are judged not to match.
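The 9-bin gradient feature and the temporal matching test can be sketched as follows; the central-difference gradient operator and the magnitude weighting of the bins are assumptions, since the patent's own formulas are not reproduced:

```python
import numpy as np

def gradient_feature(block):
    """9-dimensional gradient-direction feature of a target block.
    Gradients come from central differences (np.gradient, an assumption);
    arctan(gy/gx) lies in [-pi/2, pi/2] and is split into 9 equal bins;
    the bins accumulate gradient magnitude and are normalized to sum to 1."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan(np.divide(gy, gx, out=np.zeros_like(gy), where=gx != 0))
    bins = np.clip(((theta + np.pi / 2) / np.pi * 9).astype(int), 0, 8)
    v = np.zeros(9)
    for k in range(9):
        v[k] = mag[bins == k].sum()
    s = v.sum()
    return v / s if s > 0 else v

def temporal_match(v1, v2, D3=0.14):
    """Targets match when the distance between their average gradient
    vectors stays below the fixed threshold D3 = 0.14."""
    return float(np.linalg.norm(v1 - v2)) < D3

block = np.tile(np.arange(8, dtype=float), (8, 1))   # horizontal ramp: gx > 0, gy = 0
v = gradient_feature(block)
```

On the ramp every pixel has direction 0, so all the weight lands in the middle bin, and a target trivially matches itself.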
Because the trajectory of a human target is a function of time, once the quadruple of every target in every frame has been obtained, a time-window method is applied to form the time-window quadruple, denoted W_TR, where t0 is the initial video frame number and t_d is the interval frame count.
Three, trajectory-feature extraction and classification
Once the time-window quadruple of each human target has been obtained, trajectory features can be extracted. The present invention proposes a scalar/vector trajectory-feature extraction method based on the space-time discrete curve, described as follows:
Step1: space-time discrete curve scalar feature extraction
Connecting the centroid coordinates in the time-window quadruple yields a space-time discrete curve (as shown in Fig. 2). This curve reflects the motion trajectory of the human target within the time window and is an important basis for discriminating suspicious behavior. The scalar features extracted from it are: generalized curvature, space-time length, and the number of space-time inflection points, detailed as follows:
(1) Generalized curvature
First, for each discrete point on the space-time curve, compute the angle it forms with its two adjacent points as the angle feature of that point; taking point P2 as an example, its angle feature is the angle at P2 between the segments toward P1 and P3 [formula omitted in the source]. Then take the mean of all discrete-point angle features as the generalized curvature. For the space-time curve of Fig. 2, this is the mean angle of the 9 interior points.
(2) Space-time length
The space-time length feature can be replaced by the number of discrete points on the space-time curve, i.e. the space-time length equals the discrete-point count. For the curve shown in Fig. 2, the space-time length is 11.
(3) Space-time inflection count
A discrete point on the space-time curve whose angle feature is less than pi/2 is considered a space-time inflection point, and the number of such points is the space-time inflection count. On the curve shown in Fig. 2, P5 and P8 are inflection points, so the inflection count is 2.
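The three scalar features can be sketched for an arbitrary space-time discrete curve given as (x, y, frame) points (a minimal sketch):

```python
import numpy as np

def scalar_features(points):
    """Scalar features of a space-time discrete curve:
    generalized curvature = mean interior angle,
    space-time length = number of discrete points,
    inflection count = interior points whose angle is below pi/2."""
    pts = np.asarray(points, float)
    angles = []
    for i in range(1, len(pts) - 1):
        a = pts[i - 1] - pts[i]
        b = pts[i + 1] - pts[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    curvature = float(np.mean(angles))
    length = len(pts)
    inflections = sum(1 for ang in angles if ang < np.pi / 2)
    return curvature, length, inflections

# A straight-line trajectory: every interior angle is pi, so the
# generalized curvature is pi and there are no inflection points.
line = [(t, t, t) for t in range(5)]
curv, length, infl = scalar_features(line)
```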
Step2: space-time discrete curve vector feature extraction
For each discrete point on the space-time curve, two vector features are extracted: a spatial-domain vector and a time-domain vector.
(1) Spatial-domain vector
The spatial-domain vector describes the body posture of the human target while moving; it mainly distinguishes whether the body is upright, leaning forward or backward, or even fully on the ground, which helps to discriminate behaviors such as running, falling, crawling, and stooping. It is obtained as follows: first, the human contour is recovered from the contour descriptor in the quadruple; then an ellipse is fitted to the contour; finally, the major-axis vector of the ellipse is extracted as the spatial-domain vector.
(2) Time-domain vector
The time-domain vector describes the motion of the human target while moving. For any point on the space-time curve, the modulus of its time-domain vector is the Euclidean distance from the point to the next point, and its direction is the angle between the segment toward the next point and the horizontal direction. Taking P2 in Fig. 2 as an example, the modulus of its time-domain vector is the distance from P2 to P3, and its direction is the corresponding angle with the horizontal.
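The time-domain vector at a curve point can be sketched as follows (atan2 is used for the angle with the horizontal, so the direction is signed, an implementation choice):

```python
import math

def time_domain_vector(p, p_next):
    """Time-domain vector at a curve point:
    modulus = Euclidean distance to the next point,
    direction = angle between the segment to the next point and the horizontal."""
    dx = p_next[0] - p[0]
    dy = p_next[1] - p[1]
    modulus = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)
    return modulus, direction

m, a = time_domain_vector((0.0, 0.0), (3.0, 4.0))   # a 3-4-5 step
```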
Step3: Feature classification
Because behavioral features are highly random, they are difficult to classify with template matching or minimum-distance methods. The present invention adopts SVM classification for the behavioral features. SVM is a learning method developed on the basis of statistical learning theory; it effectively handles small-sample, model-selection, and nonlinear problems, and generalizes well. The kernel function is the key to the SVM algorithm, and the present invention selects the radial basis function as the SVM kernel: K(x, x') = exp(-||x - x'||^2 / (2*sigma^2)).
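The radial basis function kernel can be sketched in the common gamma parameterization K(x1, x2) = exp(-gamma * ||x1 - x2||^2), with gamma = 1/(2*sigma^2) in the usual sigma form; the gamma value below is an illustrative choice:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.5):
    """Radial basis function kernel K(x1, x2) = exp(-gamma * ||x1 - x2||^2).
    Returns 1.0 for identical inputs and decays toward 0 with distance."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return float(np.exp(-gamma * np.dot(d, d)))

k_same = rbf_kernel([1.0, 2.0], [1.0, 2.0])
k_far = rbf_kernel([1.0, 2.0], [10.0, 2.0])
```

In practice the extracted trajectory features would be stacked into vectors and fed to an SVM trained with this kernel.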
The process of the suspicious-behavior detection method of the present invention is as follows:
In the training stage, as many positive and negative samples as possible are first collected; positive samples are videos containing behaviors such as loitering, running, crawling, falling, or stooping, while negative samples are videos of normal behaviors such as walking, gathering, or chatting. Trajectory models are then built and trajectory features extracted with the methods of the present invention, and finally the SVM method is used for training to obtain a classifier.
In the recognition stage, a trajectory model is first built and trajectory features extracted for each human target in the real-time video; the trajectory features are then fed to the classifier obtained in the training stage, which finally decides whether suspicious behavior exists in the video under test. If suspicious behavior exists, an audible and visual alarm is raised.
The present invention proposes a suspicious-behavior detection method based on video analysis. Using computer-assisted means and video-analysis technology, it intelligently detects suspicious behavior in surveillance video so that it can be discovered and warned of in time, effectively reducing the threat that suspicious behavior poses to the monitored site. The method is also easy to install and convenient to use, and its economic and social benefits are significant.
Description of drawings
Fig. 1 is the algorithm flow chart of the present invention;
Fig. 2 is a space-time discrete curve.
Embodiment
The suspicious-behavior detection method of the present invention comprises three main steps: human-target detection, trajectory modeling, and trajectory-feature extraction and classification. The specific flow is as follows:
One, human-target detection
The human-target detection method based on interval-frame differencing and contour matching is adopted, with the following steps:
Step1: Detect moving targets with the interval-frame differencing method: choose three frames I0, It, I2t separated by an interval of t frames and compute the frame-difference images E1, E2, where t = 3 is taken.
Step2: Determine the adaptive threshold T: compute the mean of the frame-difference image and multiply it by a weighting coefficient to obtain the adaptive threshold, where M*N is the video image size and beta is the weighting coefficient; here beta = 10.
Step3: Threshold segmentation yields the binary image MR; pixels whose value in MR is 1 are marked as motion target points.
Step4: Object-block marking. First apply a 3 x 3 median filter to smooth the object blocks and remove noise; then apply the opening operation of mathematical morphology to fill the "holes" in the object blocks and merge adjacent blocks; finally search for and mark targets with 8-adjacency connectivity.
Step5: Human-target discrimination. Contour matching is used to reject interfering targets: first detect the contour of the target; then represent it with normalized Fourier descriptors; finally perform contour matching by Euclidean distance to judge the target's attribute.
Two, trajectory modeling
The trajectory modeling method based on the time-window quadruple is adopted. The quadruple of a track is denoted TR = {i, f, P(x, y), d(u)}, where i is the target sequence number, f is the video frame number, P(x, y) is the target centroid coordinate, and d(u) is the target contour descriptor.
Feature matching of human targets uses the joint matching method combining the spatial-domain constraint with frequency-domain and temporal features, as follows:
Step1: Spatial-domain constraint
Targets that clearly cannot be the same person are excluded by the spatial-domain constraint: an overlapping point (x, y) must satisfy the two conditions MR1(x, y) = 1 and MR0(x, y) = 1, where MR1 is the object block of the current frame and MR0 is the object block of the previous frame. If the two targets share an overlapping point, they may be the same target and matching continues to the next step; otherwise, the two targets are judged not to match and the matching process stops.
Step2: Frequency-domain feature matching
The Fourier-descriptor features already obtained are used for frequency-domain matching. Suppose the Fourier descriptor of the current-frame target is d1(u) and that of the previous-frame target is d2(u); as described above, the frequency-domain difference d between the targets is their shape difference. A fixed threshold D2 is set with D2 < D; here D2 = 0.013. If d < D2, the two targets may be the same and matching continues to the next step; otherwise, the two targets are judged not to match and the matching process stops.
Step3: Temporal feature matching
Gradient-vector features are used for temporal matching. The gradient-vector feature of a target is obtained as follows: compute the gradient of each pixel with a gradient operator, yielding a gradient magnitude and a gradient direction; divide the gradient-direction range [-pi/2, pi/2] evenly into 9 intervals (denoted Area_k, 1 <= k <= 9); characterize each pixel by a 9-dimensional gradient vector over these intervals; and take the average gradient vector over the object block as the target's gradient feature. Suppose the gradient feature of the current-frame target is V1 and that of the previous-frame target is V2; the temporal difference v between the targets is the difference between V1 and V2. A fixed threshold D3 = 0.14 is set. If v < D3, the two targets are judged to be the same; otherwise, they are judged not to match.
After the quadruple of each target in each frame has been obtained, the time-window method yields the time-window quadruple, denoted W_TR, where t0 is the initial video frame number and t_d is the time-window width, i.e. the interval frame count.
Three, trajectory-feature extraction and classification
After the time-window quadruple of each human target has been obtained, trajectory features can be extracted using the scalar/vector trajectory-feature extraction method based on the space-time discrete curve, with the following steps:
Step1: space-time discrete curve scalar feature extraction
The scalar features extracted from the space-time discrete curve are: generalized curvature, space-time length, and the number of space-time inflection points, as follows:
(1) Generalized curvature
First compute, for each discrete point on the space-time curve, the angle it forms with its two adjacent points as that point's angle feature; then take the mean of all the discrete-point angle features as the generalized curvature.
(2) Space-time length
The space-time length feature is replaced by the number of discrete points on the space-time curve.
(3) Space-time inflection count
A discrete point on the space-time curve whose angle feature is less than pi/2 is a space-time inflection point; the number of such points is the space-time inflection count.
Step2: space-time discrete curve vector feature extraction
For each discrete point on the space-time curve, two vector features are extracted: a spatial-domain vector and a time-domain vector.
(1) Spatial-domain vector
The spatial-domain vector is obtained as follows: first recover the human contour from the contour descriptor in the quadruple; then fit an ellipse to the contour; finally extract the ellipse's major-axis vector as the spatial-domain vector.
(2) Time-domain vector
The time-domain vector describes the motion of the human target while moving. For any point on the space-time curve, the modulus of its time-domain vector is the Euclidean distance from the point to the next point, and its direction is the angle between the segment toward the next point and the horizontal direction.
Step3: Feature classification. SVM classification is used for the behavioral features, with the radial basis function selected as the SVM kernel: K(x, x') = exp(-||x - x'||^2 / (2*sigma^2)).
Claims (5)
1. A suspicious-behavior detection method based on video analysis, comprising, on the basis of surveillance video collected by a camera: first performing human-target detection; then performing trajectory modeling on the human targets in different video frames; and finally performing trajectory-feature extraction and classification to judge whether suspicious behavior exists in the monitored scene; the specific flow is as follows:
(A) Human-target detection
A human-target detection method based on interval-frame differencing and contour matching is adopted, comprising the following steps Step1-Step5:
Step1: detect moving targets with the interval-frame differencing method: choose three frames I0, It, I2t separated by an interval of t frames and compute the frame-difference images E1, E2, where t is a positive integer measured in frames;
Step2: determine the adaptive threshold T: compute the mean of the frame-difference image and multiply it by a weighting coefficient to obtain the adaptive threshold, where M*N is the video image size and beta is the weighting coefficient;
Step3: threshold segmentation yields the binary image MR; pixels whose value in MR is 1 are marked as motion target points;
Step4: object-block marking; first apply a 3 x 3 median filter to smooth the object blocks and remove noise; then apply the opening operation of mathematical morphology to fill the "holes" in the object blocks and merge adjacent blocks; finally search for and mark targets with 8-adjacency connectivity;
Step5: human-target discrimination; contour matching is used to reject interfering targets; first detect the contour of the target; then represent it with normalized Fourier descriptors; finally perform contour matching by Euclidean distance to judge the target's attribute;
(B) Trajectory modeling
The trajectory modeling method based on the time-window quadruple is adopted, the quadruple of a track being denoted TR = {i, f, P(x, y), d(u)}, where i is the target sequence number, f is the video frame number, P(x, y) is the target centroid coordinate, d(u) is the target contour descriptor, and K is the total number of contour points;
for the first video frame, each human target is numbered in turn; for each human target appearing in subsequent frames, feature matching is first performed against each human target of the previous frame; if the match succeeds, the target receives the sequence number of the matching target in the previous frame; otherwise, it is given a new sequence number;
wherein, between two successive frames, matching of human targets uses the joint feature-matching method combining the spatial-domain constraint with frequency-domain and temporal features: the spatial-domain constraint is applied first, matching continuing to the next step if the constraint is satisfied, otherwise the matching process stops and the two targets are judged not to match; frequency-domain feature matching is then performed, matching continuing to the next step if the frequency-domain condition is satisfied, otherwise the matching process stops and the two targets are judged not to match; finally, temporal feature matching is performed, the two targets being judged matched if the temporal condition is satisfied and not matched otherwise;
After the four-tuple of each target in each video frame is obtained, the time-window method yields the time-window four-tuple, denoted W_TR, where t0 is the initial video frame number and td is the number of frames in the time window;
(C) Trajectory feature extraction and classification
The feature extraction based on the marked vector trajectory and the SVM classification of the space-time discrete curve comprise the following steps, Step 6 to Step 8:
Step 6: scalar feature extraction on the space-time discrete curve
The scalar features extracted from the space-time discrete curve are the generalized curvature, the space-time length, and the number of space-time inflection points, detailed as follows:
(1) Generalized curvature
First, for each discrete point on the space-time discrete curve, compute the angle it forms with its two adjacent points as the angle feature of that point; then take the mean of the angle features of all discrete points as the generalized curvature;
(2) Space-time length
The space-time length is the number of discrete points on the space-time discrete curve;
(3) Space-time inflection number
The space-time inflection number is the number of space-time inflection points on the space-time discrete curve;
Step 7: vector feature extraction on the space-time discrete curve
For each discrete point on the space-time discrete curve, two vector features are extracted, one in the spatial domain and one in the time domain;
(4) Spatial-domain vector
The spatial-domain vector is obtained as follows: first, recover the human body contour shape from the contour descriptor of the four-tuple; then, fit an ellipse to obtain the human body ellipse; finally, extract the major-axis vector of the ellipse as the spatial-domain vector;
(5) Time-domain vector
For any point on the space-time discrete curve, the modulus of its time-domain vector is the Euclidean distance between the point and the next point, and its direction is the angle between the vector from the point to the next point and the horizontal direction;
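The time-domain vector of item (5) can be sketched directly from its definition; the last curve point has no successor and is skipped. The function name and the (x, y) input layout are assumptions.

```python
import numpy as np

def temporal_vectors(points):
    """Time-domain vector (modulus, direction) at each curve point.

    The modulus is the Euclidean distance from a point to the next
    point; the direction is the angle between that displacement and
    the horizontal axis. `points` is an (N, 2) array of positions.
    """
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[:-1]                          # displacement to next point
    moduli = np.linalg.norm(diffs, axis=1)              # vector modulus
    directions = np.arctan2(diffs[:, 1], diffs[:, 0])   # angle vs. horizontal
    return moduli, directions
```

A 3-4-5 step gives modulus 5; a purely vertical step gives direction π/2.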
Step 8: feature classification
The extracted trajectory features are fed into the SVM classifier obtained in the training stage, which decides whether suspicious behavior is present in the monitored scene.
2. The suspicious behavior detection method based on video analysis according to claim 1, characterized in that the interfering targets are rejected by contour matching with the following steps:
First, detect the contour of the target: in the binary image MR of a single frame, a point (x, y) is taken as a contour point if it satisfies
Condition 1: (x, y) belongs to MR and at least one of its horizontal neighbours (x − 1, y), (x + 1, y) does not belong to MR, or (x, y) belongs to MR and at least one of its vertical neighbours (x, y − 1), (x, y + 1) does not belong to MR;
Then, represent the contour of the target with the normalized Fourier descriptor: for the n-th contour point with coordinates (x, y), write X[n] = x and Y[n] = y, and compute the Fourier descriptor
d(u) = (1/K) Σ_{n=0}^{K−1} (X[n] + jY[n]) e^{−j2πnu/K},
where K is the total number of contour points; the descriptor is normalized by discarding the coefficient d(0) and dividing the remaining coefficients by |d(1)|;
Finally, carry out the contour pairing with the Euclidean distance and judge the target attribute: let d1(u) be the Fourier descriptor of the target to be identified and d2(u) the Fourier descriptor of a human body target; their shape difference d is the Euclidean distance between d1(u) and d2(u). A fixed threshold D is set; if d < D, the target is judged to be a human body target; otherwise it is judged to be an interfering target and is rejected.
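The descriptor-and-distance test of claim 2 can be sketched with NumPy's FFT. The exact normalization in the patent is given only as an image; the standard form used here (drop d(0) for translation invariance, divide by |d(1)| for scale invariance) is an assumption, as are the function names and the `n_coeffs` truncation.

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """Normalized Fourier descriptor of a closed contour.

    `contour` is a (K, 2) array of boundary points. The complex signal
    X[n] + j*Y[n] is transformed with the DFT; d(0) is discarded and
    the magnitudes are divided by |d(1)|.
    """
    pts = np.asarray(contour, dtype=float)
    s = pts[:, 0] + 1j * pts[:, 1]
    d = np.fft.fft(s) / len(s)
    d = d[1:n_coeffs + 1]            # drop d(0): translation invariance
    return np.abs(d) / np.abs(d[0])  # divide by |d(1)|: scale invariance

def contour_distance(d1, d2):
    """Euclidean shape difference between two descriptors."""
    return float(np.linalg.norm(d1 - d2))
```

A scaled, translated copy of a contour yields the same descriptor, so its distance to the original is essentially zero and it passes any threshold D.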
3. The suspicious behavior detection method based on video analysis according to claim 1, characterized in that the human body target feature matching method jointly combining the spatial-domain constraint with the time- and frequency-domain features comprises the following concrete steps:
Step 3.1: spatial-domain constraint
The spatial-domain constraint separates targets that clearly cannot belong to the same human body. Suppose (x, y) is an overlapping point; in the binary images of the two consecutive frames, (x, y) must satisfy two conditions: (x, y) belongs to MR1, and (x, y) belongs to MR0, where MR1 denotes the current-frame object block and MR0 the previous-frame object block;
If the human body targets of the two frames share an overlapping point, the two targets may be the same target and matching continues to the next step; otherwise the two targets do not match and the matching process stops;
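The overlap test of Step 3.1 is a single logical AND over the two binary object blocks; the sketch below assumes the blocks are boolean masks of the same size, and the function name is ours.

```python
import numpy as np

def blocks_overlap(mr1, mr0):
    """Spatial-domain constraint: True if at least one point (x, y)
    belongs to both the current-frame block MR1 and the
    previous-frame block MR0.
    """
    return bool(np.logical_and(mr1, mr0).any())
```

Two blocks that share even a single pixel pass the constraint; disjoint blocks terminate the matching process immediately.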
Step 3.2: frequency-domain feature matching
The Fourier descriptor features already obtained are used for the frequency-domain matching. Let d1(u) be the Fourier descriptor of the current-frame target and d2(u) that of the previous-frame target; following the description above, the frequency-domain feature difference d between the two targets is the Euclidean distance between d1(u) and d2(u). A fixed threshold D2 is set; if d < D2, the human body targets of the two frames may be the same target and matching continues to the next step; otherwise the two targets do not match and the matching process stops;
Step 3.3: time-domain feature matching
The gradient vector feature is used for the time-domain matching. The gradient vector feature of a target is obtained as follows: compute the gradient of each pixel with a gradient operator; the gradient modulus is the magnitude of the gradient vector, and the gradient direction is the arctangent of the vertical gradient component over the horizontal one. The direction range [−π/2, π/2] is divided evenly into 9 intervals, denoted Area_k, 1 ≤ k ≤ 9, and the 9-dimensional gradient vector feature of a pixel records its gradient contribution in each interval. The average gradient vector feature of an object block is the mean of the pixel features over the block, whose size is W × H, with W the width and H the height of the object block;
Let V1 be the gradient vector feature of the current-frame target and V2 that of the previous-frame target; the time-domain feature difference v between the two targets is the distance between V1 and V2. A fixed threshold D3 is set; if v < D3, the human body targets of the two frames are the same target; otherwise the two targets do not match.
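The averaged 9-bin gradient vector of Step 3.3 can be sketched as follows. The patent's gradient operator is given only as an image, so central differences via `np.gradient` stand in for it as an assumption; the function names are also ours.

```python
import numpy as np

def mean_gradient_vector(block):
    """Average 9-bin gradient vector feature of an object block.

    `block` is a 2-D grayscale patch. arctan(gy/gx) maps every
    direction into [-pi/2, pi/2]; that range is split into 9 equal
    intervals and each pixel adds its gradient modulus to the
    interval containing its direction. The histogram is averaged
    over the W*H pixels of the block.
    """
    f = np.asarray(block, dtype=float)
    gy, gx = np.gradient(f)                    # central-difference gradients
    mag = np.hypot(gx, gy)                     # gradient modulus
    ang = np.arctan(np.divide(gy, gx, out=np.zeros_like(gy), where=gx != 0))
    bins = np.minimum(((ang + np.pi / 2) / (np.pi / 9)).astype(int), 8)
    hist = np.zeros(9)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist / f.size                       # average over W*H pixels

def temporal_difference(v1, v2):
    """Time-domain feature difference between two mean gradient vectors."""
    return float(np.linalg.norm(v1 - v2))
```

Identical blocks yield identical 9-dimensional features, so their time-domain difference is zero and they always fall below D3.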
4. The suspicious behavior detection method based on video analysis according to claim 1, characterized in that the SVM classifier is trained as follows: first, select positive and negative samples, the positive samples being videos that contain loitering, running, crawling, falling, or forward-bending body behavior and the negative samples being videos that contain normal walking, gathering, or chatting behavior; then, build the trajectory model and extract the trajectory features; finally, train with the SVM method to obtain the classifier.
5. The suspicious behavior detection method based on video analysis according to claim 1, characterized in that, on the space-time discrete curve, a discrete point is a space-time inflection point when its angle feature is less than π/2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201210108381 CN102663452B (en) | 2012-04-14 | 2012-04-14 | Suspicious act detecting method based on video analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102663452A CN102663452A (en) | 2012-09-12 |
CN102663452B true CN102663452B (en) | 2013-11-06 |
Family
ID=46772935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201210108381 Expired - Fee Related CN102663452B (en) | 2012-04-14 | 2012-04-14 | Suspicious act detecting method based on video analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102663452B (en) |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517042B (en) * | 2013-10-17 | 2016-06-29 | 吉林大学 | A kind of nursing house old man's hazardous act monitoring method |
CN103751998A (en) * | 2013-12-27 | 2014-04-30 | 电子科技大学 | Intelligent efficient crawl training system and method |
CN104123007B (en) * | 2014-07-29 | 2017-01-11 | 电子科技大学 | Multidimensional weighted 3D recognition method for dynamic gestures |
CN104268851A (en) * | 2014-09-05 | 2015-01-07 | 浙江捷尚视觉科技股份有限公司 | ATM self-service business hall behavior analysis method based on depth information |
CN104331700B (en) * | 2014-11-28 | 2017-08-15 | 吉林大学 | Group Activity recognition method based on track energy dissipation figure |
CN104751489A (en) * | 2015-04-09 | 2015-07-01 | 苏州阔地网络科技有限公司 | Grid-based relay tracking method and device in online class |
CN104866841B (en) * | 2015-06-05 | 2018-03-09 | 中国人民解放军国防科学技术大学 | A kind of human body target is run behavioral value method |
CN105096342A (en) * | 2015-08-11 | 2015-11-25 | 杭州景联文科技有限公司 | Intrusion detection algorithm based on Fourier descriptor and histogram of oriented gradient |
CN105512606B (en) * | 2015-11-24 | 2018-12-21 | 北京航空航天大学 | Dynamic scene classification method and device based on AR model power spectrum |
CN105631427A (en) * | 2015-12-29 | 2016-06-01 | 北京旷视科技有限公司 | Suspicious personnel detection method and system |
CN106210635A (en) * | 2016-07-18 | 2016-12-07 | 四川君逸数码科技股份有限公司 | A kind of wisdom gold eyeball identification is moved through method and apparatus of reporting to the police |
CN107784769B (en) * | 2016-08-26 | 2020-07-31 | 杭州海康威视系统技术有限公司 | Alarm method, device and system |
CN106778678A (en) * | 2016-12-29 | 2017-05-31 | 中国人民解放军火箭军工程大学 | A kind of Human bodys' response system |
CN106874849A (en) * | 2017-01-09 | 2017-06-20 | 北京航空航天大学 | Flying bird detection method and device based on video |
CN109961588A (en) * | 2017-12-26 | 2019-07-02 | 天地融科技股份有限公司 | A kind of monitoring system |
CN108898042B (en) * | 2017-12-27 | 2021-10-22 | 浩云科技股份有限公司 | Method for detecting abnormal user behavior in ATM cabin |
CN110119656A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations |
CN110399437A (en) * | 2018-04-23 | 2019-11-01 | 北京京东尚科信息技术有限公司 | Behavior analysis method and device, electronic equipment, storage medium |
CN108572734A (en) * | 2018-04-23 | 2018-09-25 | 哈尔滨拓博科技有限公司 | A kind of gestural control system based on infrared laser associated image |
CN108564129B (en) * | 2018-04-24 | 2020-09-08 | 电子科技大学 | Trajectory data classification method based on generation countermeasure network |
CN108614896A (en) * | 2018-05-10 | 2018-10-02 | 济南浪潮高新科技投资发展有限公司 | Bank Hall client's moving-wire track describing system based on deep learning and method |
CN108964998B (en) * | 2018-07-06 | 2021-10-15 | 北京建筑大学 | Method and device for detecting singularity of network entity behavior |
CN109113520A (en) * | 2018-08-16 | 2019-01-01 | 孙春兰 | Sliding window based on PVC framework |
CN109035651B (en) * | 2018-08-16 | 2020-06-19 | 宇宸江苏建筑工程有限公司 | Method for improving safety of residential environment |
CN109344790A (en) * | 2018-10-16 | 2019-02-15 | 浩云科技股份有限公司 | A kind of human body behavior analysis method and system based on posture analysis |
CN109511090A (en) * | 2018-10-17 | 2019-03-22 | 陆浩洁 | A kind of interactive mode tracing and positioning anticipation system |
CN109523571B (en) * | 2018-10-25 | 2020-11-17 | 广州番禺职业技术学院 | Non-feature matching motion trajectory optimization method and system |
CN109684916B (en) * | 2018-11-13 | 2020-01-07 | 恒睿(重庆)人工智能技术研究院有限公司 | Method, system, equipment and storage medium for detecting data abnormity based on path track |
CN109215207A (en) * | 2018-11-13 | 2019-01-15 | 武汉极易云创信息技术有限公司 | A kind of gate inhibition's safety monitoring method and system based on network monitoring |
CN109711344B (en) * | 2018-12-27 | 2023-05-26 | 东北大学 | Front-end intelligent specific abnormal behavior detection method |
CN109858365B (en) * | 2018-12-28 | 2021-03-05 | 深圳云天励飞技术有限公司 | Special crowd gathering behavior analysis method and device and electronic equipment |
CN110222640B (en) * | 2019-06-05 | 2022-02-18 | 浙江大华技术股份有限公司 | Method, device and method for identifying suspect in monitoring site and storage medium |
CN110276398A (en) * | 2019-06-21 | 2019-09-24 | 北京滴普科技有限公司 | A kind of video abnormal behaviour automatic judging method |
US11055518B2 (en) * | 2019-08-05 | 2021-07-06 | Sensormatic Electronics, LLC | Methods and systems for monitoring potential losses in a retail environment |
CN111046797A (en) * | 2019-12-12 | 2020-04-21 | 天地伟业技术有限公司 | Oil pipeline warning method based on personnel and vehicle behavior analysis |
CN114530018B (en) * | 2022-04-24 | 2022-08-16 | 浙江华眼视觉科技有限公司 | Voice prompt method and device based on pickup trajectory analysis |
CN114821795B (en) * | 2022-05-05 | 2022-10-28 | 北京容联易通信息技术有限公司 | Personnel running detection and early warning method and system based on ReiD technology |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7382277B2 (en) * | 2003-02-12 | 2008-06-03 | Edward D. Ioli Trust | System for tracking suspicious vehicular activity |
JP2007138811A (en) * | 2005-11-17 | 2007-06-07 | Toyota Motor Corp | Exhaust pipe for internal combustion engine |
WO2007138811A1 (en) * | 2006-05-31 | 2007-12-06 | Nec Corporation | Device and method for detecting suspicious activity, program, and recording medium |
CN101098465A (en) * | 2007-07-20 | 2008-01-02 | 哈尔滨工程大学 | Moving object detecting and tracing method in video monitor |
CN101859436B (en) * | 2010-06-09 | 2011-12-14 | 王巍 | Large-amplitude regular movement background intelligent analysis and control system |
- 2012-04-14 CN CN 201210108381 patent/CN102663452B/en not_active Expired - Fee Related
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102663452B (en) | Suspicious act detecting method based on video analysis | |
CN102163290B (en) | Method for modeling abnormal events in multi-visual angle video monitoring based on temporal-spatial correlation information | |
Topkaya et al. | Counting people by clustering person detector outputs | |
CN103116987B (en) | Traffic flow statistic and violation detection method based on surveillance video processing | |
CN108549846B (en) | Pedestrian detection and statistics method combining motion characteristics and head-shoulder structure | |
CN102799873B (en) | Human body abnormal behavior recognition method | |
CN104866841B (en) | A kind of human body target is run behavioral value method | |
Sugimura et al. | Using individuality to track individuals: Clustering individual trajectories in crowds using local appearance and frequency trait | |
CN103902966B (en) | Video interactive affair analytical method and device based on sequence space-time cube feature | |
CN109829382B (en) | Abnormal target early warning tracking system and method based on intelligent behavior characteristic analysis | |
CN105303191A (en) | Method and apparatus for counting pedestrians in foresight monitoring scene | |
CN103942533A (en) | Urban traffic illegal behavior detection method based on video monitoring system | |
CN108985204A (en) | Pedestrian detection tracking and device | |
CN106682573B (en) | A kind of pedestrian tracting method of single camera | |
Cui et al. | Abnormal event detection in traffic video surveillance based on local features | |
CN104992453A (en) | Target tracking method under complicated background based on extreme learning machine | |
KR101472674B1 (en) | Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images | |
CN105809954A (en) | Traffic event detection method and system | |
CN111738218A (en) | Human body abnormal behavior recognition system and method | |
CN102116876B (en) | Method for detecting spatial point target space-base on basis of track cataloguing model | |
CN103106414A (en) | Detecting method of passer-bys in intelligent video surveillance | |
CN104123714A (en) | Optimal target detection scale generation method in people flow statistics | |
Zhang et al. | Anomaly detection and localization in crowded scenes by motion-field shape description and similarity-based statistical learning | |
Ivanov et al. | Towards generic detection of unusual events in video surveillance | |
KR20150002040A (en) | The way of Real-time Pedestrian Recognition and Tracking using Kalman Filter and Clustering Algorithm based on Cascade Method by HOG |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20131106 |