CN108596028A - Abnormal behavior detection algorithm based on video recording - Google Patents

Abnormal behavior detection algorithm based on video recording

Info

Publication number
CN108596028A
CN108596028A
Authority
CN
China
Prior art keywords
point
acceleration
video
crowd
particle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810224910.6A
Other languages
Chinese (zh)
Other versions
CN108596028B (en)
Inventor
宋耀莲
马丽华
徐文林
王慧东
武双新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201810224910.6A
Publication of CN108596028A
Application granted
Publication of CN108596028B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an abnormal behavior detection algorithm based on video recordings, and belongs to the field of intelligent video detection and early warning. The invention first performs foreground extraction on the video and divides the crowd into a grid, computes a velocity matrix with the optical flow method and from it an acceleration matrix, and determines the center monitoring point of the crowd anomaly from the magnitude and direction of the acceleration. Machine learning is then used to extract the facial features of the faces in the crowd. Finally, the identified center monitoring point is checked against a threshold trained by machine learning, and it is judged whether the crowd at that detection point is behaving abnormally. If a violent attack, a mass disturbance or a similar dangerous event is detected in the crowd, the video surveillance system raises an alarm. The invention not only overcomes the difficulty of detecting complicated abnormal behavior, but can also predict the occurrence of abnormal behavior to a certain extent.

Description

Abnormal behavior detection algorithm based on video recording
Technical field
The present invention relates to an abnormal behavior detection algorithm based on video recordings, and belongs to the technical field of intelligent video detection and early warning.
Background technology
With the continuous development of society, many places where crowds gather are prone to safety problems; violent incidents, group affrays, robberies and similar events are all existing security risks. Video monitoring has therefore become essential equipment for ensuring the safety of public places. The presence of a video monitoring system reduces the probability of such events to a certain extent, but it still consumes manpower and material resources, since staff must watch the video continuously in order to discover an incident scene.
Invention content
The technical problem to be solved by the present invention is to provide an abnormal behavior detection algorithm based on video recordings: a reasonable screening scheme with good real-time performance and high accuracy, which not only overcomes the difficulty of detection caused by wide monitoring ranges and complicated abnormal behavior patterns, but can also predict the occurrence of abnormal behavior to a certain extent, serve as an alarm, and improve the applicability and effect of monitoring.
The technical solution adopted by the present invention is: an abnormal behavior detection algorithm based on video recordings, comprising the following steps:
Step1: perform foreground extraction on the provided video recording;
Step2: apply gridding to the video recording processed in Step1;
Step3: extract feature points from the video recording processed in Step2 and track them with the optical flow method to obtain a velocity matrix;
Step4: obtain the acceleration magnitude a and direction angle β, and from them the acceleration matrix;
Step5: by analyzing the acceleration direction angle, obtain the angle β' formed by the change of acceleration direction of the same particle at different moments in two adjacent frames, and obtain from the distribution of β' the proportion m of particle points in each region; compare the obtained acceleration a and proportion m with the preset a* and m*. If |a| > |a*| and m < m*, the position can be judged to be the abnormal-behavior spot, point O1, which serves as the center monitoring point with coordinates (x_o1, y_o1). Here a* is the preset maximum acceleration of a normal crowd and m* is a preset comparison parameter; m < m* means the crowd has no single definite acceleration direction and is in a disordered state, i.e. abnormal behavior is present;
Step6: establish a model by machine learning; the model is trained on classical cases and a large amount of data to obtain the crowd change-rate standard value ρ*;
Step7: first define the expression change rate ρ as
ρ = n/N
where n is the number of people in the crowd whose facial-feature change exceeds the normal upper limit and N is the total number of extracted targets; then perform facial feature extraction on the gridded video recording from Step2 according to classical methods, feed the extracted facial expressions into the model established in Step6 to obtain the expression change rate ρ, and compare ρ with ρ*; if ρ exceeds ρ*, the region in the video from which the facial expressions were extracted is considered abnormal at point O2 with coordinates (x_o2, y_o2);
Step8: compare the coordinates of O2 (x_o2, y_o2) with those of O1 (x_o1, y_o1); within a certain error tolerance the two points can be considered to denote the same spot, which is then confirmed as the center monitoring point, i.e. the anomalous-event occurrence point O.
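The threshold test of Step7 can be sketched as follows. The ratio ρ = n/N follows the definitions of n and N given in the text; the function name, the per-person change scores, the upper limit and the ρ* value are illustrative assumptions, not values fixed by the patent:

```python
def expression_anomaly(change_scores, upper_limit, rho_star):
    """Step7 sketch: rho = n / N, where n counts the people whose
    facial-feature change exceeds the normal upper limit and N is the
    total number of extracted targets.  The region is flagged abnormal
    when rho exceeds rho*."""
    n = sum(1 for s in change_scores if s > upper_limit)
    rho = n / len(change_scores)
    return rho, rho > rho_star

# 3 of 5 people show a large expression change -> rho = 0.6 > rho* = 0.4
rho, abnormal = expression_anomaly([0.2, 0.9, 0.8, 0.1, 0.95], 0.7, 0.4)
print(rho, abnormal)  # 0.6 True
```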
Specifically, the step Step1 comprises:
extracting the foreground with the frame difference method, so that target objects and the background are separated;
the frame difference method operates on the image sequence of adjacent frames: the absolute value of the gray-level difference of the image pixels is taken and compared with a threshold to obtain the moving-foreground information, thereby realizing the foreground extraction; the gray-level difference of adjacent frames is computed as
D(x, y, i) = |I(x, y, i+1) - I(x, y, i)| (1-1)
where D(x, y, i) is the difference value of the image, I(x, y, i+1) is the gray value of frame i+1 and I(x, y, i) is the gray value of frame i; if the difference value of a pixel exceeds a preset threshold, the pixel belongs to the foreground, otherwise it belongs to the background.
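Equation (1-1) and its thresholding rule can be sketched as a minimal NumPy illustration; the array size and the threshold value are arbitrary choices for the example, not values prescribed by the patent:

```python
import numpy as np

def frame_difference_mask(frame_prev, frame_next, threshold):
    """Eq. (1-1): D(x, y, i) = |I(x, y, i+1) - I(x, y, i)|.
    Pixels whose absolute gray-level difference exceeds the threshold
    are marked as foreground (1); the rest are background (0)."""
    d = np.abs(frame_next.astype(np.int16) - frame_prev.astype(np.int16))
    return (d > threshold).astype(np.uint8)

# Toy example: a single bright "object" moves one pixel to the right.
prev_f = np.zeros((4, 4), dtype=np.uint8)
next_f = np.zeros((4, 4), dtype=np.uint8)
prev_f[1, 1] = 200
next_f[1, 2] = 200
mask = frame_difference_mask(prev_f, next_f, threshold=30)
print(mask.sum())  # 2: the object's old and new positions both change
```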
Specifically, the gridding of the video recording in step Step2 comprises:
Step2.1: extract the frame picture of the video at time t, denoted F_t;
Step2.2: divide the entire frame picture into a series of sub-boxes according to p1 × p2.
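Step2.2's division into p1 × p2 sub-boxes can be sketched as below; the sketch assumes, for simplicity, that the frame dimensions are divisible by p1 and p2 (the patent does not specify how edge remainders are handled):

```python
import numpy as np

def grid_cells(frame, p1, p2):
    """Step2.2 sketch: split a frame F_t into p1 x p2 sub-boxes,
    returned in row-major order."""
    h, w = frame.shape
    ch, cw = h // p1, w // p2  # cell height and width
    return [frame[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            for i in range(p1) for j in range(p2)]

frame = np.arange(36).reshape(6, 6)
cells = grid_cells(frame, 2, 3)
print(len(cells), cells[0].shape)  # 6 cells, each 3 rows x 2 columns
```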
Specifically, the step Step3 comprises:
Step3.1: compute the speed v_t.
On the basis of the small grids divided in Step2, choose a moving foreground object as the feature-extraction target. Assume the target A is a particle whose pixel coordinates at time t are A_t = (x_t, y_t); after a time τ, its pixel coordinates in the frame picture F_{t+τ} of time t+τ are A_{t+τ} = (x_{t+τ}, y_{t+τ}).
Step3.1.1: from the coordinates of the particle in the two adjacent frames, the horizontal and vertical displacements of particle point A_t are given by (1-2) and (1-3):
Δx = x_{t+τ} - x_t (1-2)
Δy = y_{t+τ} - y_t (1-3)
By vector calculation, the speed of point A_t at time t is
v_t = √(Δx² + Δy²)/τ (1-4)
and the speed v_t at that moment can be computed with (1-4).
Step3.1.2: because the time interval between adjacent frames is very short, τ can be treated as a negligible unit step, so (1-4) simplifies to the form
v_t = √(Δx² + Δy²) (1-5)
Step3.2: compute the speed direction θ_t.
Using the coordinates of Step3.1.1 and the displacements (1-2) and (1-3), the direction θ_t of the speed, expressed as the angle between the velocity vector and the horizontal direction, is
θ_t = arctan(Δy/Δx) (1-6)
with θ_t taking values in [-π, π].
Step3.3: process every feature point in the video picture frame one by one according to Step3.1 and Step3.2 to obtain the corresponding speed-magnitude matrix and speed-direction matrix.
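The per-particle speed and direction of Step3.1 and Step3.2 can be sketched as follows. Note one assumption: `atan2` is used in place of a bare arctangent so that the direction actually covers the stated [-π, π] range (a plain arctan(Δy/Δx) would only cover (-π/2, π/2)), and τ is treated as a unit step as in (1-5):

```python
import math

def velocity(p_t, p_t_tau):
    """Sketch of eqs. (1-2), (1-3), (1-5), (1-6): per-particle speed and
    direction between two frames, with the inter-frame interval tau
    absorbed into the magnitude (tau treated as a unit step)."""
    dx = p_t_tau[0] - p_t[0]      # (1-2) horizontal displacement
    dy = p_t_tau[1] - p_t[1]      # (1-3) vertical displacement
    v = math.hypot(dx, dy)        # (1-5) speed magnitude
    theta = math.atan2(dy, dx)    # (1-6), full [-pi, pi] range
    return v, theta

# Particle moves from (0, 0) to (3, 4) between two frames.
v, theta = velocity((0.0, 0.0), (3.0, 4.0))
print(v)  # 5.0
```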
Specifically, the steps for obtaining the acceleration matrix in Step4 are as follows:
Step4.1: compute the acceleration magnitude a.
According to formula (1-4) in Step3, the speed of point A_t at time t is v_t; after the time τ, the speed of point A_{t+τ} at time t+τ, again computed with (1-4), is v_{t+τ}. The change of speed over the time τ is
Δv = v_{t+τ} - v_t (1-7)
According to the definition of acceleration, the acceleration magnitude is
a = Δv/τ (1-8)
Since τ can again be neglected, the acceleration can be approximated as a = Δv.
By the parallelogram rule, the components of the velocity change are obtained as
Δv_x = v_{t+τ}·cos θ_{t+τ} - v_t·cos θ_t (1-9)
and
Δv_y = v_{t+τ}·sin θ_{t+τ} - v_t·sin θ_t (1-10)
Step4.2: compute the acceleration direction β:
β = arctan(Δv_y/Δv_x) (1-11)
with β taking values in [-π, π].
Step4.3: process each particle point in the video picture frame one by one according to Step4.1 and Step4.2 to obtain the corresponding acceleration-magnitude matrix and acceleration-direction matrix.
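Step4 can be sketched per particle as below. Assumptions: τ is treated as a unit step (so a ≈ |Δv|), each velocity vector is decomposed into components before differencing as one way of realizing the parallelogram rule the text invokes, and `atan2` stands in for arctan(Δv_y/Δv_x) to cover the stated [-π, π] range:

```python
import math

def acceleration(v_t, theta_t, v_t_tau, theta_t_tau):
    """Sketch of eqs. (1-7)-(1-11) with tau as a unit step: decompose
    the two velocity vectors, difference them, and return the
    acceleration magnitude a and direction beta."""
    dvx = v_t_tau * math.cos(theta_t_tau) - v_t * math.cos(theta_t)
    dvy = v_t_tau * math.sin(theta_t_tau) - v_t * math.sin(theta_t)
    a = math.hypot(dvx, dvy)        # |delta v|, i.e. magnitude a
    beta = math.atan2(dvy, dvx)     # (1-11), in [-pi, pi]
    return a, beta

# Speed unchanged (3.0) but direction turns 90 degrees: |delta v| = 3*sqrt(2).
a, beta = acceleration(3.0, 0.0, 3.0, math.pi / 2)
print(round(a, 4))  # 4.2426
```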
Specifically, the Step5 comprises the following steps:
Step5.1: a* is the preset maximum acceleration of a normal crowd; when a exceeds a*, the crowd's acceleration is in an abnormal state, and at this point it is further judged whether the acceleration direction is abnormal;
Step5.2: β' is the angle formed by the change of acceleration direction of the same particle at different moments in two adjacent frames, and m is the proportion of particle points in each region obtained from the distribution of β'. The full circle of 2π is divided into four quadrants, and each quadrant is further divided into two regions; according to the acceleration-direction matrix of the particle points in the current frame, the ratio m_i of the number of particle points in each small region to the total number is counted. When m_i exceeds the preset m*, the flow direction of the majority of the crowd is consistent and the crowd can be regarded as in a normal state; otherwise it is regarded as in an abnormal state;
Step5.3: when both the acceleration magnitude and direction are in an abnormal state, the location point O1(x_o1, y_o1) can be determined as the center monitoring point.
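The region statistic of Step5.2 can be sketched as follows: the 2π circle is split into 4 quadrants × 2 regions = 8 equal sectors, and the largest sector's share of the particle points is returned for comparison with m*. The exact sector boundaries and the uniform-width choice are assumptions; the patent only fixes the 4 × 2 division:

```python
import math

def max_sector_ratio(betas):
    """Step5.2 sketch: count the particles whose acceleration-direction
    change angle falls in each of 8 equal sectors of [-pi, pi), and
    return the largest sector's share of the total."""
    counts = [0] * 8
    for b in betas:
        idx = int((b + math.pi) / (math.pi / 4)) % 8  # sector index 0..7
        counts[idx] += 1
    return max(counts) / len(betas)

# 8 of 10 particles turn in (almost) the same direction: orderly flow.
betas = [0.1] * 8 + [2.0, -2.0]
print(max_sector_ratio(betas))  # 0.8, compared against the preset m*
```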
Specifically, the Step8 comprises:
analyzing the two point coordinates O1(x_o1, y_o1) and O2(x_o2, y_o2) obtained in Step5 and Step7;
when the distance d between the two central points O1 and O2 satisfies d ≤ d*, the two center monitoring points can be considered the same position, d* being a preset upper threshold;
finally the central point O, with coordinates (x_o, y_o), is determined from O1 and O2;
from the above, the anomalous-event occurrence point O is determined.
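Step8's fusion of the two candidate centers can be sketched as below. One assumption is flagged in the code: the patent says O is determined from O1 and O2 but does not fix how, so the midpoint is used purely for illustration:

```python
import math

def fuse_centers(o1, o2, d_star):
    """Step8 sketch: if the motion-based center O1 and the
    expression-based center O2 lie within d* of each other, treat them
    as the same anomaly spot O.  Taking O as their midpoint is an
    assumption; the patent does not specify how O is derived."""
    d = math.dist(o1, o2)
    if d > d_star:
        return None  # the two cues disagree: no confirmed alarm point
    return ((o1[0] + o2[0]) / 2, (o1[1] + o2[1]) / 2)

print(fuse_centers((10.0, 10.0), (12.0, 11.0), d_star=5.0))  # (11.0, 10.5)
```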
The beneficial effects of the invention are:
1. The invention realizes an abnormal behavior detection algorithm based on video recordings which combines optical-flow computation of acceleration with machine-learning re-detection of the crowd to judge abnormal targets and realize crowd anomaly detection, improving accuracy.
2. To a large extent the invention can intelligently prompt staff to carry out early-warning handling, reducing the occurrence of public-security incidents.
Description of the drawings
Fig. 1 is the overall flow chart of the present invention;
Fig. 2 is the simple distribution map of crowd flow in Step5 of the present invention;
Fig. 3 is the modeling flow chart of the machine-learning abnormal behavior detection model of the present invention;
Fig. 4 is the flow chart of establishing the learning-model part in the machine-learning abnormal behavior detection model of the present invention.
Specific implementation modes
The invention is further described below with reference to the drawings and specific embodiments.
Embodiment 1: as shown in Figs. 1-4, an abnormal behavior detection algorithm based on video recordings comprises the following steps:
Step1: perform foreground extraction on the provided video recording;
Step2: apply gridding to the video recording processed in Step1;
Step3: extract feature points from the video recording processed in Step2 and track them with the optical flow method to obtain a velocity matrix;
Step4: obtain the acceleration magnitude a and direction angle β, and from them the acceleration matrix;
Step5: by analyzing the acceleration direction angle, obtain the angle β' formed by the change of acceleration direction of the same particle at different moments in two adjacent frames, and obtain from the distribution of β' the proportion m of particle points in each region; compare the obtained acceleration a and proportion m with the preset a* and m*. If |a| > |a*| and m < m*, the position can be judged to be the abnormal-behavior spot, point O1, which serves as the center monitoring point with coordinates (x_o1, y_o1). Here a* is the preset maximum acceleration of a normal crowd; m < m* means the crowd has no single definite acceleration direction and is in a disordered state, i.e. abnormal behavior is present.
m* is a comparison parameter set in advance, generally set to around 50% according to experience; what is compared with it is the in-region proportion m of the direction-change angle β' of the speed between the two adjacent frames;
Step6: establish a model by machine learning; the model is trained on classical cases and a large amount of data to obtain the crowd change-rate standard value ρ*;
Step7: first define the expression change rate ρ as ρ = n/N, where n is the number of people in the crowd whose facial-feature change exceeds the normal upper limit and N is the total number of extracted targets; then perform facial feature extraction on the gridded video recording from Step2 according to classical methods, feed the extracted facial expressions into the model established in Step6 to obtain the expression change rate ρ, and compare ρ with ρ*; if ρ exceeds ρ*, the region in the video from which the facial expressions were extracted is considered abnormal at point O2 with coordinates (x_o2, y_o2);
Step8: compare the coordinates of O2 (x_o2, y_o2) with those of O1 (x_o1, y_o1); within a certain error tolerance the two points can be considered to denote the same spot, which is then confirmed as the center monitoring point, i.e. the anomalous-event occurrence point O.
Further, the step Step1 comprises:
extracting the foreground with the frame difference method, so that target objects and the background are separated;
the frame difference method operates on the image sequence of adjacent frames: the absolute value of the gray-level difference of the image pixels is taken and compared with a threshold to obtain the moving-foreground information, thereby realizing the foreground extraction; the gray-level difference of adjacent frames is computed as
D(x, y, i) = |I(x, y, i+1) - I(x, y, i)| (1-1)
where D(x, y, i) is the difference value of the image, I(x, y, i+1) is the gray value of frame i+1 and I(x, y, i) is the gray value of frame i; if the difference value of a pixel exceeds a preset threshold, the pixel belongs to the foreground, otherwise it belongs to the background.
The frame difference method used in the present invention has low sensitivity to ambient light, so the influence of the environment on the result is reduced to a certain extent, and the contour of the foreground target can be extracted with it. Foreground extraction is the basis of crowd anomaly detection: only when target objects and the background are separated can the mesh generation of the video recording in the next step be carried out properly. The difference method also computes quickly between video frames.
Further, the gridding of the video recording in step Step2 comprises:
Step2.1: extract the frame picture of the video at time t, denoted F_t;
Step2.2: divide the entire frame picture into a series of sub-boxes according to p1 × p2.
Specifically, the step Step3 comprises:
Step3.1: compute the speed v_t.
On the basis of the small grids divided in Step2, choose a moving foreground object as the feature-extraction target. Assume the target A is a particle whose pixel coordinates at time t are A_t = (x_t, y_t); after a time τ, its pixel coordinates in the frame picture F_{t+τ} of time t+τ are A_{t+τ} = (x_{t+τ}, y_{t+τ}).
Step3.1.1: from the coordinates of the particle in the two adjacent frames, the horizontal and vertical displacements of particle point A_t are given by (1-2) and (1-3):
Δx = x_{t+τ} - x_t (1-2)
Δy = y_{t+τ} - y_t (1-3)
By vector calculation, the speed of point A_t at time t is
v_t = √(Δx² + Δy²)/τ (1-4)
and the speed v_t at that moment can be computed with (1-4).
Step3.1.2: because the time interval between adjacent frames is very short, τ can be treated as a negligible unit step, so (1-4) simplifies to the form
v_t = √(Δx² + Δy²) (1-5)
Step3.2: compute the speed direction θ_t.
Using the coordinates of Step3.1.1 and the displacements (1-2) and (1-3), the direction θ_t of the speed, expressed as the angle between the velocity vector and the horizontal direction, is
θ_t = arctan(Δy/Δx) (1-6)
with θ_t taking values in [-π, π].
Step3.3: process every feature point in the video picture frame one by one according to Step3.1 and Step3.2 to obtain the corresponding speed-magnitude matrix and speed-direction matrix.
Further, the steps for obtaining the acceleration matrix in Step4 are as follows:
Step4.1: compute the acceleration magnitude a.
According to formula (1-4) in Step3, the speed of point A_t at time t is v_t; after the time τ, the speed of point A_{t+τ} at time t+τ, again computed with (1-4), is v_{t+τ}. The change of speed over the time τ is
Δv = v_{t+τ} - v_t (1-7)
According to the definition of acceleration, the acceleration magnitude is
a = Δv/τ (1-8)
Since τ can again be neglected, the acceleration can be approximated as a = Δv.
By the parallelogram rule, the components of the velocity change are obtained as
Δv_x = v_{t+τ}·cos θ_{t+τ} - v_t·cos θ_t (1-9)
and
Δv_y = v_{t+τ}·sin θ_{t+τ} - v_t·sin θ_t (1-10)
Step4.2: compute the acceleration direction β:
β = arctan(Δv_y/Δv_x) (1-11)
with β taking values in [-π, π].
Step4.3: process each particle point in the video picture frame one by one according to Step4.1 and Step4.2 to obtain the corresponding acceleration-magnitude matrix and acceleration-direction matrix.
Further, the Step5 comprises the following steps:
Step5.1: a* is the preset maximum acceleration of a normal crowd; when a exceeds a*, the crowd's acceleration is in an abnormal state, and at this point it is further judged whether the acceleration direction is abnormal;
Step5.2: β' is the angle formed by the change of acceleration direction of the same particle at different moments in two adjacent frames, and m is the proportion of particle points in each region obtained from the distribution of β'. The full circle of 2π is divided into four quadrants, and each quadrant is further divided into two regions; according to the acceleration-direction matrix of the particle points in the current frame, the ratio m_i of the number of particle points in each small region to the total number is counted. When m_i exceeds the preset m*, the flow direction of the majority of the crowd is consistent and the crowd can be regarded as in a normal state; otherwise it is regarded as in an abnormal state;
Step5.3: when both the acceleration magnitude and direction are in an abnormal state, the location point O1(x_o1, y_o1) can be determined as the center monitoring point.
Further, the Step8 comprises:
analyzing the two point coordinates O1(x_o1, y_o1) and O2(x_o2, y_o2) obtained in Step5 and Step7;
when the distance d between the two central points O1 and O2 satisfies d ≤ d*, the two center monitoring points can be considered the same position, d* being a preset upper threshold;
finally the central point O, with coordinates (x_o, y_o), is determined from O1 and O2;
from the above, the anomalous-event occurrence point O is determined.
In Step6, in order to improve the accuracy of event prediction, the present invention proposes a machine-learning-based method: a model is established by machine learning, founded on classical cases and on the analysis of a large amount of data and experiments. The videos include both abnormal-behavior occurrences and normal situations, and are first divided into two parts: one part is used for establishing the detection model and one part for checking the model's correctness, the model then being optimized according to the detection results so as to reach the desired accuracy.
In Step7, the model mentioned in Step6 is used to make a second judgment of the anomalous-event occurrence point. First, facial features are extracted from the gridded video recording according to classical methods. People's facial expressions vary considerably in the face of different situations; when accidents such as violent attacks or mass disturbances occur, the micro-expression changes are very large. Whether an anomalous event occurs at point O2 is then further judged from the extracted facial expressions and the crowd expression change rate ρ.
As shown in Fig. 1, after the video frame pictures are obtained, foreground extraction is carried out first and the video is then gridded. Each target is regarded as a particle point; the speed and direction of the particle points are extracted with the optical flow method, the corresponding acceleration matrix is obtained, and by analyzing the change of magnitude and direction of the acceleration and the corresponding distribution it is judged whether monitoring point O1 is the target point, i.e. the spot detection point. To improve accuracy, a simple model is established by machine learning; by detecting the facial micro-expression change rate ρ of the faces and comparing the values, it is determined whether monitoring point O2 is an abnormal point. If both monitoring points are abnormal points, the position of the center monitoring point is further determined from the two coordinate positions.
Fig. 2 shows the simple distribution map used, after the direction and magnitude of the particles' acceleration have been obtained, to judge the distribution of direction changes: the whole plane space is divided into four quadrants and each quadrant further into two regions. Whether the crowd's acceleration is abnormal is judged from the spatial distribution of the direction-change angle of the same particle point across two adjacent frames and from the ratio m of the number of particle points falling into each region to the total. When m exceeds the preset m*, the flow direction of the majority of the crowd is consistent and the movement orderly, which can be regarded as a normal state; otherwise it is regarded as an abnormal state.
Fig. 3 shows the modeling process of the machine-learning abnormal behavior detection model. The imported classical case videos are classified into two large parts: one part is used for establishing the model and one part for checking whether the established model judges correctly. Errors and other problems inevitably appear during checking, and the learning model is optimized accordingly so as to reach the set goal.
Fig. 4 shows the process of establishing the learning-model part of the machine-learning abnormal behavior detection model. Whether each video is in an abnormal state is identified at the start; the known videos are fed into the rudimentary model built from facial features, and through comparison and analysis of the target features in the videos the pictures are given a preliminary assessment and judged abnormal or not. Through continuous testing and training, the parameters in the model are optimized and modified, and the learning model is finally obtained.
The present invention provides a screening scheme that is reasonable, has good real-time performance and high accuracy. It not only overcomes the difficulty of detection caused by wide monitoring ranges and complicated abnormal behavior patterns, but can also predict the occurrence of abnormal behavior to a certain extent, improving the applicability and effect of intelligent monitoring. The invention uses intelligent electronic equipment to monitor the abnormal behavior in video and then alerts the relevant staff; an intelligent video monitoring system can thus effectively prevent a series of security risks such as violent acts, saving manpower, material and financial resources.
The specific embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by a person of ordinary skill in the art, various changes can also be made without departing from the concept of the invention.

Claims (7)

1. a kind of unusual checking algorithm based in video record, it is characterised in that:Include the following steps:
Step1 carries out foreground extraction for the video record provided;
Step2 carries out gridding processing for Step1 treated video records;
Step3 is carried out feature point extraction to Step2 treated video records, and is tracked using optical flow method, and speed is obtained Spend matrix;
Step4 obtains the size a and deflection β of acceleration, and then obtains acceleration matrix;
Step5 obtains acceleration side of the same particle different moments in adjacent two field pictures by analyzing the deflection of acceleration To variation angulation β ', and the region for passing through β ' distributions obtains particle point proportion m in each region;Obtained acceleration Degree a and accounting m is compared with the a* and m* set before respectively, if | a | > | a*| and m < m*It can be by the position It is judged as YES abnormal behaviour spot O1Point, as center monitors point, O1Point coordinates is (xo1,yo1), a* is the people being arranged in advance Peak acceleration when group is normal, m* is the reduced parameter being arranged in advance, m<m*When illustrate crowd's neither one determine acceleration Direction is spent, rambling state is in, abnormal behaviour is not present;
Step6 establishes model by machine learning, and model is to be trained to obtain crowd according to classical case and a large amount of data Changing ratio standard value ρ*
Step7, it is ρ to define expression shape change rate first
Indicate that changing features degree is more than the number of the general value upper limit in n expression crowds, N indicates the total people of the target group of extraction Number,
Then facial feature extraction is carried out to the processed video record of Step2 treated griddings according to classical approach, Expression shape change rate ρ will be obtained in the model of the facial expression input Step6 foundation of extraction, ρ and ρ will be gone out*Value compare, if ρ More than ρ*Then think this region point O of extraction human face's expression in the video2In exception, O2Point coordinates is (xo2,yo2);
Step8: the coordinates (xo2, yo2) of point O2 are compared with the coordinates (xo1, yo1) of point O1; if the two points fall within a certain error tolerance, they can be considered to denote the same spot, and the monitoring point, i.e. the point O where the abnormal event occurs, can then be specified centered on that point.
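As a minimal sketch of the Step7 ratio test (the count n, group size N, and the trained threshold ρ* below are hypothetical illustration values, not taken from the patent):

```python
def expression_change_rate(n, N):
    """Step7: rho = n / N, where n counts the people whose facial-feature
    change exceeds the upper limit of the normal range and N is the total
    number of people in the extracted target group."""
    return n / N

rho = expression_change_rate(n=3, N=20)   # hypothetical counts
rho_star = 0.1                            # hypothetical trained standard value from Step6
is_abnormal = rho > rho_star              # rho > rho* flags the region as abnormal
```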
2. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that said Step1 specifically includes:
extracting the foreground with the frame difference method to distinguish the target object from the background;
the frame difference method operates on the image sequence of adjacent frames: the absolute value of the difference between the gray values of corresponding pixels is taken and compared with a threshold to obtain the moving-foreground information, thereby achieving foreground extraction; the difference value of adjacent frame images is calculated as follows:
D(x, y, i) = |I(x, y, i+1) − I(x, y, i)| (1-1)
where D(x, y, i) denotes the difference value of the image, I(x, y, i+1) denotes the gray value of frame i+1, and I(x, y, i) denotes the gray value of frame i; if the pixel value of a point after image differencing exceeds a preset threshold, that point belongs to the foreground; otherwise it belongs to the background.
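A minimal sketch of the frame-difference test (1-1), assuming grayscale frames stored as NumPy arrays; the 4×4 toy frames and the threshold value are illustrative, not from the patent:

```python
import numpy as np

def frame_difference_foreground(frame_i, frame_i1, threshold):
    """Formula (1-1): D(x, y, i) = |I(x, y, i+1) - I(x, y, i)|; pixels whose
    difference exceeds the threshold are marked as foreground points."""
    # Cast to a signed type so the subtraction of uint8 frames cannot wrap around
    diff = np.abs(frame_i1.astype(np.int16) - frame_i.astype(np.int16))
    return diff > threshold   # True = foreground point, False = background point

# Two toy 4x4 grayscale frames; a single pixel changes by 50 gray levels
prev_frame = np.zeros((4, 4), dtype=np.uint8)
next_frame = prev_frame.copy()
next_frame[2, 3] = 50
mask = frame_difference_foreground(prev_frame, next_frame, threshold=30)
```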
3. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that the gridding step of said Step2 specifically includes:
Step2.1: extracting the frame picture of the video at time t, denoted Ft;
Step2.2: dividing the entire frame picture into a series of p1×p2 sub-boxes.
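The gridding of Step2.2 can be sketched as follows, assuming the frame dimensions divide evenly by p1 and p2 (an assumption the claim does not state):

```python
import numpy as np

def grid_cells(frame, p1, p2):
    """Step2.2: divide a frame picture into p1 x p2 sub-boxes, returned
    row by row as a flat list of sub-arrays."""
    h, w = frame.shape[:2]
    ch, cw = h // p1, w // p2
    return [frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(p1) for c in range(p2)]

frame = np.arange(64).reshape(8, 8)      # toy 8x8 "frame picture" Ft
cells = grid_cells(frame, p1=4, p2=4)    # 16 sub-boxes of 2x2 pixels each
```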
4. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that said Step3 specifically includes:
Step3.1: calculating the velocity magnitude vt;
on the basis of the small grids divided in Step2, a moving foreground object is chosen as the feature extraction target; assuming that this target A is a particle, at time t the pixel coordinates At of the particle position are (xt, yt), and after time τ has passed, in the frame picture Ft+τ at the corresponding time t+τ, the pixel coordinates At+τ of the particle position are (xt+τ, yt+τ);
Step3.1.1: from the coordinate relationship of the particle positions between two adjacent frames, the displacements of particle point At in the horizontal and vertical directions are (1-2) and (1-3) respectively:
Δx = xt+τ − xt (1-2)
Δy = yt+τ − yt (1-3)
according to vector calculation, the velocity magnitude at point At at time t is
vt = √((Δx)² + (Δy)²)/τ (1-4)
so that the velocity magnitude vt at that moment can be calculated using (1-4);
Step3.1.2: because the time interval between adjacent frames is very short, τ is approximately ignored, so that formula (1-4) is simplified to the form of (1-5):
vt = √((Δx)² + (Δy)²) (1-5)
Step3.2: calculating the velocity direction θt;
using the coordinates and the displacements (1-2) and (1-3) from Step3.1.1, the direction θt of the velocity is calculated and expressed as the angle between the velocity vector and the horizontal direction:
θt = arctan(Δy/Δx) (1-6)
where θt takes values in the range [−π, π];
Step3.3: each feature point in the video picture frame is processed one by one according to the steps of Step3.1 and Step3.2 to obtain the corresponding velocity magnitude matrix and velocity direction matrix.
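The per-particle computation of Steps 3.1–3.2 can be sketched as below; `atan2` is used instead of a plain arctangent so that θt actually covers the stated range [−π, π] (formula (1-6) as written would lose the quadrant). The coordinates are illustrative:

```python
import math

def velocity(At, At_tau):
    """Steps 3.1-3.2 for one tracked particle: displacement (1-2)/(1-3),
    simplified speed (1-5), and direction (1-6) via atan2."""
    dx = At_tau[0] - At[0]        # (1-2)
    dy = At_tau[1] - At[1]        # (1-3)
    v = math.hypot(dx, dy)        # (1-5): sqrt(dx^2 + dy^2), tau omitted
    theta = math.atan2(dy, dx)    # (1-6), atan2 keeps theta in [-pi, pi]
    return v, theta

v_t, theta_t = velocity((2.0, 3.0), (5.0, 7.0))  # illustrative pixel coordinates
```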
5. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that the steps for obtaining the acceleration matrix in said Step4 are as follows:
Step4.1: calculating the magnitude a of the acceleration;
according to formula (1-4) in Step3, the velocity magnitude at point At at time t is vt; after time τ, again by formula (1-4), the velocity magnitude at point At+τ at time t+τ is vt+τ; the change of velocity within time τ is
Δv = vt+τ − vt (1-7)
according to the calculation formula of acceleration, the acceleration magnitude is
a = Δv/τ (1-8)
again, τ can be approximately ignored, so the acceleration can be approximated as a = Δv;
by the parallelogram rule, the components of the velocity change are obtained as
Δvx = vt+τ cos θt+τ − vt cos θt, Δvy = vt+τ sin θt+τ − vt sin θt (1-9)
and
a = √((Δvx)² + (Δvy)²) (1-10)
Step4.2: calculating the direction β of the acceleration:
β = arctan(Δvy/Δvx) (1-11)
where β takes values in the range [−π, π];
Step4.3: each particle point in the video picture frame is processed one by one according to the steps of Step4.1 and Step4.2 to obtain the corresponding acceleration magnitude matrix and acceleration direction matrix.
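A sketch of Step4 for one particle, assuming the velocity change is decomposed into horizontal and vertical components by the parallelogram rule; again `atan2` keeps β in the stated range [−π, π]. The input speeds and angles are illustrative:

```python
import math

def acceleration(vt, theta_t, vt_tau, theta_t_tau):
    """Step4 for one particle: the velocity change is decomposed into
    horizontal and vertical components (parallelogram rule), then the
    acceleration magnitude a and direction beta are computed."""
    dvx = vt_tau * math.cos(theta_t_tau) - vt * math.cos(theta_t)
    dvy = vt_tau * math.sin(theta_t_tau) - vt * math.sin(theta_t)
    a = math.hypot(dvx, dvy)      # acceleration magnitude, tau ignored per (1-8)
    beta = math.atan2(dvy, dvx)   # direction (1-11), atan2 keeps beta in [-pi, pi]
    return a, beta

# Particle speeds up from 1.0 to 2.0 while keeping the same direction (0 rad)
a, beta = acceleration(1.0, 0.0, 2.0, 0.0)
```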
6. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that said Step5 specifically comprises the following steps:
Step5.1: a* is the preset maximum acceleration of the crowd in the normal state; when a exceeds a*, the acceleration of the crowd is in an abnormal state, and it is then judged whether the direction of the acceleration is also in an abnormal state;
Step5.2: β' is the angle formed by the change in acceleration direction of the same particle at different moments in two adjacent frames, and m is the proportion of particle points in each region obtained from the regions over which β' is distributed; 2π is divided into four quadrants, and each quadrant is further divided into two regions; according to the acceleration direction matrix of the particle points of the current frame, the proportion mi of the number of particle points in each small interval to the total number is counted; when mi exceeds the preset m*, the flow direction of the majority of the crowd is consistent and the crowd can be regarded as being in a normal state; otherwise it is regarded as being in an abnormal state;
Step5.3: when both the acceleration magnitude and direction are in an abnormal state, the location point O1(xo1, yo1) can be determined as the central monitoring point.
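Step5.2's eight-sector histogram (four quadrants, each split in two) can be sketched as below; the example direction values and the threshold m* are hypothetical, not from the patent:

```python
import math

def sector_proportions(betas, sectors=8):
    """Step5.2: divide 2*pi into `sectors` equal intervals (four quadrants,
    each split in two) and return the share m_i of particle points per interval."""
    counts = [0] * sectors
    width = 2 * math.pi / sectors
    for b in betas:
        idx = int((b + math.pi) // width) % sectors  # map [-pi, pi] onto the bins
        counts[idx] += 1
    return [c / len(betas) for c in counts]

# Hypothetical frame: 9 of 10 particle points accelerate in nearly the same direction
m = sector_proportions([0.1] * 9 + [math.pi / 2])
m_star = 0.5                       # hypothetical preset threshold m*
consistent_flow = max(m) > m_star  # a dominant interval => majority flow is consistent
```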
7. The abnormal behavior detection algorithm based on video recording according to claim 1, characterized in that the specific steps of said Step8 are:
the coordinates of the two points O1(xo1, yo1) and O2(xo2, yo2) obtained in Step5 and Step7 are analyzed;
when the distance d between the two central points O1 and O2 satisfies d ≤ d*, where d* denotes a preset upper threshold, the two central monitoring points can be considered the same position; finally, the central point O, with coordinates (xo, yo), is determined from O1 and O2;
from the above, the point O where the abnormal event occurs is determined.
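A sketch of the Step8 merge test; the claim fixes only the condition d ≤ d*, so taking the midpoint as the final point O is an illustrative choice, and the coordinates and d* are hypothetical:

```python
import math

def merge_centers(o1, o2, d_star):
    """Step8: if the distance d between O1 and O2 satisfies d <= d*, treat the
    two central monitoring points as the same position and return the final
    point O (here their midpoint, an illustrative choice); otherwise None."""
    if math.dist(o1, o2) <= d_star:
        return ((o1[0] + o2[0]) / 2, (o1[1] + o2[1]) / 2)
    return None  # the two detections do not corroborate each other

O = merge_centers((10.0, 10.0), (13.0, 14.0), d_star=6.0)  # d = 5 <= d* = 6
```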
CN201810224910.6A 2018-03-19 2018-03-19 Abnormal behavior detection algorithm based on video recording Active CN108596028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810224910.6A CN108596028B (en) 2018-03-19 2018-03-19 Abnormal behavior detection algorithm based on video recording


Publications (2)

Publication Number Publication Date
CN108596028A true CN108596028A (en) 2018-09-28
CN108596028B CN108596028B (en) 2022-02-08

Family

ID=63626549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810224910.6A Active CN108596028B (en) 2018-03-19 2018-03-19 Abnormal behavior detection algorithm based on video recording

Country Status (1)

Country Link
CN (1) CN108596028B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732236A (en) * 2015-03-23 2015-06-24 中国民航大学 Intelligent crowd abnormal behavior detection method based on hierarchical processing
CN105608440A (en) * 2016-01-03 2016-05-25 复旦大学 Minimum -error-based feature extraction method for face microexpression sequence
KR20170051196A (en) * 2015-10-29 2017-05-11 주식회사 세코닉스 3-channel monitoring apparatus for state of vehicle and method thereof
CN107483887A (en) * 2017-08-11 2017-12-15 中国地质大学(武汉) The early-warning detection method of emergency case in a kind of smart city video monitoring


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PENG WANG ET AL.: "Automated video-based facial expression analysis of neuropsychiatric disorders", Journal of Neuroscience Methods *
LIU Hao: "Research on Abnormal Behavior Detection Methods Based on Conditional Random Field Models", China Doctoral Dissertations Full-text Database, Information Science and Technology *
HUA Bin et al.: "Crowd Acceleration Anomaly Detection System in Public Places", Journal of Safety and Environment *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368089A (en) * 2018-12-25 2020-07-03 中国移动通信集团浙江有限公司 Service processing method and device based on knowledge graph
CN111368089B (en) * 2018-12-25 2023-04-25 中国移动通信集团浙江有限公司 Business processing method and device based on knowledge graph
CN110225299B (en) * 2019-05-06 2022-03-04 平安科技(深圳)有限公司 Video monitoring method and device, computer equipment and storage medium
CN110225299A (en) * 2019-05-06 2019-09-10 平安科技(深圳)有限公司 Video monitoring method, device, computer equipment and storage medium
CN110298327A (en) * 2019-07-03 2019-10-01 北京字节跳动网络技术有限公司 A kind of visual effect processing method and processing device, storage medium and terminal
CN110298327B (en) * 2019-07-03 2021-09-03 北京字节跳动网络技术有限公司 Visual special effect processing method and device, storage medium and terminal
CN110274590A (en) * 2019-07-08 2019-09-24 哈尔滨工业大学 A kind of violent action detection method and system based on decision tree
WO2021134982A1 (en) * 2019-12-31 2021-07-08 上海依图网络科技有限公司 Video analysis-based event prediction method and device, and medium and system thereof
CN111695404A (en) * 2020-04-22 2020-09-22 北京迈格威科技有限公司 Pedestrian falling detection method and device, electronic equipment and storage medium
CN111695404B (en) * 2020-04-22 2023-08-18 北京迈格威科技有限公司 Pedestrian falling detection method and device, electronic equipment and storage medium
CN111814775A (en) * 2020-09-10 2020-10-23 平安国际智慧城市科技股份有限公司 Target object abnormal behavior identification method, device, terminal and storage medium
CN111814775B (en) * 2020-09-10 2020-12-11 平安国际智慧城市科技股份有限公司 Target object abnormal behavior identification method, device, terminal and storage medium
CN114821808A (en) * 2022-05-18 2022-07-29 湖北大学 Attack behavior early warning method and system
CN114821808B (en) * 2022-05-18 2023-05-26 湖北大学 Attack behavior early warning method and system

Also Published As

Publication number Publication date
CN108596028B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN108596028A (en) A kind of unusual checking algorithm based in video record
CN104123544B (en) Anomaly detection method and system based on video analysis
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
CN105100757B (en) Detector with integral structure
CN106128053A (en) A kind of wisdom gold eyeball identification personnel stay hover alarm method and device
CN103456024B (en) A kind of moving target gets over line determination methods
KR102144531B1 (en) Method for automatic monitoring selectively based in metadata of object employing analysis of images of deep learning
CN102799893A (en) Method for processing monitoring video in examination room
CN106127814A (en) A kind of wisdom gold eyeball identification gathering of people is fought alarm method and device
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN103198296A (en) Method and device of video abnormal behavior detection based on Bayes surprise degree calculation
CN103456009B (en) Object detection method and device, supervisory system
CN102214359A (en) Target tracking device and method based on hierarchic type feature matching
CN111488803A (en) Airport target behavior understanding system integrating target detection and target tracking
CN106548142A Crowd's incident detection and appraisal procedure in a kind of video based on comentropy
CN101303726A (en) System for tracking infrared human body target based on corpuscle dynamic sampling model
CN116310943B (en) Method for sensing safety condition of workers
CN114842560B (en) Computer vision-based construction site personnel dangerous behavior identification method
CN115171022A (en) Method and system for detecting wearing of safety helmet in construction scene
CN116416577A (en) Abnormality identification method for construction monitoring system
CN104866830A (en) Abnormal motion detection method and device
Liu et al. Abnormal crowd behavior detection based on optical flow and dynamic threshold
CN111274872B (en) Video monitoring dynamic irregular multi-supervision area discrimination method based on template matching
CN115661755A (en) Worker safety dynamic evaluation method based on computer vision and trajectory prediction
Zhao et al. Abnormal behavior detection based on dynamic pedestrian centroid model: Case study on U-turn and fall-down

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant