CN109886117A - Method and apparatus for target behavior detection - Google Patents

Method and apparatus for target behavior detection

Info

Publication number
CN109886117A
CN109886117A (application CN201910055397.7A)
Authority
CN
China
Prior art keywords
target
frame image
image
same target
same
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910055397.7A
Other languages
Chinese (zh)
Inventor
罗蝶
单洪伟
孙论强
郝旭宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense TransTech Co Ltd
Qingdao Hisense Network Technology Co Ltd
Original Assignee
Qingdao Hisense Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Network Technology Co Ltd filed Critical Qingdao Hisense Network Technology Co Ltd
Priority to CN201910055397.7A priority Critical patent/CN109886117A/en
Publication of CN109886117A publication Critical patent/CN109886117A/en


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and apparatus for target behavior detection, relating to the field of behavior analysis in video surveillance, and solves the problem that current target behavior detection is strongly affected by changes in illumination and environment and is therefore prone to false alarms. The method of the invention includes: detecting a plurality of collected frames of images with a target detection model to obtain the location information of the targets in the images; after determining, according to the location information of the targets in consecutive frames and the gradient feature vectors of the targets in those frames, that the same target appears in the consecutive frames, placing the coordinate information of the same target in the consecutive frames into the coordinate set corresponding to the same target; and performing behavior detection on the same target in the consecutive frames according to the coordinate set corresponding to the same target. Because the invention detects targets with a target detection model, the location of a target in an image can be detected accurately without being affected by changes in illumination and environment, which reduces false alarms.

Description

Method and apparatus for target behavior detection
Technical field
The present invention relates to the field of behavior analysis in video surveillance, and in particular to a method and apparatus for target behavior detection.
Background art
Tripwire intrusion detection and area intrusion detection take a background image, obtained from a continuously input image sequence, as a reference; each subsequent image is compared with the background image to obtain the differing pixels, connectivity labeling is applied to these pixels, the labeled regions are taken as initial targets, the targets are tracked to form a continuous target linked list, the foreground and the target linked list are analyzed and compared with preset rule information, and alarm information is output.
Tripwire detection is only suitable for scenes in which targets are sparse and essentially do not occlude one another, such as boundary protection of unattended areas. Area intrusion detection is similar to a warning line: if entering and leaving events are to be detected, a certain amount of free space is also required around the boundary of the area, so it too is only suitable for scenes in which targets are sparse and essentially do not occlude one another, such as boundary protection of unattended areas. An alarm is raised when a target enters or leaves the specified area.
To ensure public security, surveillance video needs to be installed in key areas, and abnormal behaviors such as tripwire crossing and intrusion in the target area of the video need to be recognized and warned about. Performing behavior analysis by having a computer understand the video content makes it possible to raise alarms automatically for abnormal behavior in the warning zone, improving the effectiveness and efficiency of monitoring key areas.
However, current detection means for tripwire crossing and intrusion are mostly based on foreground detection or background modeling and detect the motion of a target through changes in pixels. They cannot accurately track the trajectory of a target, are strongly affected by changes in illumination and environment, and cannot accurately judge behaviors such as target intrusion and tripwire crossing, so false alarms occur easily.
In summary, target behavior detection at this stage is strongly affected by changes in illumination and environment, cannot accurately track the trajectory of a target, and is prone to false alarms when detecting target behavior.
Summary of the invention
The present invention provides a method and apparatus for target behavior detection, to solve the prior-art problems that target behavior detection is strongly affected by changes in illumination and environment, cannot accurately track the trajectory of a target, and is prone to false alarms when detecting target behavior.
In a first aspect, a method of target behavior detection provided by an embodiment of the present invention includes:
detecting a plurality of collected frames of images with a target detection model to obtain the location information of the targets in the images;
after determining, according to the location information of the targets in consecutive frames and the gradient feature vectors of the targets in those frames, that the same target appears in the consecutive frames, placing the coordinate information of the same target in the consecutive frames into the coordinate set corresponding to the same target;
performing behavior detection on the same target in the consecutive frames according to the coordinate set corresponding to the same target.
The above method detects targets with a target detection model. Using deep learning, the location of a target in an image can be detected accurately without being affected by changes in illumination and environment. Features are then extracted from the detected targets, and by comparing the features and locations across consecutive frames the trajectory of a target can be obtained accurately. Finally, the behavior of the target is detected from its trajectory, which reduces false alarms caused by changes in illumination and environment.
In one possible implementation, it is determined that the same target appears in the consecutive frames in the following manner:
for any two adjacent frames, if it is determined from the location information of a target in the former frame and the location information of a target in the latter frame that the two targets satisfy a position condition, and it is determined from the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame that the two targets satisfy a feature condition, the two targets are determined to be the same target.
In the above method, two targets in any two adjacent frames are judged by their location information and gradient feature vectors; when both the position condition and the feature condition are satisfied, the two targets in the adjacent frames can be determined to be the same target. The location information obtained with the deep learning algorithm is more accurate than in the prior art, and the gradient feature vector accurately captures the edge information of a target, so judging targets by their location information and gradient feature vectors is more accurate.
In one possible implementation, the position condition includes some or all of the following:
the distance between the center-point abscissa in the location information of the target in the former frame and the center-point abscissa in the location information of the target in the latter frame is not greater than half of the width in the location information of the target in the former frame;
the distance between the center-point ordinate in the location information of the target in the former frame and the center-point ordinate in the location information of the target in the latter frame is not greater than half of the height in the location information of the target in the former frame;
the ratio of the overlapping area of the target in the former frame and the target in the latter frame to the union of their areas is not less than an area threshold.
The feature condition is:
the cosine of the angle between the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame is not less than a feature threshold.
In the above method, whether two targets in adjacent frames are the same target is judged by comparing the center-point coordinates, areas and gradient feature vectors of the two frames. Since the displacement of the same target between adjacent frames is normally small, judging on any two adjacent frames is more accurate.
In one possible implementation, the method further includes:
if multiple targets in the former frame and one target in the latter frame satisfy the position condition and the feature condition, determining that, among the multiple targets in the former frame, the target whose ratio of overlapping area to union area with the target in the latter frame is the largest is the same target as the target in the latter frame; or
if multiple targets in the latter frame and one target in the former frame satisfy the position condition and the feature condition, determining that, among the multiple targets in the latter frame, the target whose ratio of overlapping area to union area with the target in the former frame is the largest is the same target as the target in the former frame.
In the above method, when judging targets in two adjacent frames, if multiple targets in one frame satisfy the position condition and the feature condition with respect to a single target in the other frame, the target with the largest ratio of overlapping area to union area is taken as the best match of that target, and the other targets are treated as new targets. This avoids matching multiple targets in one frame to a single target in the other frame and thereby confusing the coordinate sets of the targets.
In one possible implementation, the method further includes:
if a target in the latter frame and none of the targets in the former frame are the same target, treating the target in the latter frame as a new target, and placing the coordinate information of the new target into a coordinate set corresponding to the new target.
In the above method, if a target in the latter frame is not the same target as any target in the former frame, it is determined to be a new target and its coordinate information is recorded.
In one possible implementation, performing behavior detection on the same target in the consecutive frames according to the coordinate set corresponding to the same target includes:
detecting, according to the coordinate set corresponding to the same target, whether the behavior of the same target in the consecutive frames is intrusion behavior or tripwire crossing behavior.
In the above method, when judging the behavior of a target, whether the behavior of the target is intrusion or tripwire crossing is judged according to the coordinate set of that target.
In one possible implementation, determining, according to the coordinate set corresponding to the same target, that the behavior of the same target in the consecutive frames is intrusion includes:
judging whether the intrusion point coordinate of the same target in the current frame is located inside the warning zone;
if so, determining that the behavior of the same target in the consecutive frames is intrusion behavior;
if not, determining that the behavior of the same target in the consecutive frames is not intrusion behavior.
In the above method, the warning zone and the intrusion point coordinate of the target in the current frame are determined; if the intrusion point coordinate of the target lies inside the warning zone, the behavior of the target is determined to be intrusion, otherwise it is not. Verifying the behavior of a target against the intrusion rule provides a simple and effective way to verify intrusion behavior.
In one possible implementation, the intrusion point coordinate of the same target in the current frame is determined in the following manner:
determining, from the coordinate set corresponding to the same target, the center-point coordinates of the same target in the current frame and in the N consecutive frames before the current frame, where N is a positive integer;
taking the average of the center-point coordinates of the same target in the current frame and in the N consecutive frames before the current frame as the intrusion point coordinate of the same target in the current frame.
In the above method, to prevent camera shake from causing a large jump of the target in some frame, the intrusion point coordinate is determined by averaging the center-point coordinates of the target over several consecutive frames, making the detection result more accurate and reducing the influence of camera shake.
In one possible implementation, determining, according to the coordinate set corresponding to the same target, that the behavior of the same target in the consecutive frames is tripwire crossing includes:
taking the rectangle whose diagonal is the tripwire in the current frame as the circumscribed rectangle of the tripwire;
when the trip point coordinate of the same target in the current frame and the trip point coordinate of the same target in the frame before the current frame both lie inside the circumscribed rectangle, and the trip point coordinate of the same target in the current frame and its trip point coordinate in the frame before the current frame lie on opposite sides of the tripwire, determining that the behavior of the same target in the consecutive frames is tripwire crossing behavior.
In the above method, the position of the tripwire in the current frame is determined, and the rectangle with the tripwire as its diagonal is taken as the circumscribed rectangle of the tripwire. If the trip point coordinates of the target in the current frame and in the previous frame both lie inside the circumscribed rectangle, and they lie on opposite sides of the tripwire, the behavior of the target can be determined from its trip point coordinates to be tripwire crossing. This provides a simple and effective way to verify tripwire crossing behavior.
In one possible implementation, the trip point coordinate of the same target in an image is determined in the following manner:
determining, from the coordinate set corresponding to the same target, the center-point coordinates of the same target in the current frame and in the M consecutive frames before the current frame, where M is a positive integer;
taking the average of the center-point coordinates of the same target in the current frame and in the M consecutive frames before the current frame as the trip point coordinate of the same target in the current frame.
In the above method, to prevent camera shake from causing a large jump of the target in some frame, the trip point coordinates of the target in the current frame and in the frame before the current frame are determined by averaging the center-point coordinates over M consecutive frames, making the detection result more accurate and reducing the influence of camera shake.
In a second aspect, an apparatus for target behavior detection provided by an embodiment of the present invention includes at least one processing unit and at least one storage unit, where the storage unit stores program code which, when executed by the processing unit, causes the apparatus to execute the method described above.
In a third aspect, the present application further provides a computer storage medium on which a computer program is stored; when the program is executed by a processing unit, the steps of the method of the first aspect are implemented.
In addition, for the technical effects brought by any implementation of the second and third aspects, reference may be made to the technical effects brought by the corresponding implementations of the first aspect, and details are not repeated here.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a method of target behavior detection provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the location information of a target in an image provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the histogram of oriented gradients of a target provided by an embodiment of the present invention;
Fig. 4A is a schematic diagram of the location information of target 1 in a former frame provided by an embodiment of the present invention;
Fig. 4B is a schematic diagram of the location information of target 2 in a latter frame provided by an embodiment of the present invention;
Fig. 4C is a schematic diagram of the location information of target 1 and target 2 provided by an embodiment of the present invention;
Fig. 4D is a schematic diagram of the areas of target 1 and target 2 provided by an embodiment of the present invention;
Fig. 5A is a schematic diagram of the position of a target intrusion point relative to the warning zone provided by an embodiment of the present invention;
Fig. 5B is a schematic diagram of the position of another target intrusion point relative to the warning zone provided by an embodiment of the present invention;
Fig. 6A is a schematic diagram of a tripwire in an image provided by an embodiment of the present invention;
Fig. 6B is a schematic diagram of tripwire crossing detection for a target provided by an embodiment of the present invention;
Fig. 6C is a schematic diagram of tripwire crossing detection for another target provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a complete method of tripwire crossing or intrusion behavior detection for a target provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of an apparatus for target behavior detection provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Some terms appearing in the text are explained below:
1. The term "and/or" in the embodiments of the present invention describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
2. In the embodiments of the present invention, the term "HOG (Histogram of Oriented Gradients) feature" refers to a feature descriptor used for object detection in computer vision and image processing. A HOG feature is formed by computing and accumulating histograms of gradient orientations over local regions of an image.
3. In the embodiments of the present invention, the term "linked list" refers to a storage structure that is non-contiguous and non-sequential on physical storage units; the logical order of the data elements is realized through the pointer links in the list. A linked list consists of a series of nodes (each element in the list is called a node), which can be generated dynamically at run time. Each node comprises two parts: a data field that stores the data element, and a pointer field that stores the address of the next node.
The application scenarios described in the embodiments of the present invention are intended to explain the technical solutions of the embodiments of the present invention more clearly and do not constitute a limitation on them; a person of ordinary skill in the art will appreciate that, as new application scenarios emerge, the technical solutions provided by the embodiments of the present invention are equally applicable to similar technical problems. In the description of the present invention, unless otherwise specified, "plurality" means two or more.
Video surveillance is very widely applied. By intelligently analyzing video, it can be determined whether a moving target crosses a boundary or intrudes into a region of interest. Currently, common analysis techniques make judgments by setting tripwires, regions and the like.
Tripwire detection: in some important places, personnel are normally not allowed to enter, or only one-way passage is allowed; the system can perform detection by setting a tripwire, so that in a prohibited area no one is allowed to cross the tripwire, and an alarm is raised when someone crosses it. A tripwire can also be directional, so that passage in a particular direction can be monitored.
In intelligent video surveillance, intrusion detection based on intelligent video analysis is a very important topic: in the scene covered by a camera, a warning zone can be set according to the monitoring needs and purpose; the system automatically detects moving targets entering the warning zone and their behavior, and once a preset alarm condition is found to be satisfied, alarm information is generated automatically, the target entering the protected area is indicated with an alarm box, and its motion trajectory is marked.
Current detection means for tripwire crossing and intrusion are mostly based on foreground detection or background modeling. After a background model is established, the foreground can be obtained by comparing the current image with the background. In general, the foreground obtained in this way contains a lot of noise; to eliminate the noise, opening and closing operations can be applied to the foreground image and small contours are then discarded. Foreground detection methods mainly include the frame difference method, the average background method, optical flow, foreground modeling, non-parametric background estimation, background modeling and so on. Background and foreground are relative concepts. Taking a highway as an example: sometimes we are interested in the many cars coming and going, in which case the cars are the foreground and the road surface and surroundings are the background; sometimes we are only interested in pedestrians who stray onto the highway, in which case the intruders are the foreground and everything else, including the cars, becomes the background. Background modeling methods range from sophisticated to simple, but each background model has its own suitable occasions, and even an advanced background model cannot suit every occasion.
At present, target behavior is detected with foreground detection or background modeling, which detects the motion of a target through changes in pixels. Such methods cannot accurately track the trajectory of a target, are strongly affected by changes in illumination and environment, and cannot accurately judge behaviors such as intrusion and tripwire crossing, so false alarms occur easily.
Therefore, an embodiment of the present invention provides target behavior detection in which targets are detected with a deep learning model and the behavior of a target is detected through target tracking.
For the above scenarios, the embodiments of the present invention are described in further detail below with reference to the drawings.
As shown in Fig. 1, a method of target behavior detection according to an embodiment of the present invention specifically includes the following steps:
Step 100: detecting a plurality of collected frames of images with a target detection model to obtain the location information of the targets in the images;
Step 101: after determining, according to the location information of the targets in consecutive frames and the gradient feature vectors of the targets in those frames, that the same target appears in the consecutive frames, placing the coordinate information of the same target in the consecutive frames into the coordinate set corresponding to the same target;
Step 102: performing behavior detection on the same target in the consecutive frames according to the coordinate set corresponding to the same target.
Through the above scheme, targets are detected with a target detection model; the deep-learning-based method can accurately detect the location of a target in an image without being affected by changes in illumination and environment. Features are then extracted from the detected targets, and by comparing the features and locations across consecutive frames the trajectory of a target can be obtained accurately. Finally, the behavior of the target is detected from its trajectory, which solves the problem of false alarms caused by changes in illumination and environment.
In the embodiments of the present invention, after it is determined, according to the location information of the targets in consecutive frames and the gradient feature vectors of the targets in those frames, that the same target appears in the consecutive frames, the coordinate information of the same target in the consecutive frames is placed into the coordinate set corresponding to the same target. Specifically, the coordinate set can be embodied in the form of a linked list, which records the movement trajectory of the target.
For example, if the same target, target 1, appears from the first frame of 7 consecutive frames, the coordinate set composed of the coordinate information of target 1 in the 7 consecutive frames is: (x1, y1, w1, h1), (x2, y2, w2, h2), (x3, y3, w3, h3), (x4, y4, w4, h4), (x5, y5, w5, h5), (x6, y6, w6, h6), (x7, y7, w7, h7).
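As an illustration only, such a per-target coordinate set can be kept as a mapping from a target identifier to the list of its per-frame boxes; the following sketch and its names are assumptions for illustration, not the patent's implementation.

```python
from collections import defaultdict

# Minimal sketch (assumed, not from the patent) of a coordinate-set store:
# one list of (x, y, w, h) tuples per target id, appended to when the target is matched.
trajectories = defaultdict(list)

def record(target_id, box):
    """Append the (x, y, w, h) coordinate information of a matched target."""
    trajectories[target_id].append(box)

record(1, (10.0, 11.0, 2.0, 2.0))   # target 1 in frame 1
record(1, (10.0, 12.0, 2.0, 2.0))   # target 1 in frame 2
# trajectories[1] then plays the role of the coordinate set / target linked list above.
```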
In the embodiments of the present invention, before targets are detected with the target detection model, the target detection model needs to be trained; for example, a YOLO (You Only Look Once) model is trained on the kinds of target to be detected, giving the target detection model.
In the embodiments of the present invention, behavior detection can be performed for multiple kinds of targets; the kind of target depends on the kind of training samples used when training the target detection model. For example, if the target kind in the training samples is persons, behavior detection can be performed on persons; if the target kind in the training samples is cars, behavior detection can be performed on cars.
In the embodiments of the present invention, a plurality of collected frames of images are detected with the target detection model to obtain the location information of the targets in the images, where the location information is (x, y, w, h). In the consecutive frames collected by a camera in the surveillance field of view, the background does not change, while the position of a target changes in some frames due to its movement. If a rectangular coordinate system is established as shown in Fig. 2, (x, y) denotes the coordinate of the center point of the target in the image (i.e. the position of the center point of the target in the image), where x is the abscissa and y is the ordinate, w denotes the width of the target and h denotes its height; the four corner coordinates of the target are then (x-w/2, y-h/2), (x+w/2, y-h/2), (x+w/2, y+h/2), (x-w/2, y+h/2).
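For illustration, converting the (x, y, w, h) center format into the four corner coordinates can be sketched as follows; the function name is an assumption, not part of the patent.

```python
def corners(x, y, w, h):
    """Four corner coordinates of a box given its center (x, y), width w and height h."""
    return [(x - w / 2, y - h / 2), (x + w / 2, y - h / 2),
            (x + w / 2, y + h / 2), (x - w / 2, y + h / 2)]

# corners(10, 15, 2, 2) -> [(9.0, 14.0), (11.0, 14.0), (11.0, 16.0), (9.0, 16.0)]
```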
It should be noted that the way of establishing the coordinate system cited in the embodiments of the present invention is merely illustrative; any way of establishing a coordinate system is applicable to the embodiments of the present invention.
In the embodiments of the present invention, after the targets in an image have been detected with the target detection model, the gradient feature vector of each target is determined by HOG feature extraction.
Specifically, for any target, the detected target image is scaled to 96*128; every 8*8 pixels form a cell (statistical cell unit); the bin width of the histogram is set to 2π/9, so the number of histogram bins is 9; every 2*2 cells form a block; with a step of 8 pixels there are 11 scanning windows in the horizontal direction and 15 in the vertical direction, which yields 36*11*15 = 5940 features. Fig. 3 shows the histogram of oriented gradients of a target, in which the gradient directions of the pixels are mapped into the range 0~360° (i.e. 0~2π); the abscissa denotes the gradient direction in rad (radians), and the ordinate denotes the gradient magnitude, i.e. the size of the gradient at a given gradient direction.
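As an illustration only, the HOG configuration described above (96x128 window, 8x8 cells, 9 bins, 2x2-cell blocks, 8-pixel block stride, giving 36*11*15 = 5940 features) can be reproduced with OpenCV's HOGDescriptor. This is a sketch under those assumptions rather than the patent's implementation; note that OpenCV's default HOG uses unsigned gradient directions, whereas Fig. 3 maps directions over 0~2π.

```python
import cv2
import numpy as np

# 96x128 window, 16x16 blocks (2x2 cells), 8-pixel block stride, 8x8 cells, 9 bins
# -> 11 x 15 block positions x 36 values per block = 5940 features.
hog = cv2.HOGDescriptor((96, 128), (16, 16), (8, 8), (8, 8), 9)

def gradient_feature_vector(target_roi: np.ndarray) -> np.ndarray:
    """Return the 5940-dimensional HOG feature vector of a detected target crop."""
    gray = cv2.cvtColor(target_roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (96, 128))          # scale the target image to 96*128
    return hog.compute(gray).reshape(-1)        # shape: (5940,)
```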
In the embodiments of the present invention, it is determined that the same target appears in consecutive frames in the following manner:
for any two adjacent frames, if it is determined from the location information of a target in the former frame and the location information of a target in the latter frame that the two targets satisfy the position condition, and it is determined from the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame that the two targets satisfy the feature condition, the two targets are determined to be the same target.
The position condition includes, but is not limited to, some or all of the following:
Position condition 1: the distance between the center-point abscissa in the location information of the target in the former frame and the center-point abscissa in the location information of the target in the latter frame is not greater than half of the width in the location information of the target in the former frame; and the distance between the center-point ordinate in the location information of the target in the former frame and the center-point ordinate in the location information of the target in the latter frame is not greater than half of the height in the location information of the target in the former frame.
Position condition 2: the ratio of the overlapping area of the target in the former frame and the target in the latter frame to the union of their areas is not less than an area threshold.
In the embodiments of the present invention, the position condition is taken to be satisfied when both position condition 1 and position condition 2 are satisfied.
Assume two adjacent frames in which the targets are rectangles, each target being regarded as a template, as shown in Figs. 4A and 4B, where Fig. 4A is the former frame and Fig. 4B is the latter frame. Detecting the images with the YOLO model gives the location information of target 1 in the former frame as (x1, y1, w1, h1), i.e. its center point is (x1, y1), its width is w1 and its height is h1; the center point of target 2 in the latter frame is (x2, y2), its width is w2 and its height is h2, as shown in Fig. 4C. The center points of the targets should then satisfy:
|x1 - x2| ≤ w1/2;
|y1 - y2| ≤ h1/2.
For example, if x1=10, x2=11, y1=15, y2=17, w1=2, w2=3, h1=2, h2=3, then |x1-x2| = 1 ≤ w1/2 = 1, but |y1-y2| = 2 > h1/2 = 1, so position condition 1 is not satisfied.
If x1=10, x2=11, y1=15, y2=17, w1=2, w2=3, h1=5, h2=3, then |x1-x2| = 1 ≤ w1/2 = 1 and |y1-y2| = 2 ≤ h1/2 = 2.5, so position condition 1 is satisfied.
As shown in Fig. 4D, the area of target 1 in the former frame is S1 = w1*h1 and the area of target 2 in the latter frame is S2 = w2*h2. Compared with the template area S1 of the former frame, the areas should satisfy:
(S1 ∩ S2) / (S1 ∪ S2) ≥ area threshold, where S1 ∪ S2 = S1 + S2 - S1 ∩ S2.
If S1 = 1.7 m² and S2 = 1.8 m², and the overlapping area S1 ∩ S2 is 0.3 m², then S1 ∪ S2 = 1.7 + 1.8 - 0.3 = 3.2, and the ratio (S1 ∩ S2)/(S1 ∪ S2) = 0.3/3.2 ≈ 0.094, which is less than the area threshold, so position condition 2 is not satisfied.
If S1 = 1.7 m² and S2 = 1.8 m², and the overlapping area S1 ∩ S2 is 1.65 m², then S1 ∪ S2 = 1.7 + 1.8 - 1.65 = 1.85, and the ratio (S1 ∩ S2)/(S1 ∪ S2) = 1.65/1.85 ≈ 0.89, which is not less than the area threshold, so position condition 2 is satisfied.
The feature condition in the embodiments of the present invention is: the cosine of the angle between the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame is not less than a feature threshold.
Assume the gradient feature vector of target 1 detected in the former frame is P1, the gradient feature vector of target 2 in the latter frame is P2, and Φ is the angle between P1 and P2; then they should satisfy cos Φ ≥ feature threshold (0.6 in this example).
If the angle Φ between the gradient feature vector P1 of target 1 and the gradient feature vector P2 of target 2 is 30°, then cos Φ = 0.87 > 0.6, and the feature condition is satisfied;
if the angle Φ between the gradient feature vector P1 of target 1 and the gradient feature vector P2 of target 2 is 60°, then cos Φ = 0.5 < 0.6, and the feature condition is not satisfied.
Assuming that target 1 in the former frame, compared with target 2 in the latter frame, satisfies position condition 1 and position condition 2 (i.e. the position condition) as well as the feature condition, it is determined that target 1 and target 2 are the same target.
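The following sketch illustrates the same-target test formed by position condition 1, position condition 2 and the feature condition. It is not the patent's implementation: the box format (cx, cy, w, h) follows Fig. 2, the feature threshold 0.6 comes from the example above, and the area threshold value is an assumed placeholder.

```python
import numpy as np

AREA_THRESHOLD = 0.5      # assumed placeholder value
FEATURE_THRESHOLD = 0.6   # value used in the cosine example above

def overlap_ratio(a, b):
    """Overlapping area over union area of two (cx, cy, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0]-a[2]/2, a[1]-a[3]/2, a[0]+a[2]/2, a[1]+a[3]/2
    bx1, by1, bx2, by2 = b[0]-b[2]/2, b[1]-b[3]/2, b[0]+b[2]/2, b[1]+b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def is_same_target(box1, hog1, box2, hog2):
    """box1/hog1: target in the former frame; box2/hog2: candidate in the latter frame."""
    # Position condition 1: center displacement bounded by half of the former box size.
    if abs(box1[0] - box2[0]) > box1[2] / 2 or abs(box1[1] - box2[1]) > box1[3] / 2:
        return False
    # Position condition 2: overlap ratio not less than the area threshold.
    if overlap_ratio(box1, box2) < AREA_THRESHOLD:
        return False
    # Feature condition: cosine of the angle between the HOG feature vectors.
    cos_phi = np.dot(hog1, hog2) / (np.linalg.norm(hog1) * np.linalg.norm(hog2) + 1e-12)
    return cos_phi >= FEATURE_THRESHOLD
```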
In the embodiments of the present invention, when determining for any two adjacent frames whether the same target exists, if one target in one frame and multiple targets in the other frame satisfy the position condition and the feature condition, there are two situations:
Situation 1: multiple targets in the former frame and one target in the latter frame satisfy the position condition and the feature condition.
In this case, it is determined that, among the multiple targets in the former frame, the target whose ratio of overlapping area to union area with the target in the latter frame is the largest is the same target as the target in the latter frame.
For example, 4 targets in the former frame, target 1, target 2, target 3 and target 4, with areas S1, S2, S3 and S4 respectively, are each compared with one target in the latter frame, target 5, with area S5, and each of targets 1 to 4 satisfies the position condition and the feature condition with respect to target 5. If the ratio of the overlapping area to the union area of target 4 and target 5 is the largest, target 4 can be regarded as the best match of target 5, i.e. target 4 in the former frame and target 5 in the latter frame are the same target, and target 1, target 2 and target 3 are treated as new targets.
Situation 2: multiple targets in the latter frame and one target in the former frame satisfy the position condition and the feature condition.
In this case, it is determined that, among the multiple targets in the latter frame, the target whose ratio of overlapping area to union area with the target in the former frame is the largest is the same target as the target in the former frame.
For example, 4 targets in the latter frame, target 1, target 2, target 3 and target 4, with areas S1, S2, S3 and S4 respectively, are each compared with one target in the former frame, target 5, with area S5, and each of targets 1 to 4 satisfies the position condition and the feature condition with respect to target 5. If the ratio of the overlapping area to the union area of target 4 and target 5 is the largest, target 4 is regarded as the best match of target 5, i.e. target 4 in the latter frame and target 5 in the former frame are the same target, and target 1, target 2 and target 3 are treated as new targets.
It should be noted that the way of determining the same target cited in the embodiments of the present invention is merely illustrative; any way of determining the same target is applicable to the embodiments of the present invention.
In the embodiments of the present invention, if a target in the latter frame and none of the targets in the former frame are the same target, the target in the latter frame is treated as a new target, and the coordinate information of the new target is placed into a coordinate set corresponding to the new target.
For example, if there are 3 targets in the former frame, target 1, target 2 and target 3, and 2 targets in the latter frame, target 4 and target 5, and neither target 4 nor target 5 is the same target as any of target 1, target 2 and target 3, then target 4 and target 5 in the latter frame are treated as new targets, as illustrated by the sketch below.
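Building on the previous sketch, frame-to-frame association with best-match selection and new-target creation could look roughly as follows. `is_same_target` and `overlap_ratio` are the hypothetical helpers defined above, and the greedy track-id bookkeeping is an assumption rather than the patent's implementation.

```python
def associate(prev_targets, detections, next_id, trajectories):
    """prev_targets: {track_id: (box, hog)} from the former frame;
    detections: list of (box, hog) from the latter frame."""
    assigned = set()
    for box, hog in detections:
        best_id, best_ratio = None, -1.0
        # Keep the candidate with the largest overlap ratio among those that
        # satisfy both the position condition and the feature condition.
        for tid, (pbox, phog) in prev_targets.items():
            if tid in assigned or not is_same_target(pbox, phog, box, hog):
                continue
            ratio = overlap_ratio(pbox, box)
            if ratio > best_ratio:
                best_id, best_ratio = tid, ratio
        if best_id is None:                       # no match -> new target
            best_id, next_id = next_id, next_id + 1
        assigned.add(best_id)
        trajectories.setdefault(best_id, []).append(box)   # extend the coordinate set
        prev_targets[best_id] = (box, hog)
    return next_id
```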
In the embodiments of the present invention, behavior detection is performed on the same target in consecutive frames according to the coordinate set corresponding to the same target; specifically, it is judged whether the behavior of the same target is intrusion behavior or tripwire crossing behavior.
Determining, according to the coordinate set corresponding to the same target, that the behavior of the same target in the consecutive frames is intrusion uses the following verification rule:
judge whether the intrusion point coordinate of the same target in the current frame is located inside the warning zone; if so, determine that the behavior of the same target in the consecutive frames is intrusion behavior; if not, determine that the behavior of the same target in the consecutive frames is not intrusion behavior.
Specifically, the rule verification of intrusion behavior is: after the target linked list (coordinate set) of a target is obtained, verify whether the intrusion point coordinate (x, y) of the target in the current frame lies inside the warning zone, i.e. whether
(x, y) ∈ Boundary (the warning zone boundary).
To prevent camera shake from causing a large jump of the target in some frame, when determining the intrusion point coordinate of a target in the current frame, the intrusion point coordinate of the same target in the current frame can be determined in the following manner:
determining, from the coordinate set corresponding to the same target, the center-point coordinates of the same target in the current frame and in the consecutive frames before the current frame, where N is a positive integer;
taking the average of the center-point coordinates of the same target in the current frame and in the consecutive frames before the current frame as the intrusion point coordinate of the same target in the current frame.
For example, with N = 5, the center-point coordinates of five consecutive frames, (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), are taken from the target linked list (coordinate set) of a target, where (x5, y5) is the center-point coordinate of the target in the current frame; the intrusion point coordinate for the current frame is then:
(x, y) = ((x1 + x2 + x3 + x4 + x5)/5, (y1 + y2 + y3 + y4 + y5)/5).
If the center-point coordinates in the above five consecutive frames are (10, 11), (10, 12), (11, 12), (12, 12) and (12, 13), then (x, y) = (11, 12), and it is judged whether the intrusion point (11, 12) is located inside the warning zone. As shown in Fig. 5A, point P denotes the intrusion point; as can be seen from the figure, P is inside the warning zone, so the behavior of the target is intrusion behavior.
As shown in Fig. 5B, point P denotes the intrusion point; as can be seen from the figure, P is outside the warning zone, so the behavior of the target is not intrusion behavior.
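A sketch of the intrusion rule described above is given below. It assumes the warning zone is supplied as a polygon and uses a standard ray-casting point-in-polygon test; the patent itself only states that the intrusion point must lie inside the warning zone, so the zone representation and the helper names are assumptions.

```python
def intrusion_point(track, n=5):
    """track: list of (cx, cy, w, h); average (x, y) of the last n center points."""
    recent = track[-n:]
    return (sum(b[0] for b in recent) / len(recent),
            sum(b[1] for b in recent) / len(recent))

def point_in_zone(point, zone):
    """zone: list of polygon vertices [(x, y), ...] describing the warning zone."""
    x, y = point
    inside = False
    j = len(zone) - 1
    for i in range(len(zone)):
        xi, yi = zone[i]
        xj, yj = zone[j]
        # Ray-casting: toggle when the horizontal ray from (x, y) crosses edge (i, j).
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def is_intrusion(track, zone, n=5):
    return point_in_zone(intrusion_point(track, n), zone)
```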
It should be noted that the way of determining the intrusion point coordinate of the same target in an image cited in the embodiments of the present invention is merely illustrative; any way of determining the intrusion point coordinate of the same target in an image is applicable to the embodiments of the present invention.
In the embodiments of the present invention, determining, according to the coordinate set corresponding to the same target, that the behavior of the same target in the consecutive frames is tripwire crossing uses the following verification rule:
take the rectangle whose diagonal is the tripwire in the current frame as the circumscribed rectangle of the tripwire;
when the trip point coordinate of the same target in the current frame and the trip point coordinate of the same target in the frame before the current frame both lie inside the circumscribed rectangle, and the trip point coordinate of the same target in the current frame and its trip point coordinate in the frame before the current frame lie on opposite sides of the tripwire, determine that the behavior of the same target in the consecutive frames is tripwire crossing behavior.
As shown in Fig. 6B, d1 and d2 are the endpoints of the tripwire and the circumscribed rectangle is as shown in the figure; P1 is the trip point coordinate of target 1 in the current frame and P2 is the trip point coordinate of target 1 in the frame before the current frame. P1 and P2 both lie inside the circumscribed rectangle shown in Fig. 6B and lie on opposite sides of the tripwire, so the behavior of target 1 is tripwire crossing behavior.
As shown in Fig. 6C, P1 is the trip point coordinate of target 1 in the current frame and P2 is the trip point coordinate of target 1 in the frame before the current frame. Although P1 and P2 both lie inside the circumscribed rectangle shown in Fig. 6C, they lie on the same side of the tripwire, so target 1 has no tripwire crossing behavior.
Optionally, the trip point coordinate of the same target in an image is determined in the following manner:
determining, from the coordinate set corresponding to the same target, the center-point coordinates of the same target in the current frame and in the M consecutive frames before the current frame, where M is a positive integer;
taking the average of the center-point coordinates of the same target over M consecutive frames ending at the current frame as the trip point coordinate of the same target in the current frame.
In view of camera shake, when determining the trip point coordinate of a target, the center-point coordinates of several consecutive frames are averaged. Specifically, when performing tripwire crossing detection for a target, after the target linked list of the target is obtained, the center-point coordinates of five consecutive frames, (x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5), are taken from the target linked list, where (x5, y5) is the center-point coordinate of target 1 in the current frame (the fifth frame) and (x4, y4) is its center-point coordinate in the frame before the current frame (the fourth frame).
Assume M = 4 and that the center-point coordinates in the above five consecutive frames are (10, 11), (10, 12), (11, 12), (12, 12) and (12, 13).
The trip point coordinate in the current frame is then the average of the center-point coordinates of frames 2 to 5, i.e. ((10+11+12+12)/4, (12+12+12+13)/4) = (11.25, 12.25);
the trip point coordinate in the previous frame is the average of the center-point coordinates of frames 1 to 4, i.e. ((10+10+11+12)/4, (11+12+12+12)/4) = (10.75, 11.75).
Fig. 6A shows the position of the tripwire in the current frame; assume the two endpoints of the tripwire are d1 (m1, n1) and d2 (m2, n2).
A straight line is fitted from the two endpoint coordinates of the tripwire:
f(x, y) = ax + y + c,
where a = -(n1 - n2)/(m1 - m2) and c = -n1 + m1(n1 - n2)/(m1 - m2).
Taking the tripwire as a diagonal, a circumscribed rectangle Rect is constructed; the two corner points of the rectangle are (Xmin, Ymin) and (Xmax, Ymax), where:
Xmin = min(m1, m2)
Xmax = max(m1, m2)
Ymin = min(n1, n2)
Ymax = max(n1, n2)
The conditions for verifying that the target behavior is tripwire crossing behavior are then:
(x, y) ∈ Rect,
(x0, y0) ∈ Rect,
f(x, y) * f(x0, y0) < 0,
where (x, y) and (x0, y0) are the trip point coordinates of the target in the current frame and in the previous frame; if these conditions are satisfied, the target behavior can be determined to be tripwire crossing behavior.
If d1 is (5, 5) and d2 is (10, 10), then a = -(n1 - n2)/(m1 - m2) = -1 and c = -n1 + m1(n1 - n2)/(m1 - m2) = 0, so f(x, y) = ax + y + c becomes f(x, y) = -x + y.
The two corner coordinates of the rectangle are (Xmin, Ymin) = (5, 5) and (Xmax, Ymax) = (10, 10).
Assume (x, y) = (7, 8) and (x0, y0) = (8, 7); then 5 < 7 < 10 and 5 < 8 < 10, so (x, y) and (x0, y0) both lie inside the circumscribed rectangle, and f(x, y) = 1, f(x0, y0) = -1, f(x, y) * f(x0, y0) = -1 < 0, so tripwire crossing behavior occurs for the target.
If (x, y) = (7, 8) and (x0, y0) = (6, 7), then 5 < 6 < 10, 5 < 7 < 10 and 5 < 8 < 10, so (x, y) and (x0, y0) both lie inside the circumscribed rectangle, but f(x, y) = 1, f(x0, y0) = 1, and f(x, y) * f(x0, y0) = 1 > 0, so the target has no tripwire crossing behavior.
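The tripwire rule above can be sketched as follows. M = 4 is taken from the example, the track is the list of (cx, cy, w, h) boxes of one target, and a non-vertical tripwire is assumed so that the line can be written as f(x, y) = ax + y + c; the helper names are not from the patent.

```python
def trip_point(track, end_index, m=4):
    """Average of the m center points ending at track[end_index]."""
    window = track[max(0, end_index - m + 1):end_index + 1]
    return (sum(b[0] for b in window) / len(window),
            sum(b[1] for b in window) / len(window))

def crosses_tripwire(track, d1, d2, m=4):
    (m1, n1), (m2, n2) = d1, d2
    a = -(n1 - n2) / (m1 - m2)                 # assumes a non-vertical tripwire
    c = -n1 + m1 * (n1 - n2) / (m1 - m2)
    f = lambda x, y: a * x + y + c             # signed side of the tripwire
    xmin, xmax = min(m1, m2), max(m1, m2)
    ymin, ymax = min(n1, n2), max(n1, n2)
    in_rect = lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    p_cur = trip_point(track, len(track) - 1, m)    # trip point in the current frame
    p_prev = trip_point(track, len(track) - 2, m)   # trip point in the previous frame
    return in_rect(p_cur) and in_rect(p_prev) and f(*p_cur) * f(*p_prev) < 0

# Example from the text: with d1=(5, 5), d2=(10, 10), trip points (7, 8) and (8, 7)
# lie in the rectangle and on opposite sides of the line, so a crossing is reported.
```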
It should be noted that the way of judging whether the same target in consecutive frames has tripwire crossing behavior cited in the embodiments of the present invention is merely illustrative; any way of judging whether tripwire crossing behavior occurs for the same target in consecutive frames is applicable to the embodiments of the present invention.
As shown in Fig. 7, a complete method for detecting tripwire crossing or intrusion behavior of a target provided by an embodiment of the present invention includes:
Step 700: collecting video images;
Step 701: detecting the collected frames of the video with the YOLO model to obtain the location information of the targets in each frame;
Step 702: performing feature analysis on the detected targets to determine the center point, the area and the HOG feature of each target;
Step 703: recording the target linked list according to the center points, areas and HOG features of the targets in two adjacent frames;
Step 704: judging whether the target linked list of the same target in consecutive frames satisfies the tripwire rule or the intrusion rule; if the tripwire rule is satisfied, executing step 705; if the intrusion rule is satisfied, executing step 707; otherwise, ending the rule verification;
Step 705: determining that the behavior of the same target is tripwire crossing;
Step 706: raising a tripwire alarm;
Step 707: determining that the behavior of the same target is intrusion;
Step 708: raising an intrusion alarm.
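For illustration, the flow of Fig. 7 can be assembled from the sketches above roughly as follows; `detect_targets` stands in for the trained YOLO-style model and `crop` for extracting a target's image region, both of which are assumptions, as are the alarm actions.

```python
def run_pipeline(frames, detect_targets, crop, warning_zone, tripwire):
    prev_targets, trajectories, next_id = {}, {}, 0
    for frame in frames:                                              # step 700
        detections = [(box, gradient_feature_vector(crop(frame, box)))
                      for box in detect_targets(frame)]               # steps 701-702
        next_id = associate(prev_targets, detections, next_id, trajectories)  # step 703
        for tid, track in trajectories.items():                       # step 704
            if len(track) < 5:                # need enough frames for averaged points
                continue
            if crosses_tripwire(track, *tripwire):
                print(f"tripwire alarm for target {tid}")             # steps 705-706
            elif is_intrusion(track, warning_zone):
                print(f"intrusion alarm for target {tid}")            # steps 707-708
```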
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus for target behavior detection. Since this apparatus is the apparatus in the method of the embodiments of the present invention and the principle by which the apparatus solves the problem is similar to that of the method, reference may be made to the implementation of the method for the implementation of the apparatus, and repeated descriptions are omitted.
As shown in Fig. 8, an embodiment of the present invention further provides an apparatus for target behavior detection, which includes at least one processing unit 800 and at least one storage unit 801, where the storage unit 801 stores program code which, when executed by the processing unit 800, causes the apparatus to execute the following process:
detecting a plurality of collected frames of images with a target detection model to obtain the location information of the targets in the images;
after determining, according to the location information of the targets in consecutive frames and the gradient feature vectors of the targets in those frames, that the same target appears in the consecutive frames, placing the coordinate information of the same target in the consecutive frames into the coordinate set corresponding to the same target;
performing behavior detection on the same target in the consecutive frames according to the coordinate set corresponding to the same target.
Optionally, the processing unit 800 is further configured to determine that the same target appears in the consecutive frames in the following manner:
for any two adjacent frames, if it is determined from the location information of a target in the former frame and the location information of a target in the latter frame that the two targets satisfy the position condition, and it is determined from the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame that the two targets satisfy the feature condition, determining that the two targets are the same target.
Optionally, the position condition includes some or all of the following:
the distance between the center-point abscissa in the location information of the target in the former frame and the center-point abscissa in the location information of the target in the latter frame is not greater than half of the width in the location information of the target in the former frame;
the distance between the center-point ordinate in the location information of the target in the former frame and the center-point ordinate in the location information of the target in the latter frame is not greater than half of the height in the location information of the target in the former frame;
the ratio of the overlapping area of the target in the former frame and the target in the latter frame to the union of their areas is not less than an area threshold.
The feature condition is:
the cosine of the angle between the gradient feature vector of the target in the former frame and the gradient feature vector of the target in the latter frame is not less than a feature threshold.
Optionally, the processing unit 800 is further configured to:
if multiple targets in the former frame and one target in the latter frame satisfy the position condition and the feature condition, determine that, among the multiple targets in the former frame, the target whose ratio of overlapping area to union area with the target in the latter frame is the largest is the same target as the target in the latter frame; or
if multiple targets in the latter frame and one target in the former frame satisfy the position condition and the feature condition, determine that, among the multiple targets in the latter frame, the target whose ratio of overlapping area to union area with the target in the former frame is the largest is the same target as the target in the former frame.
Optionally, the processing unit 800 is further configured to:
if a target in the latter frame and none of the targets in the former frame are the same target, treat the target in the latter frame as a new target, and place the coordinate information of the new target into a coordinate set corresponding to the new target.
Optionally, the processing unit 800 is specifically configured to:
detect, according to the coordinate set corresponding to the same target, whether the behavior of the same target in the consecutive frames is intrusion behavior or tripwire crossing behavior.
Optionally, the processing unit 800 is specifically configured to:
judge whether the intrusion point coordinate of the same target in the current frame is located inside the warning zone;
if so, determine that the behavior of the same target in the consecutive frames is intrusion behavior;
if not, determine that the behavior of the same target in the consecutive frames is not intrusion behavior.
Optionally, the processing unit 800 is further configured to determine the intrusion point coordinate of the same target in the current frame in the following manner:
determining, from the coordinate set corresponding to the same target, the center-point coordinates of the same target in the current frame and in the N consecutive frames before the current frame, where N is a positive integer;
taking the average of the center-point coordinates of the same target in the current frame and in the N consecutive frames before the current frame as the intrusion point coordinate of the same target in the current frame.
Optionally, the processing unit 800 is specifically used for:
The line that will be stumbled using in current frame image is that cornerwise rectangle is stumbled outer section of rectangle of line described in;
When stumble line point coordinate and the same target of the same target in current frame image are worked as described Line point coordinate is stumbled all in the outer section of rectangle in the previous frame image of prior image frame, and the same target is worked as described Line point coordinate of stumbling in prior image frame is located at described stumble with the line point coordinate of stumbling in the previous frame image of the current frame image When line two sides, determine that the behavior of the same target in the continuous multiple frames image is line behavior of stumbling.
Optionally, the processing unit 800 is also used to determine, in the following manner, the tripwire point coordinate of the same target in an image:
Determine, according to the coordinate set corresponding to the same target, the center point coordinates of the same target in the current frame image and in the M consecutive frame images preceding the current frame image, where M is a positive integer;
Take the average of the center point coordinates of the same target in the current frame image and in the M consecutive frame images preceding the current frame image as the tripwire point coordinate of the same target in the current frame image.
Based on the same inventive concept, an embodiment of the present invention further provides a device for target behavior detection. Since this device is the device used in the method of the embodiment of the present invention and solves the problem on a similar principle, the implementation of the device may refer to the implementation of the method, and repeated description is omitted.
An embodiment of the present invention also provides a computer-readable non-volatile storage medium including program code; when the program code runs on a computing terminal, the program code causes the computing terminal to execute the steps of the target behavior detection method of the embodiments of the present invention described above.
The present application is described above with reference to block diagrams and/or flowcharts of methods, apparatus (systems) and/or computer program products according to the embodiments of the present application. It should be understood that each block of the block diagrams and/or flowcharts, and combinations of blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer and/or other programmable data processing apparatus to produce a machine, such that the instructions executed via the computer processor and/or other programmable data processing apparatus create means for implementing the functions/acts specified in the block diagram and/or flowchart blocks.
Correspondingly, the present application may also be implemented in hardware and/or software (including firmware, resident software, microcode, etc.). Further, the present application may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium, for use by or in connection with an instruction execution system. In the context of the present application, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate or transmit the program for use by, or in connection with, an instruction execution system, apparatus or device.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these modifications and variations.

Claims (10)

1. A method of target behavior detection, characterized in that the method comprises:
detecting, by a target detection model, collected multi-frame images to obtain location information of targets in the images;
after determining, according to the location information of the targets in consecutive multi-frame images and the gradient feature vectors of the targets in the consecutive multi-frame images, that the consecutive multi-frame images contain the same target, placing the coordinate information of the same target in the consecutive multi-frame images into a coordinate set corresponding to the same target;
performing behavior detection on the same target in the consecutive multi-frame images according to the coordinate set corresponding to the same target.
2. The method according to claim 1, characterized in that the consecutive multi-frame images are determined to contain the same target in the following manner:
for any two adjacent frame images, if it is determined, according to the location information of a target in the previous frame image and the location information of a target in the later frame image, that the two targets satisfy a locality condition, and it is determined, according to the gradient feature vector of the target in the previous frame image and the gradient feature vector of the target in the later frame image, that the two targets satisfy a feature condition, then determining that the two targets are the same target.
3. The method according to claim 2, characterized in that the locality condition includes some or all of the following:
the distance between the center point abscissa in the location information of the target in the previous frame image and the center point abscissa in the location information of the target in the later frame image is not greater than half of the width in the location information of the target in the previous frame image;
the distance between the center point ordinate in the location information of the target in the previous frame image and the center point ordinate in the location information of the target in the later frame image is not greater than half of the height in the location information of the target in the previous frame image;
the ratio of the overlapping area of the target in the previous frame image and the target in the later frame image to the sum of their areas is not less than an area threshold;
and the feature condition is:
the cosine of the angle between the gradient feature vector of the target in the previous frame image and the gradient feature vector of the target in the later frame image is not less than a feature threshold.
4. The method according to claim 1, characterized in that the method further comprises:
if multiple targets in the previous frame image and a target in the later frame image satisfy the locality condition and the feature condition, determining that, among the multiple targets in the previous frame image, the target whose ratio of overlapping area with the target in the later frame image to the sum of their areas is the largest and the target in the later frame image are the same target; or
if multiple targets in the later frame image and a target in the previous frame image satisfy the locality condition and the feature condition, determining that, among the multiple targets in the later frame image, the target whose ratio of overlapping area with the target in the previous frame image to the sum of their areas is the largest and the target in the previous frame image are the same target.
5. The method according to claim 2, characterized in that the method further comprises:
if the target in the later frame image and the target in the previous frame image are not the same target, taking the target in the later frame image as a new target, and placing the coordinate information of the new target into the coordinate set corresponding to the new target.
6. The method according to claim 1, characterized in that performing behavior detection on the same target in the consecutive multi-frame images according to the coordinate set corresponding to the same target comprises:
judging whether the intrusion point coordinate of the same target in the current frame image is located in the warning region;
if so, determining that the behavior of the same target in the consecutive multi-frame images is intrusion behavior;
if not, determining that the behavior of the same target in the consecutive multi-frame images is not intrusion behavior.
7. The method according to claim 6, characterized in that the intrusion point coordinate of the same target in the current frame image is determined in the following manner:
determining, according to the coordinate set corresponding to the same target, the center point coordinates of the same target in the current frame image and in the N consecutive frame images preceding the current frame image, where N is a positive integer;
taking the average of the center point coordinates of the same target in the current frame image and in the N consecutive frame images preceding the current frame image as the intrusion point coordinate of the same target in the current frame image.
8. The method according to claim 1, characterized in that performing behavior detection on the same target in the consecutive multi-frame images according to the coordinate set corresponding to the same target comprises:
taking the rectangle whose diagonal is the tripwire in the current frame image as the circumscribed rectangle of the tripwire;
when the tripwire point coordinate of the same target in the current frame image and the tripwire point coordinate of the same target in the frame image preceding the current frame image are both located in the circumscribed rectangle, and the two tripwire point coordinates are located on opposite sides of the tripwire, determining that the behavior of the same target in the consecutive multi-frame images is tripwire-crossing behavior.
9. The method according to claim 8, characterized in that the tripwire point coordinate of the same target in an image is determined in the following manner:
determining, according to the coordinate set corresponding to the same target, the center point coordinates of the same target in the current frame image and in the M consecutive frame images preceding the current frame image, where M is a positive integer;
taking the average of the center point coordinates of the same target in the current frame image and in the M consecutive frame images preceding the current frame image as the tripwire point coordinate of the same target in the current frame image.
10. A device for target behavior detection, characterized in that the device comprises at least one processing unit and at least one storage unit, wherein the storage unit stores program code, and when the program code is executed by the processing unit, the device is caused to execute the method according to any one of claims 1 to 9.
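By way of illustration only, the claimed steps can be read together as the simplified processing loop below, which reuses the helper functions sketched earlier in this description; the detector call detect_targets, the smoothing window n_smooth and all threshold values are assumptions made for this sketch and are not part of the claims.

```python
from itertools import count

def process_video(frames, warning_region, wire_a, wire_b, n_smooth=3):
    """Sketch of claims 1, 6 and 8 read together: detect targets per frame,
    associate the same target across adjacent frames, accumulate its center
    coordinates, then run the intrusion and tripwire checks."""
    next_id = count()
    coordinate_sets = {}        # target id -> list of (cx, cy) center points
    prev_detections = []        # (target_id, box, feature) from the previous frame
    alerts = []
    for frame_index, frame in enumerate(frames):
        detections = detect_targets(frame)           # hypothetical detector call
        current = []
        for box, feature in detections:
            matched = next((tid for tid, pbox, pfeat in prev_detections
                            if satisfies_locality_condition(pbox, box)
                            and satisfies_feature_condition(pfeat, feature)), None)
            target_id = matched if matched is not None else next(next_id)
            coordinate_sets.setdefault(target_id, []).append((box[0], box[1]))
            current.append((target_id, box, feature))
            centers = coordinate_sets[target_id]
            point = smoothed_point(centers, n_smooth)
            if is_intrusion(point, warning_region):
                alerts.append((frame_index, target_id, "intrusion"))
            if len(centers) >= 2:
                prev_point = smoothed_point(centers[:-1], n_smooth)
                if crossed_tripwire(point, prev_point, wire_a, wire_b):
                    alerts.append((frame_index, target_id, "tripwire"))
        prev_detections = current
    return alerts
```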
CN201910055397.7A 2019-01-21 2019-01-21 A kind of method and apparatus of goal behavior detection Pending CN109886117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910055397.7A CN109886117A (en) 2019-01-21 2019-01-21 A kind of method and apparatus of goal behavior detection

Publications (1)

Publication Number Publication Date
CN109886117A true CN109886117A (en) 2019-06-14

Family

ID=66926440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910055397.7A Pending CN109886117A (en) 2019-01-21 2019-01-21 A kind of method and apparatus of goal behavior detection

Country Status (1)

Country Link
CN (1) CN109886117A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130132311A1 (en) * 2011-11-18 2013-05-23 Honeywell International Inc. Score fusion and training data recycling for video classification
US20140241619A1 (en) * 2013-02-25 2014-08-28 Seoul National University Industry Foundation Method and apparatus for detecting abnormal movement
CN103473528A (en) * 2013-07-09 2013-12-25 深圳市中瀛鑫科技股份有限公司 Detection method and device for tripwire crossing of object and video monitoring system
CN103577804A (en) * 2013-10-21 2014-02-12 中国计量学院 Abnormal human behavior identification method based on SIFT flow and hidden conditional random fields
CN107145894A (en) * 2017-03-13 2017-09-08 中山大学 A kind of object detection method based on direction gradient feature learning
CN108229407A (en) * 2018-01-11 2018-06-29 武汉米人科技有限公司 A kind of behavioral value method and system in video analysis
CN108764100A (en) * 2018-05-22 2018-11-06 全球能源互联网研究院有限公司 A kind of goal behavior detection method and server

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Yuming: "Research on Pedestrian Abnormal Behavior Detection in Intelligent Surveillance Systems", China Master's Theses Full-text Database, Information Science and Technology Series *
Zhao Qian et al.: "Intelligent Video Image Processing Technology and Applications", 30 September 2016, Xidian University Press *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276751A (en) * 2019-06-17 2019-09-24 北京字节跳动网络技术有限公司 Determine method, apparatus, electronic equipment and the computer readable storage medium of image parameter
CN110428390B (en) * 2019-07-18 2022-08-26 北京达佳互联信息技术有限公司 Material display method and device, electronic equipment and storage medium
CN110428390A (en) * 2019-07-18 2019-11-08 北京达佳互联信息技术有限公司 A kind of material methods of exhibiting, device, electronic equipment and storage medium
US11521368B2 (en) 2019-07-18 2022-12-06 Beijing Dajia Internet Information Technology Co., Ltd. Method and apparatus for presenting material, and storage medium
CN110674703A (en) * 2019-09-05 2020-01-10 北京正安维视科技股份有限公司 Video tripwire alarm counting method and flow in intelligent monitoring
CN110765940A (en) * 2019-10-22 2020-02-07 杭州姿感科技有限公司 Target object statistical method and device
CN111860559A (en) * 2019-12-31 2020-10-30 滴图(北京)科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112449093A (en) * 2020-11-05 2021-03-05 北京德火科技有限责任公司 Three-dimensional panoramic video fusion monitoring platform
CN112735163A (en) * 2020-12-25 2021-04-30 北京百度网讯科技有限公司 Method for determining static state of target object, road side equipment and cloud control platform
CN112885015A (en) * 2021-01-22 2021-06-01 深圳市奔凯安全技术股份有限公司 Regional intrusion detection method, system, storage medium and electronic equipment
CN114495394A (en) * 2021-12-15 2022-05-13 煤炭科学研究总院有限公司 Multi-stage anti-intrusion method and device and storage medium
KR102591483B1 (en) * 2022-11-16 2023-10-19 주식회사우경정보기술 Apparatus and method for spatiotemporal neural network-based labeling for building anomaly data set
KR102613370B1 (en) * 2022-11-16 2023-12-13 주식회사우경정보기술 Labeling methods and apparatus including object detection, object tracking, behavior identification, and behavior detection

Similar Documents

Publication Publication Date Title
CN109886117A (en) A kind of method and apparatus of goal behavior detection
CN104680555B (en) Cross the border detection method and out-of-range monitoring system based on video monitoring
Ko et al. Modeling and formalization of fuzzy finite automata for detection of irregular fire flames
CN104902265B (en) A kind of video camera method for detecting abnormality and system based on background edge model
CN101167086A (en) Human detection and tracking for security applications
Zin et al. Unattended object intelligent analyzer for consumer video surveillance
EA018349B1 (en) Method for video analysis
CN113223046B (en) Method and system for identifying prisoner behaviors
Alshammari et al. Intelligent multi-camera video surveillance system for smart city applications
Zin et al. A Markov random walk model for loitering people detection
Ezzahout et al. Conception and development of a video surveillance system for detecting, tracking and profile analysis of a person
Alqaysi et al. Detection of abnormal behavior in dynamic crowded gatherings
CN111126153A (en) Safety monitoring method, system, server and storage medium based on deep learning
CN114495011A (en) Non-motor vehicle and pedestrian illegal intrusion identification method based on target detection, storage medium and computer equipment
Wang et al. Traffic camera anomaly detection
US10748011B2 (en) Method, device and system for detecting a loitering event
Khan et al. Comparative study of various crowd detection and classification methods for safety control system
CN111753587B (en) Ground falling detection method and device
CN111144260A (en) Detection method, device and system of crossing gate
JP2008152586A (en) Automatic identification monitor system for area surveillance
CN115410153A (en) Door opening and closing state judging method and device, electronic terminal and storage medium
CN111985331B (en) Detection method and device for preventing trade secret from being stolen
Peker et al. Real-time motion-sensitive image recognition system
Szwoch et al. A framework for automatic detection of abandoned luggage in airport terminal
Vaishnavi et al. Implementation of Abnormal Event Detection using Automated Surveillance System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190614)