CN103793920A - Retrograde motion detection method based on video and system thereof - Google Patents
Retrograde motion detection method based on video and system thereof
- Publication number: CN103793920A (application CN201210419365.9)
- Authority: CN (China)
- Prior art keywords: optical flow, video, moving region, corner point, feature
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Image Analysis (AREA)
Abstract
The invention relates to the field of image understanding in computer vision and discloses a video-based retrograde (wrong-way) motion detection method and system. Corner points are extracted from a frame-difference image, corner optical-flow tracking is performed in the original image, and the occurrence of retrograde motion is judged from the motion patterns decomposed out of the optical-flow feature vectors, which improves the robustness of retrograde detection and effectively reduces the probability of missed detections and false alarms in complex scenes. Different motion patterns are obtained for the moving regions in the video by an iterative clustering algorithm, so that the motion patterns corresponding to different moving objects in a complex scene can be effectively extracted, facilitating retrograde-detection analysis. A pyramid-layering scheme allows image-block movement over a larger range to be handled, suiting the method to retrograde-detection application scenarios.
Description
Technical field
The present invention relates to the field of image understanding in computer vision, and in particular to a video-based retrograde motion detection technique.
Background art
Video-based retrograde detection belongs to the detection of abnormal crowd behavior. In today's society, safety has become a problem of growing public concern, and crowd behaviors such as gathering, fighting, and movement-trajectory trends have gradually become new topics for computer vision researchers. Crowd-behavior analysis offers good guidance for public-safety departments. For example, the public security department can use video techniques to detect crowd gathering or fighting and prevent escalation into riots; the traffic department can use statistics on crowd trajectories in a region over a period of time to guide policy, such as increasing traffic-police presence or diverting traffic at congested locations. Retrograde detection in crowds likewise has its application scenarios: at large events such as the hajj in Saudi Arabia, people not following the predetermined routes has caused crowd-trampling tragedies, while in supermarkets or on subway escalators, moving against the flow is not merely a lapse of public morality but also a potential danger.
In the prior art, abnormal behavior is mostly detected from the movement trajectories of human bodies or human joints. Such tracking-based methods often work well in simple scenes, but in complex scenes with high crowd density and many people, tracking fails, bringing with it a large number of missed detections and false alarms.
Summary of the invention
The object of the present invention is to provide a video-based retrograde detection method and system that improve the robustness of retrograde detection and effectively reduce the probability of missed detections and false alarms in complex scenes.
To solve the above technical problem, embodiments of the present invention disclose a video-based retrograde detection method, comprising the following steps:
computing the difference between corresponding pixels in the original images of adjacent frames of a video, and binarizing the difference to obtain a frame-difference image;
extracting moving regions from the frame-difference image;
selecting corner points in the frame-difference image;
using the selected corner points as initial optical-flow tracking points and performing corner optical-flow tracking in the original video images to obtain optical-flow feature vectors;
obtaining the correspondence between moving regions and optical-flow feature vectors according to the positional relationship between the initial tracking points and the moving regions;
for each moving region, decomposing motion patterns from the optical-flow feature vectors in that region;
judging whether retrograde motion has occurred according to the magnitude and direction of the average velocity of each motion pattern.
Embodiments of the present invention also disclose a video-based retrograde detection system, comprising:
a frame-difference image acquisition unit, for computing the difference between corresponding pixels in the original images of adjacent video frames and binarizing the difference to obtain a frame-difference image;
a moving-region extraction unit, for extracting moving regions from the frame-difference image obtained by the frame-difference image acquisition unit;
a corner selection unit, for selecting corner points in the frame-difference image obtained by the frame-difference image acquisition unit;
an optical-flow tracking unit, for using the corner points selected by the corner selection unit as initial optical-flow tracking points, performing corner optical-flow tracking in the original video images, and obtaining optical-flow feature vectors;
a motion-information acquisition unit, for obtaining the correspondence between moving regions and optical-flow feature vectors according to the positional relationship between the initial optical-flow tracking points and the moving regions;
a motion-pattern decomposition unit, for decomposing, for each moving region, motion patterns from the optical-flow feature vectors in that region;
a retrograde detection unit, for judging whether retrograde motion has occurred according to the magnitude and direction of the average velocity of each motion pattern decomposed by the motion-pattern decomposition unit.
Compared with the prior art, the main differences and effects of the embodiments of the present invention are as follows:
Corner points are selected in the frame-difference image, corner optical-flow tracking is performed in the original image, motion patterns are decomposed from the optical-flow feature vectors, and retrograde motion is judged accordingly. This improves the robustness of retrograde detection and effectively reduces the probability of missed detections and false alarms in complex scenes.
Further, different motion patterns are obtained for the moving regions in the video by an iterative clustering algorithm, so that the motion patterns corresponding to different moving objects in a complex scene can be effectively extracted, facilitating retrograde-detection analysis.
Further, the random sampling of optical-flow feature vectors used in RANSAC to solve the geometric transformation model is replaced by traversing all optical-flow feature vectors, which avoids random-number generation.
Further, adopting a pyramid-layering scheme allows image-block movement over a larger range to be handled, suiting the method to retrograde-detection application scenarios.
Further, the camera height should be moderate: if the camera is too low, a person occupies too large an area and false alarms become likely; if it is too high, missed detections become likely. The focal length should not be too long, or a person appears too large in the field of view and false alarms become likely. The camera angle should be as close to vertical as possible, and the depth of field within the view should not be excessive; this reduces the influence of perspective effects and eases the setting of algorithm thresholds.
Brief description of the drawings
Fig. 1 is a flow diagram of a video-based retrograde detection method in the first embodiment of the present invention;
Fig. 2 is a flow diagram of a video-based retrograde detection method in the second embodiment of the present invention;
Fig. 3 is a structural diagram of a video-based retrograde detection system in the third embodiment of the present invention.
Detailed description
In the following description, many technical details are set forth so that the reader may better understand the present application. However, those of ordinary skill in the art will appreciate that the technical solutions claimed in the claims of the present application can be realized even without these technical details, and with many variations and modifications of the following embodiments.
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
The first embodiment of the present invention relates to a video-based retrograde detection method. Fig. 1 is a flow diagram of this method.
Specifically, as shown in Fig. 1, the video-based retrograde detection method comprises the following steps:
In step 101, the difference between corresponding pixels in the original images of adjacent frames of the video is computed, and the difference is binarized to obtain a frame-difference image.
The difference is taken between corresponding pixels of two adjacent frames (or of several frames) as shown in formula (1), where D(t) is the difference image, I(t) is the current frame, and I(t-1) is the previous frame. By setting a threshold, the moving pixels, and hence the moving regions, are obtained. There are in fact many ways to obtain moving regions, including background modeling and optical-flow methods. The present invention adopts the frame-difference method because the method and system proposed in this application are intended for complex scenes, in which background modeling fails; optical-flow methods, particularly per-pixel optical flow, are often not adopted in practical engineering because of their large computational cost. The frame-difference threshold must also be chosen appropriately: too small a value is sensitive to noise, while too large a value easily fragments the true moving regions. In this embodiment, based on extensive tests on real scenes, the default value is 7. This is of course a preferred value; in other embodiments of the present invention, the default may be another value.
D(t) = I(t) - I(t-1)    (1)
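The frame-difference step can be sketched as follows. This is a minimal plain-Python illustration, not the patent's implementation; frames are assumed to be 2-D lists of grayscale values, and the binarization threshold defaults to the value 7 mentioned above.

```python
def frame_difference(prev, curr, threshold=7):
    """Binarized frame difference per formula (1): pixels whose absolute
    difference exceeds `threshold` are marked 1 (moving), others 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Toy 4x4 frames: a bright 2x1 block moves from column 1 to column 2.
prev = [[0, 50, 0, 0],
        [0, 50, 0, 0],
        [0,  0, 0, 0],
        [0,  0, 0, 0]]
curr = [[0, 0, 50, 0],
        [0, 0, 50, 0],
        [0, 0,  0, 0],
        [0, 0,  0, 0]]
mask = frame_difference(prev, curr)   # rows 0-1 become [0, 1, 1, 0]
```

Both the vacated and the newly occupied columns light up in the mask, which is exactly why the patent extracts corners from this image: they concentrate on moving object boundaries.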
Step 102 then follows: moving regions are extracted from the frame-difference image.
Further, step 102 comprises the following sub-steps:
projecting the pixels of the frame-difference image in both directions to obtain a horizontal projection histogram and a vertical projection histogram;
applying adaptive thresholding to the two histograms to obtain the moving regions.
Adaptive thresholding is introduced in "A New Method for Automatic Segmentation of Image Sequences" by Yuan Jiwei and Shi Zhongke, Computer Engineering and Applications, issue 29, 2004, and is not elaborated here.
Specifically, the bidirectional projection is computed as shown in formulas (2) and (3), where C and R are the projection histograms, M is the image width, N is the image height, and P is the frame-difference image; adaptive thresholding of the histograms then yields the frame-difference segmentation. Connected-component labeling could of course also be used; it is not used in this application mainly because the frame-difference foreground can fragment in low-contrast scenes, so a complete region might not be obtained. A further benefit of extracting moving regions is that, compared with placing optical-flow initial points over the entire image, placing them only inside the moving regions reduces the computational cost and improves detection efficiency.
C = {c_1, c_2, ..., c_M}, where c_j = Σ_{i=1}^{N} P(i, j)    (2)
R = {r_1, r_2, ..., r_N}, where r_i = Σ_{j=1}^{M} P(i, j)    (3)
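The projections of formulas (2) and (3) and the thresholding of the histograms can be sketched as follows. Note that the fixed-threshold run finder is a stand-in for the adaptive thresholding of the cited paper; that simplification is an assumption of this sketch.

```python
def projections(mask):
    """Column (C) and row (R) projection histograms of a binary
    frame-difference image, per formulas (2) and (3)."""
    n, m = len(mask), len(mask[0])   # N rows (height), M columns (width)
    c = [sum(mask[i][j] for i in range(n)) for j in range(m)]
    r = [sum(row) for row in mask]
    return c, r

def segments(hist, thresh):
    """Runs of histogram bins above `thresh` (a fixed stand-in for the
    adaptive threshold); each run bounds a moving region in one axis."""
    runs, start = [], None
    for i, v in enumerate(hist):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(hist) - 1))
    return runs

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
c, r = projections(mask)   # c = [0, 2, 2, 0], r = [2, 2, 0]
cols = segments(c, 0)      # [(1, 2)]
rows = segments(r, 0)      # [(0, 1)]
# Intersecting a column run with a row run gives a moving region's box.
```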
Step 103 then follows: corner points are selected in the frame-difference image.
Preferably, in this embodiment, Harris corner points are selected as the initial optical-flow tracking points.
As mentioned above, the frame-difference image serves two purposes: on the one hand it is used to extract moving regions, and on the other it provides the basis for setting the initial optical-flow tracking points. For Lucas-Kanade optical-flow tracking, feature points in texture-rich regions are generally tracked better, and the usual practice is to select Harris corner points as the initial tracking points. As shown in J. Shi and C. Tomasi, "Good Features to Track," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994, the corner-selection criterion coincides with the accuracy criterion of Lucas-Kanade optical-flow tracking. For a given point (x_k, y_k) we define the autocorrelation function
E(Δx, Δy) = Σ_W [I(x_k + Δx, y_k + Δy) - I(x_k, y_k)]²    (4)
summed over a window W. Expanding I(x_k + Δx, y_k + Δy) by Taylor's formula,
I(x_k + Δx, y_k + Δy) ≈ I(x_k, y_k) + I_x Δx + I_y Δy    (5)
we obtain
E(Δx, Δy) ≈ [Δx Δy] A(x, y) [Δx Δy]^T    (6)
where the size of the eigenvalues of the matrix A(x, y) serves both as the corner-selection criterion and as the Lucas-Kanade tracking-accuracy criterion.
However, we extract the Harris corners not from the original image but from the frame-difference image. The main purpose is that, while keeping the optical-flow tracking accurate, the optical-flow points should reflect the motion information of the objects as fully as possible, which requires the points to cover the moving regions fairly uniformly. Experience shows that selecting corners on the frame-difference image as initial tracking points meets this requirement. In our application, preferably, 100 corner points are extracted as initial corner optical-flow tracking points.
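The eigen-structure criterion of matrix A(x, y) can be illustrated with the classic Harris response det(A) - k·trace(A)², which is large only when both eigenvalues are large. The window size, the constant k = 0.04, and the central-difference gradients below are conventional choices, not values taken from the patent.

```python
def corner_response(img, x, y, win=1, k=0.04):
    """Harris-style response det(A) - k * trace(A)^2 at (x, y), where
    A = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]] is accumulated
    over a (2*win+1)^2 window with central-difference gradients."""
    sxx = sxy = syy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# Bright square occupying rows/columns 3..7 of an 8x8 image.
img = [[100 if i >= 3 and j >= 3 else 0 for i in range(8)] for j in range(8)]
print(corner_response(img, 3, 3) > 0)   # True: two gradient directions meet
print(corner_response(img, 3, 5) > 0)   # False: a straight vertical edge
```

The square's corner gives a positive response (both eigenvalues large), while a point on the straight edge gives a negative one (one eigenvalue near zero), matching the selection rule described above.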
Step 104 then follows: the selected corner points are used as initial optical-flow tracking points, corner optical-flow tracking is performed in the original video images, and optical-flow feature vectors are obtained.
Preferably, the Lucas-Kanade optical-flow algorithm is adopted in step 104.
The Lucas-Kanade algorithm is a classic sparse optical-flow algorithm. Its basic idea is to find, by iteration, the position in the second frame of an image block (feature point) from the first frame under a least-squares criterion.
Alternatively, the Horn-Schunck method, the Buxton-Buxton method, the Black-Jepson method, or general variational methods, among others, may be adopted.
Specifically, we adopt Lucas-Kanade optical flow, a kind of sparse optical flow. The pixels within the image block may be given different weights; for convenience of computation, in this application all pixels in the block have equal weight. Some improved optical-flow algorithms additionally consider translation, rotation, scaling, and other variations of the image block; since our application matches two consecutive frames, only the translation of the image block need be considered. We also use a pyramid-layering scheme, keeping 1/4-size and 1/16-size copies of the original image: each iteration of the feature-point search starts at the bottom (1/16) level, continues at the middle (1/4) level, and finally finds the matching block in the original image. A benefit of this is that image-block movement over a larger range can be handled, suiting the method to retrograde-detection application scenarios.
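A single-level, single-iteration Lucas-Kanade step for the translation-only model can be sketched as follows; the patent's version adds the 1/4 and 1/16 pyramid levels and iterates to convergence, which this minimal sketch omits.

```python
def lk_translation(prev, curr, x, y, win=2):
    """One Lucas-Kanade step for a pure-translation model: solves the
    2x2 least-squares system A d = b, where A is built from spatial
    gradients of `prev` and b from the temporal difference to `curr`."""
    sxx = sxy = syy = bx = by = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (prev[j][i + 1] - prev[j][i - 1]) / 2.0
            iy = (prev[j + 1][i] - prev[j - 1][i]) / 2.0
            it = curr[j][i] - prev[j][i]
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            bx -= ix * it; by -= iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:        # untextured patch: flow is underdetermined
        return None
    # Cramer's rule for the 2x2 system [[sxx, sxy], [sxy, syy]] d = (bx, by).
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# A textured surface I(x, y) = x * y shifted right by exactly one pixel.
prev = [[i * j for i in range(9)] for j in range(9)]
curr = [[(i - 1) * j for i in range(9)] for j in range(9)]
d = lk_translation(prev, curr, 4, 4)   # recovers the flow (1.0, 0.0)
```

The degenerate-determinant check is the same condition as the corner criterion above: patches without two strong gradient directions cannot be tracked, which is why corners are chosen as initial points.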
Step 105 then follows: the correspondence between moving regions and optical-flow feature vectors is obtained according to the positional relationship between the initial optical-flow tracking points and the moving regions.
The main function of step 105 is to associate moving regions with optical-flow feature vectors; that is, given a moving region, we must determine which initial optical-flow tracking points fall inside it. Specifically, all optical-flow initial points are traversed. For each initial point with coordinates (x, y), and a (rectangular) moving region with corner coordinates (x_left, y_top), (x_right, y_top), (x_left, y_bottom), and (x_right, y_bottom), formula (7) tests whether the point lies inside the rectangle. After this step, we have the correspondence (a one-to-many relation) between each moving region and its optical-flow initial points, and hence the motion information of each moving region. As noted earlier, this motion information is sampled, because the optical flow used is sparse. In fact, not only retrograde detection but also violent-motion detection and similar tasks must first obtain the motion information of the region; the methods differ only in how that motion information is subsequently used.
x_left < x < x_right,  y_top < y < y_bottom    (7)
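The one-to-many mapping built from formula (7) can be sketched as follows; representing regions as (x_left, y_top, x_right, y_bottom) tuples is an assumption of this illustration.

```python
def assign_points_to_regions(points, regions):
    """Map each optical-flow initial point to the moving region that
    contains it, per the test of formula (7).  `regions` are
    (x_left, y_top, x_right, y_bottom) boxes; returns a one-to-many
    dict {region_index: [point_index, ...]}."""
    mapping = {k: [] for k in range(len(regions))}
    for p, (x, y) in enumerate(points):
        for k, (xl, yt, xr, yb) in enumerate(regions):
            if xl < x < xr and yt < y < yb:
                mapping[k].append(p)
                break   # a point lies in at most one (disjoint) region
    return mapping

regions = [(0, 0, 10, 10), (20, 0, 30, 10)]
points = [(5, 5), (25, 3), (15, 5), (21, 9)]
m = assign_points_to_regions(points, regions)
# m == {0: [0], 1: [1, 3]}; point 2 lies in no region and is dropped.
```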
Step 106 then follows: for each moving region, motion patterns are decomposed from the optical-flow feature vectors in that region.
As mentioned above, retrograde motion may be the behavior of a crowd as a whole, or a local behavior within the crowd. Motion-pattern decomposition is in essence a "divide and conquer" idea: the different moving objects in a moving region are first separated out (on the assumption that they have different motion patterns), and once the motion patterns are extracted, each is analyzed separately. This idea of "decomposition" is applied in many different fields, including the Fourier transform and principal component analysis (PCA).
Step 107 then follows: whether retrograde motion has occurred is judged from the magnitude and direction of the average velocity of each motion pattern.
Having obtained the magnitude and direction of the average velocity of each motion pattern, and velocity being a vector, we need only compare the velocity direction of the pattern with the normal-motion direction angle set by the user. If the difference between the two angles exceeds a right angle (90 degrees), and the magnitude of the pattern's velocity exceeds a threshold, retrograde behavior is considered to have occurred.
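The decision rule can be sketched as follows; the normal-direction angle and the speed threshold are operator-set values, as described above.

```python
import math

def is_retrograde(mean_vx, mean_vy, normal_angle_deg, speed_thresh=1.0):
    """Step 107's rule: retrograde if the pattern's mean velocity
    differs from the configured normal direction by more than 90
    degrees AND its magnitude exceeds the operator-set threshold."""
    speed = math.hypot(mean_vx, mean_vy)
    angle = math.degrees(math.atan2(mean_vy, mean_vx))
    diff = abs(angle - normal_angle_deg) % 360
    diff = min(diff, 360 - diff)   # smallest angle between directions
    return diff > 90 and speed > speed_thresh

# Normal flow points right (0 degrees); a pattern moving left at speed 3
# is flagged, the same speed rightward is not, and a slow leftward drift
# is ignored as unreliable.
print(is_retrograde(-3.0, 0.0, 0.0))   # True
print(is_retrograde(3.0, 0.0, 0.0))    # False
print(is_retrograde(-0.5, 0.0, 0.0))   # False
```

The speed threshold is what keeps near-stationary patterns (e.g. noise or a standing crowd) from triggering alarms even when their noisy direction estimate happens to point backward.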
The process then ends.
Corner points are selected in the frame-difference image, corner optical-flow tracking is performed in the original image, motion patterns are decomposed from the optical-flow feature vectors, and retrograde motion is judged accordingly. This improves the robustness of retrograde detection and effectively reduces the probability of missed detections and false alarms in complex scenes.
Second embodiment of the invention relates to a kind of retrograde detection method based on video.
The second embodiment improves on the first; the main improvements are as follows.
Step 106, in which motion patterns are decomposed from the optical-flow feature vectors in each moving region, comprises the following sub-steps:
A. Computing the translation parameters of a geometric transformation model from the optical-flow feature vectors in the moving region, i.e., from the initial tracking points and their corresponding tracked points.
Preferably, in this embodiment, the geometric transformation model is an affine transformation model.
The affine transformation is a kind of geometric transformation and a parametric model. Its principal characteristic is that it preserves the parallelism of straight lines in an image: a parallelogram in one image remains a parallelogram in the other.
Besides the affine transformation, a translation transformation, a linear transformation, or a projective transformation, among others, may also be used.
B. Applying the obtained geometric transformation model to the other optical-flow feature vectors and counting the number of vectors that satisfy the model, where "satisfy" means that the point predicted from an initial tracking point by the model is sufficiently close to the point obtained by corner optical-flow tracking.
C. Obtaining the first geometric transformation model, namely the model satisfied by the maximum number of optical-flow feature vectors; the vectors satisfying this first model form the first motion pattern.
Preferably, the geometric transformation model is solved by traversing all optical-flow feature vectors.
Here, instead of randomly sampling optical-flow feature vectors to solve the geometric transformation model, all optical-flow feature vectors are traversed, which avoids random-number generation.
Among the optical-flow feature vectors that do not belong to the first motion pattern, steps A, B, and C are repeated to obtain a second geometric transformation model; the vectors satisfying it form the second motion pattern.
Among the optical-flow feature vectors that belong to neither the first nor the second motion pattern, steps A, B, and C are repeated to obtain a third geometric transformation model; the vectors satisfying it form the third motion pattern.
Fig. 2 is a flow diagram of this motion-pattern decomposition process.
Specifically, in step 201 (step 105 of the first embodiment), the correspondence between moving regions and optical-flow feature vectors is obtained according to the positional relationship between the initial optical-flow tracking points and the moving regions.
Step 202 then follows: the iterative RANSAC algorithm is executed.
Step 203 then follows: motion pattern 1, motion pattern 2, and motion pattern 3 are decomposed.
The process then ends.
In addition, it will be appreciated that a moving region may contain several moving objects, and hence translational motions of several different speeds and directions; the present invention extracts the three principal motion patterns. Of course, if at some iteration the number of remaining optical-flow feature vectors falls below a threshold, the next motion pattern is not decomposed.
In summary, in this application motion-pattern decomposition uses the iterative RANSAC method: different motion patterns are obtained for the moving regions in the video by iterative clustering, so that the motion patterns corresponding to different moving objects in a complex scene can be effectively extracted, facilitating retrograde-detection analysis.
Besides RANSAC, clustering algorithms such as mean-shift and k-means may also be adopted in the present invention to decompose the motion patterns.
The RANSAC method is now elaborated.
RANSAC is the abbreviation of "Random Sample Consensus". It estimates the parameters of a mathematical model, by iteration, from a set of observed data that contains outliers. It is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, and the probability rises as the number of iterations increases. The algorithm was first proposed by Fischler and Bolles in 1981.
The basic assumptions of RANSAC are:
(1) the data consist of "inliers", i.e., data whose distribution can be explained by some set of model parameters;
(2) "outliers" are data that do not fit the model;
(3) the remaining data are noise.
Outliers arise from, for example, extreme noise values, erroneous measurement methods, or false assumptions about the data.
RANSAC also assumes that, given a (usually small) set of inliers, there exists a procedure that can estimate the model parameters, and that the inliers can be explained by, or fit, that model.
We suppose a moving region contains three motion patterns, and we select the affine transformation model shown in formula (8): the optical-flow initial point (x_i, y_i) in one frame and its tracked point (x_{i+1}, y_{i+1}) in the next frame (together forming an optical-flow feature vector) satisfy
(x_{i+1}, y_{i+1}) = S · R(α) · (x_i, y_i) + (Δx, Δy)    (8)
where S is the scaling coefficient, R(α) the rotation through angle α, and (Δx, Δy) the translation coefficients. Each feature vector then falls into one of three cases: it belongs to one of the three motion patterns; it belongs to none of the three assumed patterns; or it is an "outlier" produced by a failed optical-flow track. Since the retrograde-detection application concerns local image motion, the simplified translational model with S = 1 and α = 0 can be selected, so that the model parameters are just Δx and Δy, as shown in formula (9):
x_{i+1} = x_i + Δx,  y_{i+1} = y_i + Δy    (9)
The RANSAC algorithm: in our application, because fewer than 100 feature points need to be tracked, and to avoid random-number generation, we replace RANSAC's random sampling of optical-flow feature vectors with a traversal of all of them when solving the model. Each feature-point pair (x_i, y_i), (x_{i+1}, y_{i+1}) yields a candidate (Δx, Δy) by formula (9); we then traverse all the other feature-point pairs and record the number of points that satisfy this model. In this way we find the model satisfied by the maximum number of feature vectors.
The iterative RANSAC procedure is as follows: first, the RANSAC algorithm above is run once on the optical-flow feature vectors of a moving region, with formula (9) as the model, yielding two point sets: the set belonging to this motion pattern and the set that does not (the latter being the "outliers" of the first motion pattern). The RANSAC algorithm is then repeated on the second set, yielding the point set of the second motion pattern and its corresponding outliers, and likewise a third time. In this way the three motion patterns of the moving region and their corresponding point sets are obtained; averaging the motion vectors of each point set then gives the magnitude and direction of the average velocity of that motion pattern.
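The traversal-based RANSAC and its iterative application can be sketched as follows; the one-pixel inlier tolerance and the minimum point count for continuing are illustrative assumptions, not values from the patent.

```python
def fit_translation(vectors, tol=1.0):
    """Traversal 'RANSAC' over formula (9): every flow vector proposes
    a candidate (dx, dy); keep the candidate the most vectors agree
    with (within `tol` pixels).  Traversal replaces random sampling.
    `vectors` are ((x_i, y_i), (x_i1, y_i1)) point pairs."""
    best_model, best_inliers = None, []
    for (x0, y0), (x1, y1) in vectors:
        dx, dy = x1 - x0, y1 - y0
        inliers = [v for v in vectors
                   if abs(v[1][0] - v[0][0] - dx) <= tol
                   and abs(v[1][1] - v[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

def decompose_motion_patterns(vectors, max_patterns=3, min_points=2):
    """Iterative RANSAC: fit a model, remove its inliers, repeat on
    the outliers, extracting up to three motion patterns."""
    patterns, remaining = [], list(vectors)
    while len(patterns) < max_patterns and len(remaining) >= min_points:
        model, inliers = fit_translation(remaining)
        patterns.append((model, inliers))
        remaining = [v for v in remaining if v not in inliers]
    return patterns

# Two groups: five vectors moving right (+2, 0), three moving left (-2, 0).
vecs = ([((i, 0), (i + 2, 0)) for i in range(5)]
        + [((i, 10), (i - 2, 10)) for i in range(3)])
patterns = decompose_motion_patterns(vecs)
# patterns[0]: model (2, 0) with 5 inliers; patterns[1]: (-2, 0) with 3.
```

Averaging the inlier vectors of each pattern then gives the mean velocity fed to the retrograde decision of step 107.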
Each method embodiment of the present invention may be implemented in software, hardware, firmware, and so on. Whichever way the present invention is implemented, the instruction code may be stored in any type of computer-accessible memory (for example permanent or rewritable, volatile or non-volatile, solid-state or not, fixed or removable media, etc.). Likewise, the memory may be, for example, programmable array logic (PAL), random access memory (RAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), a magnetic disk, an optical disc, a digital versatile disc (DVD), and so on.
The third embodiment of the present invention relates to a video-based retrograde detection system. Fig. 3 is a structural diagram of this system.
Specifically, as shown in Fig. 3, the video-based retrograde detection system comprises:
a frame-difference image acquisition unit, for computing the difference between corresponding pixels in the original images of adjacent video frames and binarizing the difference to obtain a frame-difference image;
a moving-region extraction unit, for extracting moving regions from the frame-difference image obtained by the frame-difference image acquisition unit;
a corner selection unit, for selecting corner points in the frame-difference image obtained by the frame-difference image acquisition unit;
an optical-flow tracking unit, for using the corner points selected by the corner selection unit as initial optical-flow tracking points, performing corner optical-flow tracking in the original video images, and obtaining optical-flow feature vectors;
a motion-information acquisition unit, for obtaining the correspondence between moving regions and optical-flow feature vectors according to the positional relationship between the initial optical-flow tracking points and the moving regions;
a motion-pattern decomposition unit, for decomposing, for each moving region, motion patterns from the optical-flow feature vectors in that region;
a retrograde detection unit, for judging whether retrograde motion has occurred according to the magnitude and direction of the average velocity of each motion pattern decomposed by the motion-pattern decomposition unit.
In addition, the system further comprises a camera for capturing the video images. The camera should be mounted so that a person's height occupies 1/3 to 1/2 of the image height, and its focal length should be 3 to 9 millimeters.
The camera height should be moderate: if the camera is too low, a person occupies too large an area and false alarms become likely; if it is too high, missed detections become likely. The focal length should not be too long, or a person appears too large in the field of view and false alarms become likely. The camera angle should be as close to vertical as possible, and the depth of field within the view should not be excessive; this reduces the influence of perspective effects and eases the setting of algorithm thresholds.
The first and second embodiments are method embodiments corresponding to this embodiment, and this embodiment can be implemented in cooperation with them. The relevant technical details mentioned in the first and second embodiments remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the first and second embodiments.
It should be noted that each unit mentioned in the system embodiments of the present invention is a logical unit. Physically, a logical unit may be one physical unit, a part of a physical unit, or a combination of multiple physical units. The physical implementation of these logical units is not itself essential; the combination of the functions they realize is the key to solving the technical problem addressed by the invention. Moreover, to highlight the innovative part of the invention, the system embodiments above do not introduce units that are not closely related to solving that technical problem; this does not mean that the embodiments contain no other units.
It should be noted that, in the claims and the specification of this patent, relational terms such as "first" and "second", or letters such as A, B, and C, are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Furthermore, the terms "comprise" and "include", or any variants thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Unless further restricted, an element qualified by the phrase "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or device comprising it.
Although the present invention has been illustrated and described with reference to certain preferred embodiments, those of ordinary skill in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention.
Claims (10)
1. A video-based retrograde detection method, characterized by comprising the following steps:
computing the difference between corresponding pixels in the original images of consecutive frames of the video, and binarizing the difference to obtain a frame difference image;
extracting moving regions according to the frame difference image;
selecting corner points in the frame difference image;
using the selected corner points as initial tracking points for optical flow and performing corner-point optical flow tracking in the original images of the video to obtain optical flow feature vectors;
obtaining the correspondence between the moving regions and the optical flow feature vectors according to the positional relationship between the initial tracking points and the moving regions;
for each moving region, decomposing motion patterns from the optical flow feature vectors in that region; and
judging whether retrograde motion has occurred according to the magnitude and direction of the average velocity of the motion patterns.
2. The video-based retrograde detection method according to claim 1, characterized in that the step of decomposing motion patterns from the optical flow feature vectors in each moving region comprises the following sub-steps:
A. calculating the translation parameters of a geometric transformation model from one optical flow feature vector in the moving region and its corresponding initial tracking point;
B. applying the obtained geometric transformation model to the other optical flow feature vectors and counting the number of vectors that fit the model, where fitting means that the point computed from the initial tracking point of an optical flow vector by the geometric transformation model is sufficiently close to the point obtained by corner-point optical flow tracking;
C. taking as the first geometric transformation model the model that the largest number of optical flow feature vectors fit, the vectors fitting this first geometric transformation model constituting the first motion pattern;
repeating steps A, B, and C on the optical flow feature vectors that do not belong to the first motion pattern to obtain a second geometric transformation model, the vectors fitting it constituting the second motion pattern; and
repeating steps A, B, and C on the optical flow feature vectors that belong to neither the first nor the second motion pattern to obtain a third geometric transformation model, the vectors fitting it constituting the third motion pattern.
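Steps A to C amount to a sequential, RANSAC-like clustering of the flow vectors. A minimal sketch using a pure-translation model, with flow vectors represented as (start point, end point) pairs and an inlier tolerance `eps`, both of which are illustrative choices:

```python
def decompose_motion_patterns(flows, eps=2.0, max_patterns=3):
    """flows: list of (start, end) point pairs from corner optical-flow tracking.
    Repeatedly: hypothesize a translation from one flow vector (step A),
    count the flow vectors consistent with it (step B), keep the translation
    with the most inliers as one motion pattern (step C), then repeat on
    the remaining vectors. eps is an assumed 'close enough' tolerance."""
    patterns = []
    remaining = list(flows)
    for _ in range(max_patterns):
        if not remaining:
            break
        best_inliers = []
        for (sx, sy), (ex, ey) in remaining:       # step A: traverse all vectors
            tx, ty = ex - sx, ey - sy              # translation hypothesis
            inliers = [f for f in remaining        # step B: count fitting vectors
                       if abs(f[0][0] + tx - f[1][0]) <= eps
                       and abs(f[0][1] + ty - f[1][1]) <= eps]
            if len(inliers) > len(best_inliers):   # step C: keep the best model
                best_inliers = inliers
        patterns.append(best_inliers)
        remaining = [f for f in remaining if f not in best_inliers]
    return patterns
```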
3. The video-based retrograde detection method according to claim 2, characterized in that the geometric transformation model is solved by traversing all optical flow feature vectors.
4. The video-based retrograde detection method according to claim 3, characterized in that the geometric transformation model is an affine transformation model.
5. The video-based retrograde detection method according to claim 4, characterized in that the step of extracting moving regions according to the frame difference image comprises the following sub-steps:
projecting the pixels of the frame difference image in two directions to obtain a horizontal projection histogram and a vertical projection histogram; and
obtaining the moving regions by adaptively thresholding the horizontal projection histogram and the vertical projection histogram.
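The sub-steps of claim 5 can be sketched as follows; the adaptive rule used here (thresholding each projection at its mean value) is one plausible choice, not necessarily the patent's exact rule:

```python
import numpy as np

def extract_motion_spans(fd, axis, k=1.0):
    """Project a binary frame-difference image onto one axis and return
    the index spans where the projection exceeds k * its mean value
    (an assumed adaptive-threshold rule)."""
    proj = fd.sum(axis=axis).astype(float)
    mask = proj > k * proj.mean()
    spans, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i                      # span opens
        elif not on and start is not None:
            spans.append((start, i))       # span closes
            start = None
    if start is not None:
        spans.append((start, len(mask)))
    return spans

def extract_moving_regions(fd):
    """Combine row spans and column spans into (r0, c0, r1, c1) boxes."""
    cols = extract_motion_spans(fd, axis=0)   # vertical projection histogram
    rows = extract_motion_spans(fd, axis=1)   # horizontal projection histogram
    return [(r0, c0, r1, c1) for (r0, r1) in rows for (c0, c1) in cols]
```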
6. The video-based retrograde detection method according to claim 5, characterized in that the Lucas-Kanade optical flow algorithm is used in the step of using the selected corner points as initial tracking points for optical flow and performing corner-point optical flow tracking in the original images of the video to obtain optical flow feature vectors.
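Claim 6 names the Lucas-Kanade algorithm. A single-window, single-iteration sketch of its core least-squares step (the window size and the central-difference gradient scheme are illustrative choices):

```python
import numpy as np

def lk_flow_at(I, J, x, y, win=3):
    """One Lucas-Kanade step: least-squares displacement (dx, dy) of the
    (2*win+1)^2 window centred at (x, y) between grayscale frames I and J.
    Assumes the point lies away from the image border."""
    # spatial gradients of I by central differences, temporal gradient It
    Ix = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0
    Iy = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0
    It = J - I
    s = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    ix, iy, it = Ix[s].ravel(), Iy[s].ravel(), It[s].ravel()
    A = np.stack([ix, iy], axis=1)               # one gradient row per pixel
    d, *_ = np.linalg.lstsq(A, -it, rcond=None)  # solve A d = -It
    return d                                     # displacement (dx, dy)
```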
7. The video-based retrograde detection method according to claim 6, characterized in that Harris corner points are selected as the initial tracking points for optical flow.
8. The video-based retrograde detection method according to claim 7, characterized in that, in the step of using the selected corner points as initial tracking points for optical flow and performing corner-point optical flow tracking in the original images of the video to obtain optical flow feature vectors, a pyramid layering scheme is used: a 1/4-size image and a 1/16-size image of the original image are stored, and each corner-point iteration first searches the 1/16 image, then the 1/4 image, and finally finds the matching feature block in the original image.
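The coarse-to-fine search of claim 8 can be sketched as follows. Here "1/4" and "1/16" are read as fractions of the pixel count, i.e. scale factors 2 and 4, and the block size, search radius, and sum-of-absolute-differences cost are illustrative assumptions:

```python
import numpy as np

def downsample(img, f):
    """Average-pool by integer factor f (one way to build the 1/4 and 1/16 images)."""
    h, w = img.shape[0] // f * f, img.shape[1] // f * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def best_match(I, J, cx, cy, pdx, pdy, half, radius):
    """Exhaustive SAD block match: compare the patch of I centred at (cx, cy)
    against J at offsets within +/-radius of the prior guess (pdx, pdy)."""
    patch = I[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best_cost, best_d = None, (pdx, pdy)
    for dy in range(pdy - radius, pdy + radius + 1):
        for dx in range(pdx - radius, pdx + radius + 1):
            y0, x0 = cy + dy - half, cx + dx - half
            if (y0 < 0 or x0 < 0 or y0 + 2 * half + 1 > J.shape[0]
                    or x0 + 2 * half + 1 > J.shape[1]):
                continue  # candidate window falls outside the image
            cost = np.abs(J[y0:y0 + 2 * half + 1, x0:x0 + 2 * half + 1] - patch).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_d = cost, (dx, dy)
    return best_d

def pyramid_match(I, J, x, y, half=2, radius=2):
    """Coarse-to-fine: search first on the 1/16 image, refine on the 1/4
    image, then on the original, so large displacements stay affordable."""
    dx = dy = 0
    for f in (4, 2, 1):                     # 1/16, 1/4, then original scale
        Is, Js = downsample(I, f), downsample(J, f)
        dx, dy = best_match(Is, Js, x // f, y // f, dx, dy, half, radius)
        if f > 1:
            dx, dy = dx * 2, dy * 2         # propagate to the next finer level
    return dx, dy
```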
9. A video-based retrograde detection system, characterized by comprising:
a frame difference image acquiring unit, configured to compute the difference between corresponding pixels in the original images of consecutive frames of the video and binarize the difference to obtain a frame difference image;
a moving region extraction unit, configured to extract moving regions from the frame difference image obtained by the frame difference image acquiring unit;
a corner point selection unit, configured to select corner points in the frame difference image obtained by the frame difference image acquiring unit;
an optical flow tracking unit, configured to use the corner points selected by the corner point selection unit as initial tracking points for optical flow and perform corner-point optical flow tracking in the original images of the video to obtain optical flow feature vectors;
a motion information acquiring unit, configured to obtain the correspondence between the moving regions and the optical flow feature vectors according to the positional relationship between the initial tracking points and the moving regions;
a motion pattern decomposition unit, configured to, for each moving region, decompose motion patterns from the optical flow feature vectors in that region; and
a retrograde detection unit, configured to judge whether retrograde motion has occurred according to the magnitude and direction of the average velocity of the motion patterns decomposed by the motion pattern decomposition unit.
10. The video-based retrograde detection system according to claim 9, characterized by further comprising a video camera for capturing the video images, wherein the camera is installed so that a person's height occupies 1/3 to 1/2 of the image height, and the camera's focal length is between 3 and 9 millimeters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210419365.9A CN103793920B (en) | 2012-10-26 | 2012-10-26 | Retrograde detection method and its system based on video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103793920A true CN103793920A (en) | 2014-05-14 |
CN103793920B CN103793920B (en) | 2017-10-13 |
Family
ID=50669543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210419365.9A Active CN103793920B (en) | 2012-10-26 | 2012-10-26 | Retrograde detection method and its system based on video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103793920B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426811A (en) * | 2015-09-28 | 2016-03-23 | 高新兴科技集团股份有限公司 | Crowd abnormal behavior and crowd density recognition method |
CN108230305A (en) * | 2017-12-27 | 2018-06-29 | 浙江新再灵科技股份有限公司 | Method based on the detection of video analysis staircase abnormal operating condition |
CN109359169A (en) * | 2018-10-30 | 2019-02-19 | 西南交通大学 | A kind of retrograde behavior real-time identification method of the shared bicycle based on probability graph model |
CN109819208A (en) * | 2019-01-02 | 2019-05-28 | 江苏警官学院 | A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring |
CN110070003A (en) * | 2019-04-01 | 2019-07-30 | 浙江大华技术股份有限公司 | The method and relevant apparatus that unusual checking and light stream autocorrelation determine |
CN111267079A (en) * | 2018-12-05 | 2020-06-12 | 中国移动通信集团山东有限公司 | Intelligent inspection robot inspection method and device |
CN111582243A (en) * | 2020-06-05 | 2020-08-25 | 上海商汤智能科技有限公司 | Countercurrent detection method, device, electronic equipment and storage medium |
CN112989945A (en) * | 2021-02-08 | 2021-06-18 | 杭州海康威视数字技术股份有限公司 | Method and apparatus for detecting reverse travel of retail store cart |
CN113673443A (en) * | 2021-08-24 | 2021-11-19 | 长沙海信智能系统研究院有限公司 | Object reverse detection method and device, electronic equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101320427A (en) * | 2008-07-01 | 2008-12-10 | 北京中星微电子有限公司 | Video monitoring method and system with auxiliary objective monitoring function |
CN102184547A (en) * | 2011-03-28 | 2011-09-14 | 长安大学 | Video-based vehicle reverse driving event detecting method |
US20120237114A1 (en) * | 2011-03-16 | 2012-09-20 | Electronics And Telecommunications Research Institute | Method and apparatus for feature-based stereo matching |
CN102708573A (en) * | 2012-02-28 | 2012-10-03 | 西安电子科技大学 | Group movement mode detection method under complex scenes |
Also Published As
Publication number | Publication date |
---|---|
CN103793920B (en) | 2017-10-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||