CN107480646B - Binocular vision-based vehicle-mounted video abnormal motion detection method - Google Patents

Binocular vision-based vehicle-mounted video abnormal motion detection method

Info

Publication number
CN107480646B
Authority
CN
China
Prior art keywords
detected
optical flow
area
value
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710722400.7A
Other languages
Chinese (zh)
Other versions
CN107480646A (en)
Inventor
陈建平
付利华
李灿灿
崔鑫鑫
廖湖声
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201710722400.7A
Publication of CN107480646A
Application granted
Publication of CN107480646B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/231 Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses a binocular vision-based vehicle-mounted video abnormal motion detection method. Feature points are divided into several layers according to their real distances in the binocular image; the feature points of each layer are further divided into different sets according to the differences among their motion models; the feature point set corresponding to each motion model is clustered to obtain a series of candidate abnormal-motion regions; and, from the abnormal-motion parameters of each region, an abnormality value is computed for each candidate region through the established abnormal motion detection model, yielding the abnormal-motion regions in the image. The binocular vision-based vehicle-mounted video abnormal motion detection method provided by the invention can effectively detect abnormal-motion regions in vehicle-mounted video and the magnitude of the threat they pose to the vehicle.

Description

Binocular vision-based vehicle-mounted video abnormal motion detection method
Technical Field
The invention relates to the field of image processing and computer vision, in particular to a binocular vision-based vehicle-mounted video abnormal motion detection method.
Background
With the rapid development of the automobile industry and the growing number of automobiles, traffic accidents have become more frequent and more severe, causing heavy casualties and property losses. As an effective means of reducing traffic accidents and accident losses, automobile safety-assisted driving technology has become a research frontier in the field of traffic engineering.
In research on moving-object detection for automobile driver-assistance systems, the recognition and tracking of pedestrians and vehicles have achieved substantial results. The basic idea of these methods is to first classify and identify the target and then track it. However, variations in vehicle shape and size and in pedestrian posture make such detection very difficult, which also limits the usefulness of automobile safety-assisted driving systems. Moreover, in an actual driving scene, what the driver really cares about is the presence of an abnormally moving object in the traffic environment, not whether that object is a pedestrian, a vehicle, or something else. Therefore, if, from the perspective of abnormal-motion detection, an abnormality model is established for abnormally moving targets in the traffic environment, the collision risk of moving targets is quantified, and necessary collision warning information is provided to the driver in time, the performance of the automobile safety-assisted driving system can be greatly improved, driving safety increased, and traffic accidents reduced.
Therefore, there is a need for a binocular vision-based vehicle-mounted video abnormal motion detection technical solution to solve the above problems.
Disclosure of Invention
The technical problem addressed by the invention is to provide a binocular vision-based vehicle-mounted video abnormal motion detection method that overcomes the limitations of conventional image-feature-based detection methods, which can detect only targets of a specific type, have low practicality, and are suitable only for a single scene.
In order to solve the above problems, the present invention provides a binocular vision-based abnormal motion detection method for a vehicle-mounted video, comprising:
1) extracting pixel points from the left view of the current frame according to a certain step length to serve as feature points of the original left view, calculating optical flow information of the feature points by combining the previous frame image to obtain a feature point pair set, and detecting lane lines in the left view of the current frame;
2) calculating a parallax matrix through a left view and a right view of a current frame, obtaining the relation between the parallax value and the real distance based on a binocular vision imaging principle, calculating the real distance of each feature point in a camera coordinate system, and dividing the feature points into a plurality of layers from near to far;
3) according to the obtained multilayer characteristic point sets, multiple times of affine transformation modeling are respectively carried out on the characteristic point pairs corresponding to each layer, the characteristic point pairs are divided into a plurality of sets, and the characteristic points in each set accord with the same motion model;
4) according to the obtained multiple feature point sets which accord with different motion models, clustering the feature point set corresponding to each motion model by adopting a density-based clustering algorithm so as to obtain the position and the size of each region to be detected;
5) calculating the optical flow amplitude, the optical flow direction, the real distance and the lane of each to-be-detected area according to the obtained position and size of each to-be-detected area and by combining the parallax matrix, the optical flow information and the lane line information;
6) establishing an abnormal motion detection model, and calculating an abnormality value of each region to be detected according to the obtained region information, to complete the detection of abnormal-motion regions.
Further, the step 1) is specifically as follows:
1.1) calculating the optical flow of the feature points between two adjacent frames, screening the feature points by computing one forward and one reverse optical-flow pass, and keeping only the feature points whose error between the two passes is smaller than a set threshold;
1.2) detecting the lane lines in the left view of the current frame: detecting line segments based on the Hough transform and screening out the set of segments whose slopes lie within a set range; from that set, the segment whose start point is closest to the vertical centerline of the image and whose end point does not cross it is taken as the left lane line, and likewise the segment whose start point is closest to the vertical centerline and which does not cross it is taken as the right lane line. If no lane line is detected in the current frame image, the lane-line information of the previous frame image is used.
Further, the step 3) is specifically as follows:
3.1) using all the feature points of the layer to establish the 1st affine transformation model, where feature points satisfying the affine transformation model are called inner points and those not satisfying it are called outer points;
3.2) performing affine transformation modeling M-1 more times, where the feature point pairs used in each round of modeling are the outer points of the previous round. The feature points of each layer are thus divided into several sets, and the feature points within each set conform to the same motion model.
Further, the step 5) is specifically as follows:
5.1) counting the optical flow direction histograms and the optical flow amplitude histograms of all the characteristic points in the area according to the position and the size of the area obtained by the clustering algorithm, and selecting the optical flow direction and the optical flow amplitude with the maximum statistical value as the optical flow direction and the optical flow amplitude of the area;
5.2) counting the parallax value of the pixel point with the parallax value larger than 0 in the area, calculating the average parallax, and calculating the real distance of the area by utilizing the relation between the parallax value and the real distance;
5.3) dividing the image into a left lane area, a right lane area and a same lane area according to the lane lines, and then determining the lane to which the region to be detected belongs from the geometric relation between the lane lines and the region's geometric center.
Further, the step 6) is specifically as follows:
6.1) normalizing each optical flow amplitude of the area to be detected according to the maximum optical flow amplitude and the minimum optical flow amplitude of all the areas to be detected in the frame image, and taking the optical flow amplitude as an initial abnormal value of the area;
6.2) converting the real distance of the region into [0,1] through a Gaussian distribution model, reinforcing or inhibiting through an exponential generalized weighting operator, and taking the final distance weight as the influence weight of the real distance on an abnormal value;
6.3) calculating the light stream direction weight of the area to be detected through different light stream direction weight calculation formulas according to the lane to which the area to be detected belongs;
6.4) combining the initial abnormality value, the distance weight and the optical-flow direction weight of the region to be detected to compute the region's abnormality value.
Further, the step 5) is specifically as follows:
Step 5.1, counting the optical-flow magnitude histogram of all feature points in the region to be detected, and taking the magnitude with the highest count as the region's optical-flow magnitude f_i;
Step 5.2, dividing 360 degrees equally into N_Bin intervals, counting the optical-flow direction histogram of all feature points in the region to be detected, and taking the direction with the highest count as the region's optical-flow direction O_i;
Step 5.3, for the region to be detected R_i, summing and counting the disparity values of all pixels whose disparity is greater than 0, taking the average disparity as the region's disparity value, and computing the region's real distance from the disparity-distance relation:
d̄_i = (1/n_i) · Σ_{j=1}^{n_i} d_j
D_i = f · b / d̄_i
where d̄_i denotes the average disparity of region R_i, n_i the number of pixels in R_i whose disparity is greater than 0, d_j the disparity of such a pixel, f the camera focal length, and b the baseline.
Step 5.4, computing the lane L_i to which the region to be detected R_i belongs from its geometric center (x_i^c, y_i^c) and the left and right lane-line equations y = h_l(x) and y = h_r(x):
[lane-assignment formula rendered as an image in the original: L_i = 1 for a region in the left lane, L_i = 2 for the right lane, L_i = 0 for the same lane]
Further, the step 6) is specifically as follows:
Step 6.1, normalizing the optical-flow magnitude of the region to be detected R_i by the maximum magnitude f_mag_max and the minimum magnitude f_mag_min over all regions to be detected in the frame image, and taking it as the region's initial abnormality value:
S_i = (f_i - f_mag_min) / (f_mag_max - f_mag_min)
where f_i denotes the optical-flow magnitude of region R_i and S_i its initial abnormality value.
Step 6.2, assuming the distance weight conforms to a Gaussian distribution model, the distance weight is computed as:
w_i = exp(-D_i^2 / (2σ^2))
where D_i denotes the distance from the region to be detected R_i to the vehicle, and σ^2 takes the value 5;
Step 6.3, since the threat posed by a region to be detected differs with its distance from the vehicle (a larger distance weakens the abnormality of the target region, while a smaller distance strengthens it), an exponential generalized weighting operator is adopted to strengthen or suppress the distance weight, and the result is taken as the influence weight of the region's real distance on the abnormality value:
[weighting-operator formula rendered as an image in the original]
n = ln(2)/(ln(2) - ln(1 - α_d))
where α_d indicates the degree of influence of distance on the abnormality (the larger α_d, the stronger the influence, and conversely the weaker); α_d takes the value 0.9. e_d denotes the critical value for suppression or enhancement: when the distance weight w_i is greater than e_d, the influence of the distance weight on the abnormality is strengthened; otherwise it is suppressed. In the invention, e_d takes the value 0.5.
Step 6.4, the optical-flow direction weight of the region to be detected R_i is computed by a different formula according to the lane to which it belongs:
when L_i is 1: [formula rendered as an image in the original]
when L_i is 2: [formula rendered as an image in the original]
when L_i is 0: [formula rendered as an image in the original]
wherein O_i denotes the optical-flow direction of the region to be detected R_i, O_V the direction from the region's geometric center to the vehicle, O_ld the direction from the lower end point to the upper end point of the left lane line, O_lu the direction from the upper end point to the lower end point of the left lane line, O_rd the direction from the upper end point to the lower end point of the right lane line, O_ru the direction from the lower end point to the upper end point of the right lane line, and N_Bin the number of equal direction intervals;
Step 6.5, the abnormality of the region to be detected R_i is quantified by combining its initial abnormality value, distance weight and optical-flow direction weight:
[abnormality-quantification formula rendered as an image in the original]
where norm(x) is a normalization function mapping the parameter x to [0,1];
the abnormality value of each region to be detected is calculated according to the abnormality quantification formula, and a region whose abnormality value is greater than a threshold is an abnormal-motion region.
The invention provides a binocular vision-based vehicle-mounted video abnormal motion detection method. The method divides the feature points into several layers according to their real distances in the binocular image, further divides each layer's feature points into different sets according to the differences among their motion models, and clusters the feature point set corresponding to each motion model to obtain a series of candidate abnormal-motion regions; from the abnormal-motion parameters of each region, an abnormality value is then computed for each candidate region through the established abnormal motion detection model, yielding the abnormal-motion regions in the image. The proposed abnormal-motion detection mechanism can effectively detect abnormal-motion regions in vehicle-mounted video and the magnitude of the threat they pose to the vehicle. It overcomes the limitation of conventional image-feature-based target detection algorithms, which can detect only targets of a specific type, and offers higher practicality and accuracy.
Drawings
FIG. 1 is a flow chart of the binocular vision-based vehicle-mounted video abnormal motion detection method of the invention.
FIG. 2 is an operation example of the binocular vision-based abnormal motion detection method for the vehicle-mounted video.
FIG. 3 is an operation example of the binocular vision-based vehicle-mounted video to-be-detected region abnormality value calculation method.
FIG. 4 is an angle transformation model of the binocular vision-based vehicle-mounted video abnormal motion detection method of the invention.
Detailed Description
The invention provides a binocular vision-based vehicle-mounted video abnormal motion detection method. Feature points are divided into several layers according to their real distances in the binocular image; each layer's feature points are further divided into different sets according to the differences among their motion models; the feature point set corresponding to each motion model is clustered to obtain a series of candidate abnormal-motion regions; and, from the abnormal-motion parameters of each region, an abnormality value is computed for each candidate region through the established abnormal motion detection model, yielding the abnormal-motion regions in the image. The method can effectively detect abnormal-motion regions in vehicle-mounted video and the magnitude of the threat they pose to the vehicle, overcomes the limitation of conventional image-feature-based target detection algorithms that can detect only targets of a specific type, and offers higher practicality and accuracy.
The invention comprises the following steps:
1) extracting pixel points from the left view of the current frame according to a certain step length to serve as feature points of the left view, calculating optical flow information of the feature points by combining the previous frame image to obtain a feature point pair set, and detecting lane lines of the left view of the current frame;
the specific steps for obtaining the characteristic point pair set are as follows:
1.1) extracting pixel points from the left view of the current frame according to a certain step length to serve as feature points of the left view;
1.2) calculating optical flow information of all feature points based on an optical flow method, and obtaining matching points corresponding to the feature points in a left view of a previous frame;
1.3) computing, by the optical flow method, the matching points of those matching points back in the left view of the current frame; calculating the error between the points returned by this second optical-flow pass and the original feature points of the current frame; and screening the feature points of the current frame, rejecting a feature point pair if its error is greater than a set threshold.
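A minimal sketch of this forward-backward screening, assuming pyramidal Lucas-Kanade optical flow (the patent does not commit to a particular optical-flow method); the 5-pixel grid step and the 1-pixel round-trip error threshold quoted later in the embodiment are used as defaults, and all function names are illustrative:

```python
import cv2
import numpy as np

def grid_feature_pairs(curr_gray, prev_gray, step=5, max_err=1.0):
    """Steps 1.1)-1.3): grid-sampled feature points, screened by the
    forward-backward optical-flow consistency check."""
    h, w = curr_gray.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    pts = pts.reshape(-1, 1, 2)

    # first pass: current left view -> previous left view
    back, st1, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, pts, None)
    # second pass: previous left view -> current left view
    fwd, st2, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, back, None)

    # keep pairs whose round-trip error stays below the threshold
    err = np.linalg.norm((pts - fwd).reshape(-1, 2), axis=1)
    keep = (st1.ravel() == 1) & (st2.ravel() == 1) & (err < max_err)
    return pts[keep].reshape(-1, 2), back[keep].reshape(-1, 2)
```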
2) Calculating a parallax matrix from the left and right views of the current frame, obtaining the relation between disparity and real distance based on the binocular vision imaging principle, computing the real distance of each feature point in the camera coordinate system, and dividing the feature points into several layers from near to far, where feature points with disparity less than or equal to 0 are assigned to the 0th layer and feature points with disparity greater than 0 are assigned to layers according to their real distances;
3) according to the obtained multilayer characteristic point sets, carrying out multiple times of affine transformation modeling on the characteristic point pairs on each layer respectively, and further dividing the characteristic point pairs into multiple sets, wherein the characteristic points in each set conform to the same motion model;
Firstly, a first motion model is fitted to all feature point pairs of each layer using a homography matrix computed by the RANSAC algorithm; point pairs satisfying the motion model are called inner points, and those not satisfying it are called outer points. Then, in each subsequent round, the RANSAC algorithm fits an affine transformation model to the outer points left over from the previous round of modeling; this is repeated several times over all feature point pairs of each layer. Finally, the feature point pairs are divided into several sets, the feature points within each set conforming to the same motion model.
4) Clustering the feature point set corresponding to each motion model by a density-based clustering algorithm so as to obtain the positions and sizes of a number of regions to be detected; since the distance of the 0th-layer feature points is 0 or less, that layer can be ignored. Meanwhile, in order to obtain the regions to be detected more accurately, different clustering densities are adopted when clustering the feature point sets of different layers;
5) calculating the optical flow amplitude, the optical flow direction, the real distance and the lane of each to-be-detected area according to the obtained position and size of each to-be-detected area and by combining the parallax image, the optical flow information and the lane line information;
the specific steps of calculating the optical flow amplitude, the optical flow direction, the real distance and the belonging lane of each area to be detected are as follows:
5.1) counting optical flow amplitude histograms of all feature points in the area, and taking the optical flow amplitude with the highest statistical value in the optical flow amplitude histograms as the optical flow amplitude of the area;
5.2) in order to count the optical-flow direction histogram of all feature points in the region accurately, dividing 360 degrees equally into N_Bin intervals, and taking the direction with the highest count in the histogram as the region's optical-flow direction;
5.3) counting the sum and the number of the parallax values of all pixel points with the parallax value larger than 0 in the area, calculating the average parallax value as the parallax value of the area, and calculating the real distance of the area by utilizing the relation between the parallax value and the real distance;
5.4) dividing the image into a left lane area, a right lane area and a same lane area according to the lane lines, and then determining the lane to which the region to be detected belongs from the geometric relation between the lane lines and the region's geometric center.
6) Establishing an abnormal motion detection model, and calculating an abnormality value of each region to be detected according to the obtained region information, to complete the detection of abnormal-motion regions.
6.1) normalizing the optical flow amplitude of each area to be detected according to the maximum and minimum optical flow amplitudes of all the areas to be detected in the frame image, and taking the optical flow amplitude as an initial abnormal value of the area;
6.2) assuming that the distance weight conforms to a Gaussian distribution model, converting the real distance of each region to be detected to [0, 1];
6.3) because the threat degree of the distance between the area to be detected and the vehicle is different, if the distance is far, the abnormality of the area to be detected is reduced, and if the distance is near, the abnormality of the area to be detected is strengthened, therefore, an exponential generalized weighting operator is adopted to strengthen or inhibit the distance weight, and the result is used as the influence weight of the real distance of the area to be detected on the abnormality value;
6.4) adopting different optical flow direction weight calculation formulas to calculate the optical flow direction weight of the area to be detected according to the different lanes to which the area to be detected belongs;
6.5) combining the initial abnormality value, the distance weight and the optical-flow direction weight of the region to be detected to compute the region's abnormality value.
The invention has wide application in the field of computer vision, such as: assisted driving, smart robots, and the like. The present invention will now be described in detail with reference to the accompanying drawings.
1) Extracting pixel points from the left view of the current frame according to a certain step length to serve as characteristic points of the left view, wherein the step length coefficient is 5;
2) calculating one forward and one reverse pass of optical-flow information from the previous frame and the current frame respectively to obtain the feature point pair set, then screening that set by the error between the two optical-flow passes and deleting the feature point pairs whose error is greater than a set threshold. The invention sets the threshold to 1 pixel;
3) based on Hough transform, lane line detection is carried out in the left view of the current frame;
3.1) binarizing the left view of the current frame with a threshold of 120, then applying one erosion and one dilation;
3.2) extracting edge feature points of the image by adopting a Canny operator, and then detecting a line segment set in the image based on Hough transform;
3.3) scanning each line segment in the segment set, calculating its slope k, and judging whether k falls within the slope range of the left lane line or that of the right lane line, thereby dividing the set into a left segment set and a right segment set, where the slope range of the left lane line is (-1.5, -0.5) and that of the right lane line is (0.5, 1.5);
3.4) scanning each line segment in the left line segment set, and searching a line segment of which the starting point is closest to the vertical central line of the image and the end point does not exceed the vertical central line of the image as a left lane line in the frame image;
3.5) similarly, scanning each line segment in the right segment set, and taking the segment whose start point is closest to the vertical centerline of the image and which does not cross it as the right lane line in the frame image.
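Under the parameter values quoted above (binarization threshold 120, slope ranges (-1.5, -0.5) and (0.5, 1.5)), the lane-line search might look as follows; the Canny and Hough parameters are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def detect_lane_lines(gray):
    """Steps 3.1)-3.5): binarize, erode/dilate once, Canny edges,
    Hough segments, slope filtering, centerline-nearest selection."""
    _, binary = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.dilate(cv2.erode(binary, kernel, iterations=1), kernel, iterations=1)
    edges = cv2.Canny(binary, 50, 150)                       # assumed thresholds
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                           minLineLength=30, maxLineGap=10)  # assumed parameters
    mid = gray.shape[1] / 2.0
    left = right = None
    best_l = best_r = np.inf
    for x1, y1, x2, y2 in (segs.reshape(-1, 4) if segs is not None else []):
        if x1 == x2:
            continue
        k = (y2 - y1) / float(x2 - x1)
        d = abs(x1 - mid)                 # start point's distance to the centerline
        if -1.5 < k < -0.5 and max(x1, x2) <= mid and d < best_l:
            left, best_l = (x1, y1, x2, y2), d
        elif 0.5 < k < 1.5 and min(x1, x2) >= mid and d < best_r:
            right, best_r = (x1, y1, x2, y2), d
    return left, right                    # fall back to the previous frame if None
```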
4) Calculating a parallax matrix through a left view and a right view of a current frame, obtaining the relation between a parallax value and a real distance based on a binocular vision imaging principle, calculating the real distance of each feature point in a camera coordinate system, and dividing the feature points into N layers from near to far;
4.1) according to the binocular vision imaging principle, the relation between disparity and real distance is obtained as
D = f · b / d
where f is the camera focal length, b is the baseline, and d is the disparity value.
4.2) dividing the feature points into N layers according to the real distance of each feature point, the distance intervals from near to far being: (-∞, 0), [0, 5), [5, 10), [10, 20), [20, 40), [40, 100), [100, +∞). In the invention, N takes the value 7;
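For illustration, the layering can be written directly from the relation above; the focal length and baseline below are placeholder calibration values, not values given in the patent:

```python
import numpy as np

F, B = 700.0, 0.12                             # assumed focal length (px) and baseline (m)
BOUNDS = [0.0, 5.0, 10.0, 20.0, 40.0, 100.0]   # layer boundaries (metres)

def layer_of(disparity):
    """Assign a feature point to one of the N = 7 layers of step 4.2)."""
    if disparity <= 0:
        return 0                               # invalid disparity -> layer 0
    distance = F * B / disparity               # D = f*b/d from step 4.1)
    return int(np.searchsorted(BOUNDS, distance, side='right'))  # layers 1..6
```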
5) carrying out affine transformation modeling on the feature point pairs on each layer for M times, and dividing the feature point pairs on each layer into M sets;
5.1) firstly, carrying out first motion modeling on all characteristic point pairs on each layer by using a homography matrix calculated by a RANSAC algorithm, wherein the characteristic point pairs meeting the motion model are called inner points, and the characteristic point pairs not meeting the motion model are called outer points;
5.2) establishing an affine transformation model by using an RANSAC algorithm for the outer points after the previous affine transformation modeling to obtain inner points meeting the motion model and outer points not meeting the motion model;
5.3) repeating the step 5.2M-2 times;
5.4) finally, all the characteristic point pairs on each layer are divided into M sets, and the characteristic points in each set conform to the same motion model.
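A sketch of this peeling loop, using OpenCV's RANSAC fitters (cv2.findHomography for the first round, cv2.estimateAffine2D for the remaining rounds) as stand-ins for the modeling described; the reprojection threshold is an assumed parameter:

```python
import cv2
import numpy as np

def partition_by_motion(src, dst, m_models=4, thresh=3.0):
    """Steps 5.1)-5.4): src, dst are (N, 2) float32 matched points of one
    layer; returns index sets, one per fitted motion model."""
    sets = []
    remaining = np.arange(len(src))
    for k in range(m_models):
        if len(remaining) < 4:            # too few points to fit another model
            break
        if k == 0:                        # step 5.1): homography via RANSAC
            _, mask = cv2.findHomography(src[remaining], dst[remaining],
                                         cv2.RANSAC, thresh)
        else:                             # steps 5.2)-5.3): affine on last outliers
            _, mask = cv2.estimateAffine2D(src[remaining], dst[remaining],
                                           method=cv2.RANSAC,
                                           ransacReprojThreshold=thresh)
        if mask is None:
            break
        inl = mask.ravel().astype(bool)
        sets.append(remaining[inl])       # inner points = one motion-model set
        remaining = remaining[~inl]       # outer points feed the next round
    return sets
```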
6) Clustering the feature point set of each motion model with a density-based clustering algorithm, according to the obtained M feature point sets conforming to different motion models, so as to obtain the position and size of each region to be detected. Since the feature points of the 0th layer have no valid distance (disparity less than or equal to 0), they can be ignored. Meanwhile, in order to obtain each region to be detected more accurately, different clustering densities are adopted for feature points on different layers:
[the clustering radius R and the point-count threshold N as functions of the layer L; formulas rendered as images in the original]
where R and N respectively denote the density-clustering radius and the threshold on the number of feature points within that radius, and L denotes the layer to which the feature points belong;
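The density-based clustering could be realized, for instance, with DBSCAN; the per-layer radius and point-count values below are illustrative stand-ins for the patent's R(L) and N(L) formulas, which appear only as images:

```python
import numpy as np
from sklearn.cluster import DBSCAN

EPS  = {1: 40, 2: 30, 3: 22, 4: 16, 5: 12, 6: 8}   # assumed radius R per layer (px)
MINN = {1: 12, 2: 10, 3: 8,  4: 6,  5: 5,  6: 4}   # assumed point threshold N

def regions_of(points, layer):
    """Step 6): cluster one motion-model point set of a given layer and
    return bounding boxes (x, y, w, h) of the regions to be detected."""
    labels = DBSCAN(eps=EPS[layer], min_samples=MINN[layer]).fit_predict(points)
    boxes = []
    for lab in set(labels) - {-1}:                  # label -1 marks noise
        cluster = points[labels == lab]
        (x0, y0), (x1, y1) = cluster.min(axis=0), cluster.max(axis=0)
        boxes.append((x0, y0, x1 - x0, y1 - y0))
    return boxes
```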
7) calculating the optical flow amplitude, the optical flow direction, the real distance and the belonging lane of the to-be-detected area according to the obtained position and size of the to-be-detected area and by combining the parallax image, the optical flow information and the lane line information;
7.1) counting the optical-flow magnitude histogram of all feature points in the region to be detected, and taking the magnitude with the highest count as the region's optical-flow magnitude f_i;
7.2) as shown in FIG. 4, dividing 360 degrees equally into N_Bin intervals, counting the optical-flow direction histogram of all feature points in the region to be detected, and taking the direction with the highest count as the region's optical-flow direction O_i. In the invention, N_Bin takes the value 36.
7.3) for the region to be detected R_i, summing and counting the disparity values of all pixels whose disparity is greater than 0, taking the average disparity as the region's disparity value, and computing the region's real distance from the disparity-distance relation:
d̄_i = (1/n_i) · Σ_{j=1}^{n_i} d_j
D_i = f · b / d̄_i
where d̄_i denotes the average disparity of region R_i, n_i the number of pixels in R_i whose disparity is greater than 0, d_j the disparity of such a pixel, f the camera focal length, and b the baseline.
7.4) computing the lane L_i to which the region to be detected R_i belongs from its geometric center (x_i^c, y_i^c) and the left and right lane-line equations y = h_l(x) and y = h_r(x):
[lane-assignment formula rendered as an image in the original: L_i = 1 for a region in the left lane, L_i = 2 for the right lane, L_i = 0 for the same lane]
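Steps 7.1)-7.4) amount to per-region statistics. The sketch below assumes dense optical-flow magnitude/direction maps and a disparity map aligned with the left view; it returns the direction as a bin index in [0, N_Bin - 1], matching the angle-model conversion mentioned in step 8.4), and it inverts the lane lines into x-at-row-y callables (the patent writes them as y = h_l(x)). The magnitude binning is likewise an assumption:

```python
import numpy as np

N_BIN = 36

def region_parameters(mag, ang, disp, box, lane_x_left, lane_x_right,
                      f=700.0, b=0.12):
    """Per-region optical-flow magnitude/direction, distance and lane."""
    x, y, w, h = box
    m = mag[y:y + h, x:x + w].ravel()
    a = ang[y:y + h, x:x + w].ravel()          # directions in degrees, [0, 360)
    d = disp[y:y + h, x:x + w].ravel()

    hist, edges = np.histogram(m, bins=20)     # mode of the magnitude histogram
    f_i = edges[np.argmax(hist)]
    o_i = int(np.argmax(np.histogram(a, bins=N_BIN, range=(0, 360))[0]))

    pos = d[d > 0]                             # step 7.3): mean disparity -> distance
    D_i = f * b / pos.mean() if pos.size else float('inf')

    cx, cy = x + w / 2.0, y + h / 2.0          # step 7.4): lane from the centre
    if cx < lane_x_left(cy):    L_i = 1        # left-lane region
    elif cx > lane_x_right(cy): L_i = 2        # right-lane region
    else:                       L_i = 0        # same-lane region
    return f_i, o_i, D_i, L_i
```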
8) Establishing an abnormal motion detection model, and calculating an abnormality value of each region to be detected according to the obtained region information, to complete the detection of abnormal-motion regions.
8.1) normalizing the optical-flow magnitude of the region to be detected R_i by the maximum magnitude f_mag_max and the minimum magnitude f_mag_min over all regions to be detected in the frame image, and taking it as the region's initial abnormality value:
S_i = (f_i - f_mag_min) / (f_mag_max - f_mag_min)
where f_i denotes the optical-flow magnitude of region R_i and S_i its initial abnormality value.
8.2) assuming that the distance weight conforms to a Gaussian distribution model, the distance weight is computed as:
w_i = exp(-D_i^2 / (2σ^2))
where D_i denotes the distance from the region to be detected R_i to the vehicle, and σ^2 takes the value 5;
8.3) since the threat posed by a region to be detected differs with its distance from the vehicle (a larger distance weakens the abnormality of the target region, while a smaller distance strengthens it), an exponential generalized weighting operator is adopted to strengthen or suppress the distance weight, and the result is taken as the influence weight of the region's real distance on the abnormality value:
[weighting-operator formula rendered as an image in the original]
n = ln(2)/(ln(2) - ln(1 - α_d))
where α_d indicates the degree of influence of distance on the abnormality (the larger α_d, the stronger the influence, and conversely the weaker); in the invention α_d takes the value 0.9. e_d denotes the critical value for suppression or enhancement: when the distance weight w_i is greater than e_d, the influence of the distance weight on the abnormality is strengthened; otherwise it is suppressed. In the invention, e_d takes the value 0.5.
8.4) the optical-flow direction weight of the region to be detected R_i is computed by a different formula according to the lane to which it belongs:
when L_i is 1: [formula rendered as an image in the original]
when L_i is 2: [formula rendered as an image in the original]
when L_i is 0: [formula rendered as an image in the original]
wherein O_i denotes the optical-flow direction of the region to be detected R_i, O_V the direction from the region's geometric center to the vehicle, O_ld the direction from the lower end point to the upper end point of the left lane line, O_lu the direction from the upper end point to the lower end point of the left lane line, O_rd the direction from the upper end point to the lower end point of the right lane line, and O_ru the direction from the lower end point to the upper end point of the right lane line; N_Bin denotes the number of equal direction intervals. The directions in the above formulas are all converted, through the angle model of FIG. 4, to values in [0, N_Bin - 1].
8.5) the abnormality of the region to be detected R_i is quantified by combining its initial abnormality value, distance weight and optical-flow direction weight:
[abnormality-quantification formula rendered as an image in the original]
where norm(x) is a normalization function mapping the parameter x to [0,1].
The abnormality value of each region to be detected is calculated according to the abnormality quantification formula; a region whose abnormality value is greater than the threshold is an abnormal-motion region, the threshold being 1.0.
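Putting steps 8.1)-8.5) together: with α_d = 0.9 the exponent evaluates to n = ln 2 / (ln 2 - ln 0.1) ≈ 0.231. The sketch below assumes the Gaussian form exp(-D^2/(2σ^2)) for the distance weight, reads the generalized weighting operator as w^n above the pivot e_d and w^(1/n) below it, and combines the three factors by product; all three are stand-ins for formulas that appear only as images in the original:

```python
import math

SIGMA2, ALPHA_D, E_D = 5.0, 0.9, 0.5
N = math.log(2) / (math.log(2) - math.log(1 - ALPHA_D))   # ~0.231 for alpha_d = 0.9

def initial_score(f_i, f_min, f_max):
    return (f_i - f_min) / (f_max - f_min)        # step 8.1): min-max normalization

def distance_weight(d_i):
    w = math.exp(-d_i ** 2 / (2 * SIGMA2))        # step 8.2): assumed Gaussian form
    return w ** N if w > E_D else w ** (1 / N)    # step 8.3): enhance above e_d,
                                                  # suppress below it

def anomaly_value(f_i, f_min, f_max, d_i, w_dir):
    """Combine the three factors (product form assumed). w_dir is the
    lane-dependent direction weight of step 8.4), given only as a formula
    image in the original; a region is abnormal when the value exceeds
    the 1.0 threshold of step 8.5)."""
    return initial_score(f_i, f_min, f_max) * distance_weight(d_i) * w_dir
```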
The invention provides a binocular vision-based vehicle-mounted video abnormal motion detection method, which can quickly and accurately detect an abnormal motion area from a vehicle-mounted video stream, provide early warning information for a driver in time and improve driving safety.

Claims (6)

1. A binocular vision-based vehicle-mounted video abnormal motion detection method is characterized by comprising the following steps:
1) extracting pixel points from the left view of the current frame according to a certain step length to serve as feature points of the original left view, calculating optical flow information of the feature points by combining the previous frame image to obtain a feature point pair set, and detecting lane lines in the left view of the current frame;
2) calculating a parallax matrix through a left view and a right view of a current frame, obtaining the relation between the parallax value and the real distance based on a binocular vision imaging principle, calculating the real distance of each feature point in a camera coordinate system, and dividing the feature points into a plurality of layers from near to far;
3) according to the obtained multilayer characteristic point sets, multiple times of affine transformation modeling are respectively carried out on the characteristic point pairs corresponding to each layer, the characteristic point pairs are divided into a plurality of sets, and the characteristic points in each set accord with the same motion model;
4) according to the obtained multiple feature point sets which accord with different motion models, clustering the feature point set corresponding to each motion model by adopting a density-based clustering algorithm so as to obtain the position and the size of each region to be detected;
5) calculating the optical flow amplitude, the optical flow direction, the real distance and the lane of each to-be-detected area according to the obtained position and size of each to-be-detected area and by combining the parallax matrix, the optical flow information and the lane line information;
6) establishing an abnormal motion detection model, calculating an abnormality value of each region to be detected according to the obtained region information, and completing the detection of abnormal-motion regions; wherein,
the step 6) is specifically as follows:
step 6.1, normalizing the optical-flow magnitude of the region to be detected R_i by the maximum magnitude f_mag_max and the minimum magnitude f_mag_min over all regions to be detected in the frame image, and taking it as the region's initial abnormality value:
S_i = (f_i - f_mag_min) / (f_mag_max - f_mag_min)
where f_i denotes the optical-flow magnitude of region R_i and S_i its initial abnormality value,
step 6.2, assuming the distance weight conforms to a Gaussian distribution model, the distance weight is computed as:
w_i = exp(-D_i^2 / (2σ^2))
where D_i denotes the distance from the region to be detected R_i to the vehicle, and σ^2 takes the value 5;
step 6.3, since the threat posed by a region to be detected differs with its distance from the vehicle (a larger distance weakens the abnormality of the target region, while a smaller distance strengthens it), an exponential generalized weighting operator is adopted to strengthen or suppress the distance weight, and the result is taken as the influence weight of the region's real distance on the abnormality value:
[weighting-operator formula rendered as an image in the original]
n = ln(2)/(ln(2) - ln(1 - α_d))
where α_d indicates the degree of influence of distance on the abnormality (the larger α_d, the stronger the influence, and conversely the weaker); α_d takes the value 0.9. e_d denotes the critical value for suppression or enhancement: when the distance weight w_i is greater than e_d, the influence of the distance weight on the abnormality is strengthened; otherwise it is suppressed; e_d takes the value 0.5,
step 6.4, the optical-flow direction weight of the region to be detected R_i is computed by a different formula according to the lane to which it belongs:
when L_i is 1: [formula rendered as an image in the original]
when L_i is 2: [formula rendered as an image in the original]
when L_i is 0: [formula rendered as an image in the original]
wherein O_i denotes the optical-flow direction of the region to be detected R_i, O_V the direction from the region's geometric center to the vehicle, O_ld the direction from the lower end point to the upper end point of the left lane line, O_lu the direction from the upper end point to the lower end point of the left lane line, O_rd the direction from the upper end point to the lower end point of the right lane line, O_ru the direction from the lower end point to the upper end point of the right lane line, and N_Bin the number of equal direction intervals;
step 6.5, the abnormality of the region to be detected R_i is quantified by combining its initial abnormality value, distance weight and optical-flow direction weight:
[abnormality-quantification formula rendered as an image in the original]
wherein norm(x) is a normalization function mapping the parameter x to [0,1];
and calculating the abnormality value of each region to be detected according to the abnormality quantification formula, wherein a region whose abnormality value is greater than a threshold is an abnormal-motion region.
2. The binocular vision-based vehicle-mounted video abnormal motion detection method according to claim 1, wherein the step 1) specifically comprises the following steps:
1.1) calculating the optical flow of the feature points between two adjacent frames, screening the feature points by computing one forward and one reverse optical-flow pass, and keeping only the feature points whose error between the two passes is smaller than a set threshold;
1.2) detecting the lane lines in the left view of the current frame: detecting line segments based on the Hough transform and screening out the set of segments whose slopes lie within a set range; from that set, the segment whose start point is closest to the vertical centerline of the image and whose end point does not cross it is taken as the left lane line, and likewise the segment whose start point is closest to the vertical centerline and which does not cross it is taken as the right lane line; if no lane line is detected in the current frame image, the lane-line information of the previous frame image is used.
3. The binocular vision-based vehicle-mounted video abnormal motion detection method according to claim 1, wherein the step 3) is specifically as follows:
3.1) using all the feature points of the layer to establish the 1st affine transformation model, where feature points satisfying the affine transformation model are called inner points and those not satisfying it are called outer points;
3.2) performing affine transformation modeling M-1 more times, where the feature point pairs used in each round of modeling are the outer points of the previous round; the feature points of each layer are thus divided into several sets, and the feature points within each set conform to the same motion model.
4. The binocular vision-based vehicle-mounted video abnormal motion detection method according to claim 1, wherein the step 5) specifically comprises the following steps:
5.1) counting the optical flow direction histograms and the optical flow amplitude histograms of all the characteristic points in the area according to the position and the size of the area obtained by the clustering algorithm, and selecting the optical flow direction and the optical flow amplitude with the maximum statistical value as the optical flow direction and the optical flow amplitude of the area;
5.2) counting the parallax value of the pixel point with the parallax value larger than 0 in the area, calculating the average parallax, and calculating the real distance of the area by utilizing the relation between the parallax value and the real distance;
5.3) dividing the image into a left lane area, a right lane area and a same lane area according to the lane lines, and then determining the lane to which the region to be detected belongs from the geometric relation between the lane lines and the region's geometric center.
5. The binocular vision-based vehicle-mounted video abnormal motion detection method according to claim 1, wherein the step 6) specifically comprises the following steps:
6.1) normalizing each optical flow amplitude of the area to be detected according to the maximum optical flow amplitude and the minimum optical flow amplitude of all the areas to be detected in the frame image, and taking the optical flow amplitude as an initial abnormal value of the area;
6.2) converting the real distance of the region into [0,1] through a Gaussian distribution model, reinforcing or inhibiting through an exponential generalized weighting operator, and taking the final distance weight as the influence weight of the real distance on an abnormal value;
6.3) calculating the light stream direction weight of the area to be detected through different light stream direction weight calculation formulas according to the lane to which the area to be detected belongs;
6.4) combining the initial abnormality value, the distance weight and the optical-flow direction weight of the region to be detected to compute the region's abnormality value.
6. The binocular vision-based vehicle-mounted video abnormal motion detection method according to claim 1, wherein the step 5) specifically comprises the following steps:
step 5.1, counting the optical-flow magnitude histogram of all feature points in the region to be detected, and taking the magnitude with the highest count as the region's optical-flow magnitude f_i;
step 5.2, dividing 360 degrees equally into N_Bin intervals, counting the optical-flow direction histogram of all feature points in the region to be detected, and taking the direction with the highest count as the region's optical-flow direction O_i;
step 5.3, for the region to be detected R_i, summing and counting the disparity values of all pixels whose disparity is greater than 0, taking the average disparity as the region's disparity value, and computing the region's real distance from the disparity-distance relation:
d̄_i = (1/n_i) · Σ_{j=1}^{n_i} d_j
D_i = f · b / d̄_i
where d̄_i denotes the average disparity of region R_i, n_i the number of pixels in R_i whose disparity is greater than 0, d_j the disparity of such a pixel, f the camera focal length, and b the baseline,
step 5.4, computing the lane L_i to which the region to be detected R_i belongs from its geometric center (x_i^c, y_i^c) and the left and right lane-line equations y = h_l(x) and y = h_r(x):
[lane-assignment formula rendered as an image in the original: L_i = 1 for a region in the left lane, L_i = 2 for the right lane, L_i = 0 for the same lane]
CN201710722400.7A 2017-08-22 2017-08-22 Binocular vision-based vehicle-mounted video abnormal motion detection method Active CN107480646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710722400.7A CN107480646B (en) 2017-08-22 2017-08-22 Binocular vision-based vehicle-mounted video abnormal motion detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710722400.7A CN107480646B (en) 2017-08-22 2017-08-22 Binocular vision-based vehicle-mounted video abnormal motion detection method

Publications (2)

Publication Number Publication Date
CN107480646A CN107480646A (en) 2017-12-15
CN107480646B true CN107480646B (en) 2020-09-25

Family

ID=60601473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710722400.7A Active CN107480646B (en) 2017-08-22 2017-08-22 Binocular vision-based vehicle-mounted video abnormal motion detection method

Country Status (1)

Country Link
CN (1) CN107480646B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230305A (en) * 2017-12-27 2018-06-29 浙江新再灵科技股份有限公司 Method based on the detection of video analysis staircase abnormal operating condition
CN108491763B (en) * 2018-03-01 2021-02-02 北京市商汤科技开发有限公司 Unsupervised training method and device for three-dimensional scene recognition network and storage medium
CN111047908B (en) * 2018-10-12 2021-11-02 富士通株式会社 Detection device and method for cross-line vehicle and video monitoring equipment
CN109492609B (en) * 2018-11-27 2020-05-15 上海芯仑光电科技有限公司 Method for detecting lane line, vehicle and computing equipment
CN109816611B (en) 2019-01-31 2021-02-12 北京市商汤科技开发有限公司 Video repair method and device, electronic equipment and storage medium
CN112651269A (en) * 2019-10-12 2021-04-13 常州通宝光电股份有限公司 Method for rapidly detecting vehicles in front in same direction at night
CN112258462A (en) * 2020-10-13 2021-01-22 广州杰赛科技股份有限公司 Vehicle detection method and device and computer readable storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678787A (en) * 2016-02-03 2016-06-15 西南交通大学 Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN106240458A (en) * 2016-07-22 2016-12-21 浙江零跑科技有限公司 A kind of vehicular frontal impact method for early warning based on vehicle-mounted binocular camera
CN106256606A (en) * 2016-08-09 2016-12-28 浙江零跑科技有限公司 A kind of lane departure warning method based on vehicle-mounted binocular camera
CN106991669A (en) * 2017-03-14 2017-07-28 北京工业大学 A kind of conspicuousness detection method based on depth-selectiveness difference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on salient object detection in video based on the visual attention mechanism; 吴卫东 (Wu Weidong); China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15 (No. 3); pp. 1-56 *

Also Published As

Publication number Publication date
CN107480646A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107480646B (en) Binocular vision-based vehicle-mounted video abnormal motion detection method
Bilal et al. Real-time lane detection and tracking for advanced driver assistance systems
Wu et al. Lane-mark extraction for automobiles under complex conditions
JP5297078B2 (en) Method for detecting moving object in blind spot of vehicle, and blind spot detection device
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
CN105460009B (en) Automobile control method and device
Yan et al. A method of lane edge detection based on Canny algorithm
KR100975749B1 (en) Method for recognizing lane and lane departure with Single Lane Extraction
Huang et al. Lane detection based on inverse perspective transformation and Kalman filter
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
CN113370977B (en) Intelligent vehicle forward collision early warning method and system based on vision
KR20150112656A (en) Method to calibrate camera and apparatus therefor
Šilar et al. The obstacle detection on the railway crossing based on optical flow and clustering
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
Wang et al. An improved hough transform method for detecting forward vehicle and lane in road
CN114693716A (en) Driving environment comprehensive identification information extraction method oriented to complex traffic conditions
Gupta et al. Robust lane detection using multiple features
EP4009228A1 (en) Method for determining a semantic free space
Hu et al. Effective moving object detection from videos captured by a moving camera
Ying et al. An illumination-robust approach for feature-based road detection
JP2018124963A (en) Image processing device, image recognition device, image processing program, and image recognition program
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
Schomerus et al. Camera-based lane border detection in arbitrarily structured environments
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications
Chen et al. A new adaptive region of interest extraction method for two-lane detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant