CN112115889B - Intelligent vehicle moving target detection method based on vision - Google Patents


Info

Publication number
CN112115889B
CN112115889B (application CN202011007965.5A)
Authority
CN
China
Prior art keywords
point
obstacle
image
pixel
width
Prior art date
Legal status
Active
Application number
CN202011007965.5A
Other languages
Chinese (zh)
Other versions
CN112115889A (en)
Inventor
蒋涛
贺喜
袁建英
吴思东
钟卓男
崔亚男
黄小燕
段翠萍
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202011007965.5A
Publication of CN112115889A
Application granted
Publication of CN112115889B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20228 - Disparity calculation for image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

The invention discloses a vision-based intelligent vehicle moving target detection method, which comprises the following steps: step one, generating a corresponding original parallax image for a road image acquired by a camera based on a stereo matching algorithm, so as to construct corresponding U-V parallax images; step two, obtaining a preprocessed image of the vehicle travelable area based on the U-V parallax images; step three, generating regions of interest for potential motion on the road based on the preprocessed image, to serve as potential obstacle targets; and step four, judging the motion attribute of each obstacle by combining optical flow with the camera's ego-motion. By fusing optical flow and stereoscopic vision to compute the likelihood of target motion, and combining it with the travelable-area detection result, the method effectively reduces the false detection rate for moving targets ahead of the vehicle.

Description

Intelligent vehicle moving target detection method based on vision
Technical Field
The invention relates to methods for detecting road environment information. More particularly, the invention relates to a vision-based method for detecting moving targets in the road environment under automatic driving.
Background
Vision-based moving target detection is an important component of intelligent vehicle environment perception; it underlies intelligent vehicle environment understanding, navigation, planning, behavior decision and control, and is of great significance for safe driving and pedestrian protection.
At present, moving target detection with a static camera mainly relies on background subtraction, frame differencing and optical flow, and is widely applied, for example to monitoring people in public places. In an automatic driving environment, however, the camera must be fixed on a moving vehicle, so the existing methods are not directly applicable: the motion of the target and the motion of the background are mixed together by the camera's own motion, which greatly complicates moving target detection.
In the prior art, optical flow has been combined with vision techniques to detect moving targets and supply environment-perception data for intelligent vehicles in the field of automatic driving. However, when optical flow is applied to judge obstacles on the road, image complexity limits the detection precision to only about 50%, which cannot meet the environment-perception requirements of intelligent driving.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided a vision-based intelligent vehicle moving target detection method, comprising:
step one, generating a corresponding original parallax image for a road image acquired by a camera based on a stereo matching algorithm, so as to construct corresponding U-V parallax images;
step two, obtaining a preprocessed image of the vehicle travelable area based on the U-V parallax images;
step three, generating regions of interest for potential motion on the road based on the preprocessed image, to serve as potential obstacle targets;
and step four, judging the motion attribute of each obstacle by combining optical flow with the camera's ego-motion.
Preferably, in step one, the stereo matching algorithm is configured to adopt the efficient large-scale stereo matching (ELAS) algorithm to obtain the corresponding dense parallax image.
Preferably, in step two, corresponding U-V parallax images are constructed from the original parallax image to generate an initial road surface contour, and the specific method includes:
S21, in the U parallax image obtained by vertically projecting the original parallax image, binarizing the U parallax image with a threshold τ: if the gray value of a pixel in the U parallax image is higher than τ, the corresponding pixels of the original parallax image are marked as potential obstacles and removed, so as to generate a preliminary binarized obstacle image;
S22, removing the narrow gaps and the isolated small regions present in the preliminary binarized obstacle image by morphological closing and by inverting the enclosed pixels, respectively, so as to preprocess the binarized obstacle image;
S23, in the V parallax image obtained by horizontally projecting the original parallax image, detecting the initial contour of the road surface with a nonparametric method, based on the preprocessed binarized obstacle image;
and S24, eliminating small, unconnected areas in the detected initial road surface contour with an area threshold, so as to obtain the preprocessed image of the vehicle travelable area.
Preferably, in step three, the method for obtaining the obstacle regions of interest includes:
S31, scanning each row of the road contour in the preprocessed image from left to right and from top to bottom, judging whether a pixel is a boundary point from the gray-level change of its adjacent pixels, and obtaining the gaps of the current row by judging the validity of those boundary points;
S32, screening the width of the current row's gaps based on the parallax similarity of pixels of the same object within each gap, while expanding the initial mask of the obstacle with a region growing algorithm, to serve as a region of interest for potential motion on the road.
Preferably, in S31, the method for determining the regions of interest by judging the validity of pixels includes:
S311, traversing from the upper left corner of the road surface contour; when a pixel is white and the next pixel is black, marking it as a start point; continuing rightward in sequence, when a pixel is black and the next pixel is white, marking it as an end point;
S312, if a start point and an end point both exist, obtaining the width between them and judging whether it lies in the valid range; if so, keeping the length and the center point, otherwise discarding;
S313, if only a start point or only an end point exists, taking the outermost convex-hull contour point of the current row of the road surface contour as the missing end point or start point, and executing S312;
S314, if several contour regions of interest exist in one row, clearing all marks after one region of interest has been processed and judging the next region of interest in sequence;
S315, updating to the next row and returning to S311.
Preferably, in S32, based on the parallax similarity of pixels of the same object within each row's gap, the relationship between obstacle width and parallax is derived, and the width that an obstacle of preset physical width occupies on the image is obtained, so as to screen the width of the current row's gap. The screening formula is:
u_max - u_min = (f × X_max/Z + c_u) - (f × X_min/Z + c_u) = f × W/Z = W × d_average/B
wherein W is the width of a vehicle or pedestrian; Z is the depth from the camera to the obstacle; f is the focal length of the camera; c_u is the camera principal-point coordinate; B is the baseline of the camera; and d_average is the average parallax at the gap position of the current row.
Preferably, in S32, the initial seed point of the region growing algorithm is selected as the central pixel of each row in the initial mask of the obstacle, candidate points belonging to the ground are screened out using the road surface points obtained from the V parallax image, and the growth is constrained by the corresponding width of each row of the obstacle, yielding the obstacles with potential motion.
Preferably, the width constraint method is configured to include:
S321, performing breadth-first search from the first source point, i.e. the seed point, marking it as visited, judging parallax similarity over the four-neighborhood of the source point, and marking each neighbor that is consistent with the seed point;
S322, in the breadth-first search, limiting the horizontal search depth by the limiting width;
and S323, judging the states of the remaining source points in sequence and whether they have been visited; if not, returning to S321.
Preferably, in step four, when the motion attribute of an obstacle is judged, a first-order forward error-propagation model transfers uncertainty from the sensor to the final result, and the Manhattan distance of the independent optical flow measures the motion likelihood of each pixel, from which the motion attribute is judged.
The invention provides at least the following beneficial effects: the likelihood of target motion is computed by fusing optical flow and stereoscopic vision, the false detection rate for moving targets ahead of the vehicle is reduced by combining the travelable-area detection result, and the detection precision can reach 70-80%.
Compared with the prior art, the detection method provided by the invention places no constraint on the motion of the camera, so it can be used directly for moving target detection on a mobile platform, detects dynamic targets effectively, and is more practical. An error model reduces the influence of the estimation error of each stage of the algorithm, and fusing the travelable-area detection result into the algorithm reduces the influence of cluttered scenery and blurring in real scenes, lowering the false detection rate for moving targets.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Drawings
FIG. 1 is a diagram illustrating the effect of detecting a travelable area according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a dense parallax image obtained by using ELAS algorithm according to the present invention;
FIG. 3 is a diagram illustrating the effect of the obstacle detection according to the present invention;
FIG. 4 is a diagram of the effect of the initial mask of the obstacle after performing the expanding growth;
FIG. 5 is a diagram of a single pinhole camera model;
FIG. 6 is a diagram of one of the models of a binocular camera;
fig. 7 is another model diagram of a binocular camera.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it with reference to the description.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
Fig. 1 shows an implementation form of a vision-based intelligent vehicle moving target detection method according to the present invention, which includes:
step one, generating a corresponding original parallax image for a road image acquired by a camera based on a stereo matching algorithm, so as to construct corresponding U-V parallax images;
step two, obtaining a preprocessed image of the vehicle travelable area based on the U-V parallax images; step three, generating regions of interest for potential motion on the road based on the preprocessed image, to serve as potential obstacle targets; these regions of interest are regarded as potential moving targets, and the subsequent motion attributes are judged within them;
step four, judging the motion attribute of each obstacle by combining optical flow with the camera's ego-motion. In this scheme, the parallax image produced by the stereo matching algorithm is used to construct the U-V parallax images; a nonparametric travelable-area detection method extracts the road information in the U-V parallax images and detects the vehicle travelable area; regions of interest are generated from the prior information about obstacles (the preprocessed image); and the motion attribute of each target is judged by combining optical flow with the camera's ego-motion. That is, the invention computes the likelihood of target motion by fusing optical flow and stereoscopic vision and combines it with the travelable-area detection result to reduce the false detection rate for moving targets ahead of the vehicle. In practical use, the images acquired by the camera are preprocessed by this method, so that when optical flow is later applied to detect moving targets, the detection precision can reach 70-80%.
In another example, the stereo matching algorithm in step one is configured to use the Efficient Large-Scale Stereo Matching (ELAS) algorithm to obtain the corresponding dense parallax image; the processing result is shown in fig. 2. In this scheme, ELAS is used to compute the dense parallax image because it performs very well in both precision and speed, but applying the method of the invention to a sparse parallax image is also feasible; that is, ELAS is not required, and another stereo matching algorithm may be used.
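A minimal sketch of this step follows. Since ELAS has no standard Python binding, OpenCV's semi-global matcher is used here as a stand-in; all parameter values and file names are illustrative assumptions, not values from the patent.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # disparity search range, must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # penalty for small disparity changes
    P2=32 * 5 * 5,        # penalty for large disparity changes
    uniquenessRatio=10,
)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype("float32") / 16.0
```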
In another example, in step two, corresponding U-V parallax images are constructed from the original parallax image to generate an initial road surface contour, and the specific method includes:
S21, in the U parallax image obtained by vertically projecting the original parallax image, binarizing the U parallax image with a threshold τ (which can be preset to about 12-15 as required): if the gray value of a pixel in the U parallax image is higher than τ, the corresponding pixels of the original parallax image are marked as potential obstacles, generating a preliminary binarized obstacle image. In practice, obstacles perpendicular to the road surface have essentially the same parallax in the parallax map, so in the U parallax map obtained by vertical projection an obstacle forms a peak in some column; a small threshold τ can therefore binarize the U parallax image into an obstacle image ObstacleMask, marking as a potential obstacle every original-parallax-image pixel whose U-parallax gray value exceeds τ, as in the sketch below;
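A minimal sketch of S21, assuming an integer disparity range max_d and the τ range mentioned above; the function name and parameters are illustrative.

```python
import numpy as np

def u_disparity_obstacle_mask(disp, max_d=128, tau=13):
    """Accumulate the U-disparity histogram (one row per disparity value,
    one column per image column) and mark pixels whose (column, disparity)
    bin exceeds tau as potential obstacles."""
    h, w = disp.shape
    d = np.clip(disp.astype(np.int32), 0, max_d - 1)
    u_disp = np.zeros((max_d, w), np.int32)
    cols = np.arange(w)
    for v in range(h):
        u_disp[d[v], cols] += 1          # vertical projection, per column
    # pixels whose (column, disparity) bin forms a peak above tau
    mask = (u_disp[d, cols] > tau) & (disp > 0)
    return mask.astype(np.uint8) * 255, u_disp
```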
S22, removing the narrow gaps and the isolated small regions present in the preliminary binarized obstacle image by morphological closing and by inverting pixels, respectively, so as to preprocess the binarized obstacle image. The narrow gaps are mainly caused by spaces between obstacles and are likewise regarded as impassable; the isolated small regions are mainly caused by matching errors or invalid matching points and would cause misjudgments, so they are also eliminated. Narrow gaps are removed by the morphological closing operation, and isolated small regions are removed by detecting contours and inverting the color of the pixels enclosed by any contour smaller than a certain area, as in the sketch below;
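A minimal sketch of S22 with OpenCV; the kernel size and area threshold are assumptions.

```python
import cv2

def preprocess_obstacle_mask(mask, min_area=100, kernel_size=5):
    """Close narrow gaps, then erase (invert) isolated blobs smaller than
    min_area from the binarized obstacle mask."""
    k = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)  # fill narrow gaps
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < min_area:  # isolated small region
            cv2.drawContours(closed, [c], -1, 0, thickness=cv2.FILLED)
    return closed
```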
S23, in the V parallax map obtained by horizontally projecting the original parallax image, detecting the initial contour of the road surface with a nonparametric method, based on the preprocessed binarized obstacle image. In practice, the original parallax image is horizontally projected through the binarized obstacle image ObstacleMask to generate the V parallax image, and the initial contour of the road surface is detected from the V parallax image with a nonparametric method. Unlike methods that fit the road surface with a mathematical model (for example, line detection based on the Hough transform, or RANSAC line fitting), the nonparametric method detects the road surface simply and effectively and suits both planar and non-planar cases;
S24, eliminating the small, unconnected areas in the detected initial road surface contour with an area threshold, thereby obtaining the preprocessed image of the vehicle travelable area. In practice, since the road surface is continuous and occupies most of the initial contour, some small unconnected areas are eliminated with an area threshold; the result is shown in fig. 1. In this scheme, the U-V parallax images are computed from the original parallax image and the initial road surface contour is generated from them, which simplifies the road environment image captured by the camera, safeguards the later processing and judgment, and improves the detection precision. A sketch of the V-parallax road-profile step follows.
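A minimal sketch of the nonparametric road profile of S23, assuming the dominant disparity of each image row of the V-disparity histogram is taken as that row's road disparity; this row-wise mode is one simple nonparametric choice, not necessarily the patent's exact procedure.

```python
import numpy as np

def road_profile_from_v_disparity(disp, obstacle_mask, max_d=128):
    """Build the V-disparity histogram from non-obstacle pixels and take the
    dominant disparity of each image row as that row's road disparity.
    No plane model is fitted, so non-planar roads are handled as well."""
    h, _ = disp.shape
    d = np.clip(disp.astype(np.int32), 0, max_d - 1)
    road_d = np.zeros(h, np.int32)
    for v in range(h):
        row = d[v][(obstacle_mask[v] == 0) & (d[v] > 0)]
        if row.size:
            road_d[v] = np.bincount(row, minlength=max_d).argmax()
    return road_d  # per-row road disparity, i.e. the road surface profile
```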
In another example, the positions where obstacles meet the ground form gaps in the road surface contour of the initial contour image, and the regions of interest of people or vehicles on the road can be obtained by detecting these gaps. In step three, the method for obtaining the obstacle regions of interest includes:
S31, scanning each row of the road contour in the preprocessed image from left to right and from top to bottom, judging whether a pixel is a boundary point from the gray-level change of its adjacent pixels, and obtaining the gaps of the current row by judging the validity of those boundary points. In practice, in the initial road surface contour image the obstacles are black pixels and the road surface is white pixels, so on some row the edge of an obstacle and the road surface contour share a boundary point, near which there is a lateral gray-level change. Each row of the road surface contour is therefore scanned in turn, from left to right and from top to bottom, and whether a pixel is a boundary point is judged from the gray-level change of its adjacent pixels; because the edge of the road surface contour is not smooth and contains some noise, the validity of each boundary point is also judged. The gap positions of the current row are then detected in sequence, as shown in fig. 3;
the road surface contour region-of-interest detection is to output the source points and the limiting width of pedestrians and vehicles by inputting a road surface contour binary image, and comprises the following steps:
S311, traversing from the upper left corner of the road surface contour; when a pixel is white and the next pixel is black, marking it as a start point; continuing rightward in sequence, when a pixel is black and the next pixel is white, marking it as an end point;
S312, if a start point and an end point both exist, obtaining the width between them and judging whether it lies in the valid range; if so, keeping the length and the center point, otherwise discarding;
S313, if only a start point or only an end point exists, taking the outermost convex-hull contour point of the current row of the road surface contour as the missing end point or start point, and executing S312;
S314, if several contour regions of interest exist in one row, clearing all marks after one region of interest has been processed and judging the next region of interest in sequence;
S315, updating to the next row and returning to S311; a sketch of this row-scanning procedure follows;
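A minimal sketch of S311-S315 for a single row, assuming 255 = road (white) and 0 = obstacle (black); the valid width range [w_min, w_max] comes from the screening step described below.

```python
def row_gaps(contour_row, w_min, w_max):
    """Scan one binary row of the road contour and return (center, width)
    for each valid gap (white->black start point, black->white end point)."""
    gaps, start = [], None
    for u in range(len(contour_row) - 1):
        cur, nxt = contour_row[u], contour_row[u + 1]
        if cur == 255 and nxt == 0:       # white -> black: start point
            start = u + 1
        elif cur == 0 and nxt == 255 and start is not None:
            end = u                       # black -> white: end point
            width = end - start + 1
            if w_min <= width <= w_max:   # validity check on the width
                gaps.append(((start + end) // 2, width))
            start = None                  # clear marks, look for next gap
    return gaps
```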
S32, screening the width of the current row's gaps based on the parallax similarity of pixels of the same object within each gap, while expanding the initial mask of the obstacle with a region growing algorithm, to serve as a region of interest for potential motion on the road;
in actual operation, not all positions correspond to vehicles or pedestrians, and some positions may be caused by noise, so that according to the principle that the farther an obstacle is, the smaller the parallax is, the closer the obstacle is, the larger the parallax is, and the parallaxes of the pixels of the same object in each line gap have similarity, the relationship between the width of the obstacle and the parallax can be obtained, the width value corresponding to the obstacle on the image is obtained according to the preset width of the obstacle, the width of the current line gap is screened, in S32, the relationship between the width of the obstacle and the parallax is obtained based on the parallax similarity of the pixels of the same object in each line gap, the width value corresponding to the obstacle on the image is obtained according to the preset width of the obstacle, the screening of the width of the current line gap is realized because some noise exists in the road surface contour, and not all gaps are the region of interest that we need, therefore, it is filtered, and based on different camera models, the flow of the filtering formula is as follows:
The single pinhole camera model shown in fig. 5 has three coordinate systems: the camera coordinate system, the image coordinate system and the pixel coordinate system. The derivation from the camera coordinate system to the pixel coordinate system is as follows:
1. camera coordinate system -> image coordinate system:
x = f × X/Z
y = f × Y/Z
2. image coordinate system -> pixel coordinate system:
u = x/dx + c_x
v = y/dy + c_y
so that u = f_x × X/Z + c_x and v = f_y × Y/Z + c_y, with f_x = f/dx and f_y = f/dy;
3. camera matrix equation:
Z × [u, v, 1]^T = K × [X, Y, Z]^T, where K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]
The model of a binocular camera is shown in figs. 6-7, with the formula
Z = f × B/d, where the disparity d = u_left - u_right.
According to the camera model:
u = f × X/Z + c_x
v = f × Y/Z + c_y
Z = f × B/d
obtaining:
X = (u - c_x) × Z/f
Y = (v - c_y) × Z/f
and the width of a car or a person W = X_max - X_min.
Substituting the pixel model gives
u_max - u_min = (f × X_max/Z + c_u) - (f × X_min/Z + c_u) = f × W/Z
and, with Z = f × B/d,
u_max - u_min = W × d/B.
For the same obstacle, the disparities have similarity, so we use the disparity mean d_average instead and obtain the screening formula:
u_max - u_min = f × W/Z = W × d_average/B
wherein W is the width of a vehicle or pedestrian; Z is the depth from the camera to the obstacle; f is the focal length of the camera; c_u is the camera principal-point coordinate; B is the baseline of the camera; and d_average is the average parallax at the gap position of the current row.
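A minimal sketch of the back-projection and the screening rule above; the metric widths assumed for a pedestrian and a car are illustrative, not values from the patent.

```python
def pixel_to_3d(u, v, d, f, cx, cy, B):
    """Back-project pixel (u, v) with disparity d using the model above:
    Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f."""
    Z = f * B / d
    return (u - cx) * Z / f, (v - cy) * Z / f, Z

def expected_gap_width_px(W_metric, d_average, B):
    """Image width (pixels) of an obstacle of metric width W_metric at mean
    disparity d_average: f*W/Z = W*d_average/B (the focal length cancels)."""
    return W_metric * d_average / B

def gap_is_plausible(gap_width_px, d_average, B, w_lo=0.5, w_hi=2.5):
    """Keep a gap only if its pixel width matches an object between about
    0.5 m (pedestrian) and 2.5 m (vehicle) wide; w_lo/w_hi are assumptions."""
    return (expected_gap_width_px(w_lo, d_average, B)
            <= gap_width_px
            <= expected_gap_width_px(w_hi, d_average, B))
```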
The width constraint in complete-mask generation is implemented with an improved breadth-first search, which takes the seed points of pedestrians and vehicles, the limiting widths and the parallax image as input and outputs a complete mask image. The breadth-first search is configured to include:
S321, performing breadth-first search from the first source point, i.e. the seed point, marking it as visited, judging parallax similarity over the four-neighborhood of the source point, and marking each neighbor that is consistent with the seed point;
S322, in the breadth-first search, limiting the target's horizontal search depth by the limiting width, since the parallax of the boundary contour between targets is ambiguous and its value is not necessarily accurate;
and S323, repeating the above, judging the states of the remaining source points in sequence and whether they have been visited; if a source point has not been visited, performing breadth-first search from it. A sketch of this width-limited growth follows.
In another example, in S32 the initial seed point of the region growing algorithm is selected as the central pixel of each row in the initial mask of the obstacle, candidate points belonging to the ground are screened out using the road surface points obtained from the V parallax image, and the growth is constrained by the corresponding width of each row of the obstacle, yielding the obstacles with potential motion. Because matching tends to blur at edges in the parallax image, the parallax at an obstacle's edge may be close to that of the ground or the surrounding background, while pixels away from the edge are less affected; the initial seed point is therefore chosen as the central pixel of each row of the initial obstacle mask. Meanwhile, to prevent over-growth, candidate points belonging to the ground are screened out using the road surface points from the V parallax image, and the growth depth is limited by the corresponding width of each row of the obstacle. The obstacle finally obtained is shown as the white region (where the vehicle is on the road) in fig. 4.
In another example, in step four, when the motion attribute of an obstacle is judged, a first-order forward error-propagation model transfers uncertainty from the sensor to the final result, and the Manhattan distance of the independent optical flow measures the motion likelihood of each pixel, from which the motion attribute is judged. To detect moving objects in an image, a straightforward approach is to compensate for the motion vectors generated by the camera's own motion, which reduces the problem to the stationary-background case. For clarity, the following variables are defined. Independent optical flow: the optical flow generated only by object motion. Ego-motion (self-moving) optical flow: the optical flow generated only by camera motion. Mixed optical flow: the ego-motion optical flow generated by camera motion together with the independent optical flow generated by moving objects.
The independent optical flow can be used to judge the motion attribute of an obstacle. To obtain it, the ego-motion optical flow is first estimated from the camera pose and the scene depth, and the mixed optical flow of the scene is then estimated; the independent optical flow equals the mixed optical flow minus the ego-motion optical flow. The specific method for detecting moving pixels is prior art, so it is only briefly described as follows:
First, from the image feature points p_i of the current frame and the 3D points P_i obtained by triangulation from the previous image, the pose parameters {R, T} of the camera are estimated by minimizing the reprojection error with a nonlinear optimization algorithm:

{R, T} = argmin over (R, T) of Σ_i ρ( e_i^T Σ^{-1} e_i ), with e_i = p_i - π(R × P_i + T)

where ρ is the Huber cost function; Σ is a covariance matrix; and π is the projection function of the pinhole camera model.
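A practical sketch of this pose step: OpenCV's RANSAC PnP solver (with iterative refinement inside) stands in for the robust Huber-weighted optimization above; pts3d, pts2d and K are assumed inputs, and the images are assumed already undistorted.

```python
import numpy as np
import cv2

def estimate_pose(pts3d, pts2d, K):
    """Estimate {R, T} from 3D-2D correspondences by minimizing the
    reprojection error (RANSAC plus iterative refinement)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float64),  # P_i, triangulated from frame t-1
        np.asarray(pts2d, np.float64),  # p_i, features in frame t
        K, None)                        # None: no distortion coefficients
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
    return R, tvec
```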
Second, given a pixel point p_{t-1} = (u_{t-1}, v_{t-1}, 1)^T of frame t-1, its predicted point p_t = (u_t, v_t, 1)^T in frame t is:

p_t = (1/Z_t) × K × (R × Z_{t-1} × K^{-1} × p_{t-1} + T)

wherein K is the intrinsic matrix of the camera and Z_{t-1} is the depth at time t-1.
The ego-motion optical flow of p_t, g = (g_u, g_v)^T, is then:
g = (g_u, g_v)^T = (u_t - u_{t-1}, v_t - v_{t-1})^T
Let the mixed optical flow be m = (m_u, m_v)^T; the independent optical flow q = (q_u, q_v)^T is then:
q = m - g = (m_u - g_u, m_v - g_v)^T
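A dense sketch of the ego-flow subtraction above, assuming a mixed flow field (e.g. from cv2.calcOpticalFlowFarneback) and a valid depth map of frame t-1; it returns the Manhattan distance of the independent flow used below as the per-pixel motion score.

```python
import numpy as np

def independent_flow_score(mixed_flow, depth_prev, K, R, T):
    """Compute q = m - g per pixel and return |q_u| + |q_v| (the Manhattan
    distance) as the motion-likelihood score. mixed_flow is HxWx2;
    depth_prev is the HxW depth of frame t-1 (assumed nonzero)."""
    h, w = depth_prev.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(np.float64)
    # p_t = (1/Z_t) * K * (R * Z_{t-1} * K^{-1} * p_{t-1} + T)
    P = np.linalg.inv(K) @ pix * depth_prev.reshape(1, -1)
    P2 = R @ P + T.reshape(3, 1)
    proj = K @ P2
    p2 = proj[:2] / proj[2]
    g = (p2 - pix[:2]).reshape(2, h, w)      # ego-motion flow (g_u, g_v)
    q_u = mixed_flow[..., 0] - g[0]          # independent flow q = m - g
    q_v = mixed_flow[..., 1] - g[1]
    return np.abs(q_u) + np.abs(q_v)
```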
In theory, if a point is stationary its independent optical flow is 0, and otherwise it is not. But simply comparing the independent optical flow against a fixed threshold to distinguish moving from stationary rarely gives a satisfactory result, because different 3D points produce different motion vectors in the image and cannot be measured by one fixed threshold. Moreover, the computation of the independent optical flow carries errors, and ignoring these estimation errors causes a large number of false detections. To solve these problems, this scheme takes account of the estimation error of every stage of the computation: a first-order forward error-propagation model transfers the uncertainty from the sensor to the final result, while the Manhattan distance of the independent optical flow measures the motion likelihood of each pixel, from which its motion attribute is judged.
The invention treats obstacles on the road as potential moving targets, which may be stationary or moving. When optical flow is used to judge motion, the camera pose is estimated from two consecutive frames by minimizing the reprojection error, giving the ego-motion optical flow generated by the camera's motion; the mixed optical flow of the scene is estimated by an optical flow technique; and the independent optical flow of the target is obtained as the difference of the two, from which the target's motion attribute is judged. Before this judgment the image must be preprocessed to remove complex environmental factors unrelated to the road, keeping only the road information and the obstacle information on it, which effectively improves the accuracy of the later optical-flow-based moving target judgment.
The above scheme is merely illustrative of a preferred example, and is not limiting. When the invention is implemented, appropriate replacement and/or modification can be carried out according to the requirements of users.
The number of apparatuses and the scale of the process described herein are intended to simplify the description of the present invention. Applications, modifications and variations of the present invention will be apparent to those skilled in the art.
While embodiments of the invention have been disclosed above, it is not intended to be limited to the uses set forth in the specification and examples. It can be applied to all kinds of fields suitable for the present invention. Additional modifications will readily occur to those skilled in the art. It is therefore intended that the invention not be limited to the exact details and illustrations described and illustrated herein, but fall within the scope of the appended claims and equivalents thereof.

Claims (8)

1. A vision-based intelligent vehicle moving target detection method, characterized by comprising the following steps:
step one, generating a corresponding original parallax image for a road image acquired by a camera based on a stereo matching algorithm, so as to construct corresponding U-V parallax images;
step two, obtaining a preprocessed image of the vehicle travelable area based on the U-V parallax images;
step three, generating regions of interest for potential motion on the road based on the preprocessed image, to serve as potential obstacle targets;
step four, judging the motion attribute of each obstacle by combining optical flow with the camera's ego-motion;
in step three, the method for obtaining the obstacle regions of interest includes:
S31, scanning each row of the road contour in the preprocessed image from left to right and from top to bottom, judging whether a pixel is a boundary point from the gray-level change of its adjacent pixels, and obtaining the gaps of the current row by judging the validity of those boundary points;
S32, screening the width of the current row's gaps based on the parallax similarity of pixels of the same object within each gap, while expanding the initial mask of the obstacle with a region growing algorithm, to serve as a region of interest for potential motion on the road.
2. The vision-based intelligent vehicle moving target detection method as claimed in claim 1, wherein in step one the stereo matching algorithm is configured to adopt the efficient large-scale stereo matching (ELAS) algorithm to obtain the corresponding dense parallax image.
3. The vision-based intelligent vehicle moving target detection method as claimed in claim 1, wherein in step two corresponding U-V parallax images are constructed from the original parallax image to generate an initial road surface contour, the specific method comprising:
S21, in the U parallax image obtained by vertically projecting the original parallax image, binarizing the U parallax image with a threshold τ: if the gray value of a pixel in the U parallax image is higher than τ, the corresponding pixels of the original parallax image are marked as potential obstacles and removed, so as to generate a preliminary binarized obstacle image;
S22, removing the narrow gaps and the isolated small regions present in the preliminary binarized obstacle image by morphological closing and by inverting the enclosed pixels, respectively, so as to preprocess the binarized obstacle image;
S23, in the V parallax image obtained by horizontally projecting the original parallax image, detecting the initial contour of the road surface with a nonparametric method based on the preprocessed binarized obstacle image;
and S24, eliminating small, unconnected areas in the detected initial road surface contour with an area threshold, so as to obtain the preprocessed image of the vehicle travelable area.
4. The vision-based intelligent vehicle moving target detection method as claimed in claim 1, wherein in S31 the method for determining the regions of interest by judging the validity of pixels comprises:
S311, traversing from the upper left corner of the road surface contour; when a pixel is white and the next pixel is black, marking it as a start point; continuing rightward in sequence, when a pixel is black and the next pixel is white, marking it as an end point;
S312, if a start point and an end point both exist, obtaining the width between them and judging whether it lies in the valid range; if so, keeping the length and the center point, otherwise discarding;
S313, if only a start point or only an end point exists, taking the outermost convex-hull contour point of the current row of the road surface contour as the missing end point or start point, and executing S312;
S314, if several contour regions of interest exist in one row, clearing all marks after one region of interest has been processed and judging the next region of interest in sequence;
S315, updating to the next row and returning to S311.
5. The vision-based intelligent vehicle moving target detection method as claimed in claim 1, wherein in S32, based on the parallax similarity of pixels of the same object within each row's gap, the relationship between obstacle width and parallax is derived, and the width that an obstacle of preset physical width occupies on the image is obtained, so as to screen the width of the current row's gap, the screening formula being:

u_max - u_min = f × W/Z = W × d_average/B

wherein W is the width of a vehicle or pedestrian; Z is the depth from the camera to the obstacle; f is the focal length of the camera; c_u is the camera principal-point coordinate; B is the baseline of the camera; and d_average is the average parallax at the gap position of the current row.
6. The vision-based intelligent vehicle moving target detection method as claimed in claim 5, wherein in S32 the initial seed point of the region growing algorithm is selected as the central pixel of each row in the initial mask of the obstacle, candidate points belonging to the ground are screened out using the road surface points obtained from the V parallax image, and the growth is constrained by the corresponding width of each row of the obstacle, yielding the obstacles with potential motion.
7. The vision-based intelligent vehicle moving target detection method as claimed in claim 6, wherein the width constraint method is configured to include:
S321, performing breadth-first search from the first source point, i.e. the seed point, marking it as visited, judging parallax similarity over the four-neighborhood of the source point, and marking each neighbor that is consistent with the seed point;
S322, in the breadth-first search, limiting the horizontal search depth by the limiting width;
and S323, judging the states of the remaining source points in sequence and whether they have been visited; if not, returning to S321.
8. The vision-based intelligent vehicle moving target detection method as claimed in claim 1, wherein in step four, when the motion attribute of an obstacle is judged, a first-order forward error-propagation model transfers uncertainty from the sensor to the final result, and the Manhattan distance of the independent optical flow measures the motion likelihood of each pixel, from which the motion attribute is judged.
CN202011007965.5A 2020-09-23 2020-09-23 Intelligent vehicle moving target detection method based on vision Active CN112115889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011007965.5A CN112115889B (en) 2020-09-23 2020-09-23 Intelligent vehicle moving target detection method based on vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011007965.5A CN112115889B (en) 2020-09-23 2020-09-23 Intelligent vehicle moving target detection method based on vision

Publications (2)

Publication Number Publication Date
CN112115889A CN112115889A (en) 2020-12-22
CN112115889B (en) 2022-08-30

Family

ID=73800527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011007965.5A Active CN112115889B (en) 2020-09-23 2020-09-23 Intelligent vehicle moving target detection method based on vision

Country Status (1)

Country Link
CN (1) CN112115889B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972276A (en) * 2022-06-05 2022-08-30 长沙烽铭智能科技有限公司 Automatic driving distance judgment algorithm for vehicle
CN115880674B (en) * 2023-03-01 2023-05-23 上海伯镭智能科技有限公司 Obstacle avoidance steering correction method based on unmanned mine car
CN116206281B (en) * 2023-04-27 2023-07-18 北京惠朗时代科技有限公司 Sight line detection method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109269478A (en) * 2018-10-24 2019-01-25 南京大学 A kind of container terminal based on binocular vision bridge obstacle detection method
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109903334A (en) * 2019-02-25 2019-06-18 北京工业大学 A kind of binocular video Mobile object detection method based on time consistency
CN110189377A (en) * 2019-05-14 2019-08-30 河南省计量科学研究院 A kind of high precision speed-measuring method based on binocular stereo vision

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2463843B1 (en) * 2010-12-07 2015-07-29 Mobileye Vision Technologies Ltd. Method and system for forward collision warning
JP5773944B2 (en) * 2012-05-22 2015-09-02 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109269478A (en) * 2018-10-24 2019-01-25 南京大学 A kind of container terminal based on binocular vision bridge obstacle detection method
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109903334A (en) * 2019-02-25 2019-06-18 北京工业大学 A kind of binocular video Mobile object detection method based on time consistency
CN110189377A (en) * 2019-05-14 2019-08-30 河南省计量科学研究院 A kind of high precision speed-measuring method based on binocular stereo vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Bird's eye view localization of surrounding vehicles: Longitudinal and lateral distance estimation with partial appearance; Elijah S. Lee et al.; Robotics and Autonomous Systems; 2018-11-30; Vol. 112; pp. 178-189 *
Drivable region segmentation method based on binocular vision; Duan Jianmin et al.; Electronic Measurement Technology; 2019-09-30 (No. 18, 2019); Section 2.1, pp. 140-141 *
Research on pedestrian detection and tracking based on binocular vision; Long Lingli; China Masters' Theses Full-text Database, Information Science and Technology; 2019-01-15 (No. 01, 2019); Section 1.3.1 p. 2, Section 2.3.1 p. 14, Fig. 3-6 p. 25, Sections 3.2-3.3 pp. 19-20, Figs. 3-1 and 3-2 *
Anomalous behavior detection algorithm based on sparse over-complete representation; Lu Peng et al.; Journal of Zhengzhou University (Engineering Science); 2016-06-06 (No. 06, 2016); pp. 72-76 *

Also Published As

Publication number Publication date
CN112115889A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
Barnes et al. Find your own way: Weakly-supervised segmentation of path proposals for urban autonomy
CN112115889B (en) Intelligent vehicle moving target detection method based on vision
Van Der Mark et al. Real-time dense stereo for intelligent vehicles
CN105550665B (en) A kind of pilotless automobile based on binocular vision can lead to method for detecting area
CN109934848B (en) Method for accurately positioning moving object based on deep learning
Franke et al. Real-time stereo vision for urban traffic scene understanding
Kochanov et al. Scene flow propagation for semantic mapping and object discovery in dynamic street scenes
GB2554481A (en) Autonomous route determination
Budzan et al. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
CN111814602B (en) Intelligent vehicle environment dynamic target detection method based on vision
CN104508728B (en) Three-dimensional body detection device
Perrollaz et al. Using the disparity space to compute occupancy grids from stereo-vision
CN103679121B (en) Method and system for detecting roadside using visual difference image
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
Dornaika et al. A new framework for stereo sensor pose through road segmentation and registration
Zhou et al. On modeling ego-motion uncertainty for moving object detection from a mobile platform
Neumann et al. Free space detection: A corner stone of automated driving
Ma et al. A real time object detection approach applied to reliable pedestrian detection
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
Omar et al. Detection and localization of traffic lights using yolov3 and stereo vision
CN113706599B (en) Binocular depth estimation method based on pseudo label fusion
CN113658240B (en) Main obstacle detection method and device and automatic driving system
Ramirez et al. Go with the flow: Improving Multi-View vehicle detection with motion cues
Nedevschi et al. Improving accuracy for Ego vehicle motion estimation using epipolar geometry
Dornaika et al. A featureless and stochastic approach to on-board stereo vision system pose

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant