CN112506195B - Vehicle autonomous positioning system and positioning method based on vision and chassis information - Google Patents

Vehicle autonomous positioning system and positioning method based on vision and chassis information

Info

Publication number
CN112506195B
CN112506195B (application CN202011402425.7A)
Authority
CN
China
Prior art keywords
vehicle
module
image
moment
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011402425.7A
Other languages
Chinese (zh)
Other versions
CN112506195A (en)
Inventor
张素民
卢守义
支永帅
何睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202011402425.7A priority Critical patent/CN112506195B/en
Publication of CN112506195A publication Critical patent/CN112506195A/en
Application granted granted Critical
Publication of CN112506195B publication Critical patent/CN112506195B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Abstract

The invention provides a vehicle autonomous positioning system and positioning method based on vision and chassis information. The positioning system comprises a visual perception module and a chassis motion information acquisition module, both of which input their collected data into a data alignment module. The data alignment module aligns the data according to timestamps and then inputs the images into an image processing module for processing; the image processing module passes its results in turn through a state quantity elimination and augmentation module to a motion prediction module. The data alignment module also inputs the vehicle chassis information into the motion prediction module, which inputs its prediction results into an observation updating module; the observation updating module positions the vehicle in real time based on the observed quantities and the prediction results. By fusing visual information with vehicle chassis information, the invention can reduce the influence of positioning errors on the positioning process and realizes accurate positioning of the vehicle when the light is uneven or dim.

Description

Vehicle autonomous positioning system and positioning method based on vision and chassis information
Technical Field
The invention belongs to the technical field of automatic vehicle positioning, and particularly relates to a vehicle autonomous positioning system and method based on vision and chassis information.
Background
In recent years, with the improvement of living standards, car ownership has continued to increase. To accommodate more vehicles in limited parking space, parking spaces are designed ever narrower, placing ever higher demands on drivers' skill. An automatic parking system frees the hands of drivers who are not skilled at parking and improves parking safety, and therefore has broad application prospects.
At present, visual SLAM systems based on a monocular camera and dead-reckoning (track calculation) systems based on vehicle chassis information are widely used in the vehicle positioning process. However, in environments with uneven or dim light, a monocular camera cannot collect enough environmental feature points or track them accurately, so the SLAM system's estimate of the vehicle pose is not accurate enough. When a dead-reckoning system based on vehicle chassis information performs positioning, its positioning error accumulates continuously over the positioning process until the positioning information finally becomes unusable.
Disclosure of Invention
The invention aims to provide a vehicle autonomous positioning system based on vision and chassis information, which fuses the visual SLAM positioning process with the chassis information of the vehicle by means of an observation updating module, realizes accurate positioning of the vehicle in a parking lot, and provides accurate vehicle position information for vehicle trajectory planning and control.
The invention also aims to provide a vehicle autonomous positioning method based on vision and chassis information, which detects feature points in the images obtained by visual perception to determine the vehicle pose and landmark coordinates in a world coordinate system and thereby obtain a state vector, obtains a motion prediction of the vehicle from the chassis information, and corrects the motion prediction with the observed quantity derived from the state vector to obtain an accurate vehicle position.
The technical scheme adopted by the invention is a vehicle autonomous positioning system based on vision and chassis information, comprising:
the visual perception module, used for acquiring environment images while the vehicle is running and inputting the acquired images into the data alignment module;
the chassis motion information acquisition module, used for acquiring wheel speed data and steering wheel angle data while the vehicle is running and inputting the acquired data into the data alignment module;
the data alignment module, used for aligning the images, the wheel speed data and the steering wheel angle data according to their timestamps, inputting the time-sequenced image data into the image processing module, and inputting the wheel speed data and steering wheel angle data into the motion prediction module;
the image processing module, used for preprocessing the aligned images, detecting and tracking the feature points in the images, converting the coordinates of the feature points in the images into coordinates of landmark points in a world coordinate system, and inputting the landmark coordinates into the state quantity elimination and augmentation module;
the state quantity elimination and augmentation module, used for updating the landmark points to obtain the state vector and inputting the state vector into the motion prediction module;
the motion prediction module, used for predicting the motion of the vehicle according to the vehicle chassis information and inputting the prediction result into the observation updating module;
and the observation updating module, used for correcting the prediction result with the observed landmark coordinates to obtain real-time position information of the vehicle.
Furthermore, the visual perception module is a monocular camera, and the chassis motion information acquisition module comprises a wheel speed sensor and a steering wheel angle sensor.
Further, the image processing module comprises an image preprocessing module, a feature point detection module and a feature point tracking module;
the image preprocessing module preprocesses the images in the time sequence, the feature point detection module detects the feature points in each image, and the feature point tracking module tracks the movement of the feature points across the time-sequenced images.
The autonomous positioning method of the vehicle autonomous positioning system based on vision and chassis information comprises the following steps:
step 1, a visual perception module arranged in the middle of the upper side of the vehicle windshield collects roadside environment images during vehicle running, a chassis motion information acquisition module collects wheel speed data and steering wheel angle data during vehicle running, and the visual perception module and the chassis motion information acquisition module input the collected data into a data alignment module;
step 2, the data alignment module aligns the images, the wheel speed data and the steering wheel angle data according to their timestamps, inputs the aligned image sequence into the image processing module, and inputs the wheel speed and steering wheel angle data into the motion prediction module;
step 3, the image processing module preprocesses all images, detects the characteristic points in each image and tracks the coordinates of the characteristic points in the images;
step 4, establishing a world coordinate system by taking the center of mass of the vehicle at the entrance of the parking lot as the origin of coordinates, the longitudinal axis of the vehicle as an X axis and the transverse axis of the vehicle as a Y axis, and determining the pose of the vehicle at the moment k-1;
step 5, converting the coordinates of the feature points at time k in the image into coordinates of landmark points in the world coordinate system by triangulation, inputting the landmark coordinates into the state quantity elimination and augmentation module to eliminate and augment the state quantities, obtaining the observed state vector X_k′ at time k, and inputting X_k′ into the motion prediction module, where X_k′ comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observed state vector X_k′ at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation updating module, which calculates the observed quantity Z_k implied by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the position of the vehicle at time k and realizing autonomous positioning.
Further, the preprocessing of the image in step 3 includes distortion removal processing and histogram equalization processing.
Further, step 3 determines the coordinates of the feature points in the image at time k according to the following steps:
step 31, using Gaussian filtering to obtain a blurred version of the original image, down-sampling the blurred image so that its height and width are half those of the original, and repeating the Gaussian filtering and down-sampling to build an image pyramid;
step 32, calculating the rotation matrix between time k-1 and time k,

R = [ cos ψ  −sin ψ; sin ψ  cos ψ ]

where ψ is the planar yaw angle of the vehicle between time k-1 and time k;
step 33, determining feature points to be tracked in the image at the moment k-1 by using a FAST corner detection rule;
step 34, rotating the feature points in the image at the time k-1 by using a rotation matrix, taking the coordinates of the rotated feature points as initial estimation coordinates of optical flow tracking, taking the image pyramid and the initial estimation coordinates as input of an LK optical flow, and tracking the coordinates of the feature points in the image at the time k by using an LK optical flow method;
step 35, if the number of successfully tracked feature points is too small, dividing the image into 8 × 8 grids, determining the FAST corners of the image, removing outliers with 2-Point RANSAC, and supplementing inliers among the FAST corners into the feature point set until the number of feature points in each grid is at least 10; these feature points serve as the feature points to be tracked in the next positioning cycle.
Further, in step 5 the state quantities are eliminated and augmented according to the following rules:
if a new feature point has been tracked continuously over three frames, triangulate it between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system, and add the coordinates to the state vector;
if the feature points corresponding to a landmark in the state vector are not observed for ten consecutive frames, remove the landmark's coordinates from the state vector.
Further, the predicted state vector X_k at time k in step 6 is calculated as shown in equation (2):

X_k = f(X_k′, U_k) = [ x_{k−1} + v_k Δt cos φ_{k−1},  y_{k−1} + v_k Δt sin φ_{k−1},  φ_{k−1} + (v_k Δt / l) tan δ_k,  x_1, y_1, …, x_N, y_N ]ᵀ   (2)

where U_k is the control input at time k, U_k = (v_k, δ_k); v_k and δ_k are respectively the speed and front wheel angle of the vehicle at time k; x_{k−1}, y_{k−1}, φ_{k−1} are respectively the abscissa, ordinate and yaw angle of the vehicle in the world coordinate system at time k-1; N is the total number of landmark points, and (x_i, y_i), 1 ≤ i ≤ N, are the coordinates of the i-th landmark point; Δt is the adjacent sampling interval; φ_k is the yaw angle of the vehicle at time k; and l is the wheelbase;
The observed quantity Z_k implied by X_k is calculated as shown in equation (3):

Z_k = h(X_k) + ξ_k = [ r_1, θ_1, …, r_N, θ_N ]ᵀ + ξ_k   (3)

with

r_i = √((x_i − x_k)² + (y_i − y_k)²),  θ_i = arctan((y_i − y_k)/(x_i − x_k)) − φ_k

where ξ_k is the observation noise, a Gaussian with mean 0 and covariance matrix Q; r_i is the distance from the i-th landmark point to the vehicle's centre of mass; θ_i is the angle between the line connecting the i-th landmark point to the vehicle's centre of mass and the longitudinal axis of the vehicle; and x_k, y_k are respectively the abscissa and ordinate of the vehicle in the world coordinate system at time k.
Further, the prediction and update process in step 6 is as follows:
step 61, prediction
compute the prior mean X̄_k and prior covariance matrix Σ̄_xx(k) of the predicted state vector at time k using equation (4):

X̄_k = f(X_k′, U_k),  Σ̄_xx(k) = J_t Σ_xx′(k) J_tᵀ + J_u Σ_u J_uᵀ   (4)

where X_k′ is the observed state vector at time k; Σ_xx′(k) is the covariance matrix of X_k′; J_t is the Jacobian matrix of f(X_k′, U_k) with respect to X_k′, whose landmark block is the identity matrix I; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k′, U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; and J_uᵀ is the transpose of J_u;
step 62, update
compute the Kalman gain K_k at time k, and use K_k to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k and posterior covariance matrix Σ̂_xx(k), from which the pose of the vehicle in the world coordinate system at time k is obtained and the final position of the vehicle at time k is autonomously positioned; the calculation is as shown in equations (5) and (6):

K_k = Σ̄_xx(k) G_kᵀ ( G_k Σ̄_xx(k) G_kᵀ + Q )⁻¹   (5)

X̂_k = X̄_k + K_k ( Z_k − h(X̄_k) ),  Σ̂_xx(k) = ( I − K_k G_k ) Σ̄_xx(k)   (6)

where G_k is the Jacobian matrix of h(X_k) with respect to X_k and G_kᵀ is the transpose of G_k.
the invention has the beneficial effects that: according to the invention, the monocular camera is used for obtaining the state vector of observation formed by the pose of the vehicle in the world coordinate system and the landmark point coordinates, the wheel speed sensor and the steering wheel angle sensor are used for obtaining the chassis information of the vehicle, the predicted state vector of the vehicle is further obtained, the predicted state vector is corrected by using the observed quantity introduced by the state vector sensed visually to obtain the real-time position of the vehicle, the robustness of a positioning system is enhanced, the state vector is timely eliminated and expanded, the positioning real-time performance is improved, and the positioning result is more accurate and reliable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the autonomous positioning of the present invention.
FIG. 2 is a diagram of an Ackerman steering model for a vehicle.
Fig. 3 is a block diagram of the autonomous positioning system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 3, the vehicle autonomous positioning system based on vision and chassis information comprises a visual perception module and a chassis motion information acquisition module. The visual perception module is a monocular camera used to acquire environment images while the vehicle is running. The chassis motion information acquisition module comprises a wheel speed sensor and a steering wheel angle sensor: the wheel speed sensor acquires the wheel speed of the vehicle in real time, from which the vehicle speed is obtained, and the steering wheel angle sensor acquires the steering wheel angle of the vehicle in real time. Both modules input their collected data into the data alignment module, which aligns the images and the chassis data according to their timestamps and inputs the aligned image data into the image processing module. The image processing module comprises an image preprocessing module, a feature point tracking module and a feature point detection module: the image preprocessing module preprocesses the images, the feature point detection module detects the feature points in the images, and the feature point tracking module tracks the movement of the feature points across groups of images in the time sequence. The feature point data of the images are converted into landmark coordinates in the world coordinate system and input into the state quantity elimination and augmentation module, which eliminates and augments the state quantities according to the elimination and augmentation conditions and passes the updated observed state vector to the motion prediction module. The data alignment module also inputs the aligned chassis information into the motion prediction module, which predicts the motion of the vehicle from the chassis signals and the observed state vector and passes the prediction result to the observation updating module; the observation updating module corrects the prediction result with the observed quantity to obtain real-time position data of the vehicle.
As shown in fig. 1, the method for autonomous positioning of a vehicle based on visual and chassis information comprises the following steps:
step 1, a monocular camera serving as the visual perception module is arranged in the middle of the upper side of the vehicle windshield and collects roadside environment images while the vehicle is driving; the chassis motion information acquisition module collects wheel speed data and steering wheel angle data while the vehicle is driving; both modules input the collected data into the data alignment module;
step 2, after the data alignment module aligns the images with the vehicle chassis data according to their timestamps, the aligned image sequence is input into the image processing module, and the wheel speed and steering wheel angle data are input into the motion prediction module;
step 3, the image preprocessing module in the image processing module performs distortion removal and histogram equalization on all images; the feature point detection module determines the FAST corners in each image and takes them as the feature points to be tracked; the feature point tracking module calculates the rotation matrix between the two images at times k-1 and k using the vehicle kinematics model shown in fig. 2, rotates the feature points in the image at time k-1, takes the coordinates of the rotated feature points as the initial estimated coordinates for optical flow tracking, and tracks the coordinates of the feature points in the image at time k by the LK optical flow method;
step 4, establishing a world coordinate system taking the vehicle's centre of mass at the parking lot entrance as the coordinate origin, the longitudinal axis of the vehicle as the X axis (positive in the vehicle's direction of travel) and the transverse axis of the vehicle as the Y axis (positive towards the vehicle's left), and obtaining the pose of the vehicle in the world coordinate system at time k-1;
step 5, converting the coordinates of the feature points at time k in the image into coordinates of landmark points in the world coordinate system by triangulation (a triangulation sketch follows step 6 below), inputting the coordinates into the state quantity elimination and augmentation module, and eliminating and augmenting the state quantities according to the elimination and augmentation conditions to obtain the observed state vector X_k′ at time k; X_k′ is input into the motion prediction module and comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observed state vector X_k′ at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation updating module; the observation updating module calculates the observed quantity Z_k implied by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the position of the vehicle at time k and realizing autonomous positioning.
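As a concrete illustration of the triangulation referenced in step 5, the following Python sketch uses OpenCV's cv2.triangulatePoints; the intrinsic matrix, inter-frame motion and pixel tracks are placeholder values, not data from the patent:

```python
import cv2
import numpy as np

# P1 and P2 are the 3x4 projection matrices K[R|t] of the first and last
# frame of a feature track (placeholder geometry, for illustration only).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = np.eye(3)                              # assumed inter-frame rotation
t2 = np.array([[0.5], [0.0], [0.0]])        # assumed inter-frame translation
P2 = K @ np.hstack([R2, t2])

pts1 = np.array([[412.0], [188.0]])         # 2x1: tracked pixel, first frame
pts2 = np.array([[405.0], [190.0]])         # 2x1: same feature, last frame
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
landmark = (X_h[:3] / X_h[3]).ravel()       # Euclidean landmark coordinates
```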
The invention performs the distortion removal and histogram equalization of the images as follows:
step 1, the calibration board is photographed multiple times to establish multiple constraints, and the distortion parameters k_1, k_2, k_3, p_1, p_2 of the camera are calculated by maximum likelihood estimation; the correct position in the image of any point in the camera coordinate system is then determined through the distortion parameters:

x_distorted = x_0 (1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + 2 p_1 x_0 y_0 + p_2 (r² + 2 x_0²)
y_distorted = y_0 (1 + k_1 r² + k_2 r⁴ + k_3 r⁶) + p_1 (r² + 2 y_0²) + 2 p_2 x_0 y_0
u = f_x x_distorted + c_x
v = f_y y_distorted + c_y

where (x_0, y_0) are the original coordinates of a feature point in the image before distortion correction, (x_distorted, y_distorted) are its distortion-corrected coordinates, (u, v) are the pixel coordinates of the feature point in the image after distortion correction, f_x, f_y, c_x, c_y are intrinsic parameters of the monocular camera, and r is the distance from the feature point to the coordinate origin in the camera coordinate system;
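For illustration, a minimal OpenCV sketch of this correction; the intrinsics and distortion coefficients below are placeholders (in practice they come from calibration-board images, e.g. via cv2.calibrateCamera), and OpenCV's coefficient order is (k_1, k_2, p_1, p_2, k_3):

```python
import cv2
import numpy as np

# Placeholder intrinsics f_x, f_y, c_x, c_y and distortion parameters.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([0.05, -0.12, 0.001, 0.0005, 0.03])  # k1, k2, p1, p2, k3

img = cv2.imread("frame.png")
undistorted = cv2.undistort(img, K, dist)  # applies the radial/tangential model above
```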
step 2, the frequency p_m of the m-th grey level g_m in the image is calculated, and the forward accumulation

s_m = Σ_{j=0}^{m} p_j

is obtained, where p_j is the frequency of any grey level up to m; s_m is rounded and expanded to obtain S_m = int[255 × s_m + 0.5]; the correspondence between the original pixels and the rounded-expanded values is determined, and histogram equalization is performed on the original image based on this correspondence, where m is a variable indexing the grey levels, 0 ≤ m ≤ 255, and j ∈ [0, m].
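A Python sketch of this mapping on an 8-bit greyscale image (cv2.equalizeHist implements essentially the same mapping):

```python
import numpy as np

def equalize(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization following the s_m / S_m formulas above."""
    hist = np.bincount(gray.ravel(), minlength=256)
    p = hist / gray.size                              # p_m: frequency of grey level m
    s = np.cumsum(p)                                  # s_m: forward accumulation of p_j
    S = np.uint8(np.clip(255.0 * s + 0.5, 0, 255))    # S_m = int[255 * s_m + 0.5]
    return S[gray]                                    # map each pixel through S_m
```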
The invention performs feature point detection and tracking on the preprocessed images as follows:
step 31, to prevent optical flow tracking from falling into a local minimum when the camera moves fast, Gaussian filtering is applied to the histogram-equalized image to obtain a blurred version of the original image; the blurred image is down-sampled so that its height and width are half those of the original, and the Gaussian filtering and down-sampling are repeated to build a complete image pyramid;
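With OpenCV, the Gaussian-filter-and-halve chain of step 31 is one call per level:

```python
import cv2

def build_pyramid(img, levels=4):
    """Gaussian blur + 2x downsample chain, as in step 31."""
    pyramid = [img]
    for _ in range(levels - 1):
        # cv2.pyrDown Gaussian-filters and halves width and height in one call
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```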
step 32, given the speed v_{k−1} and steering wheel angle θ_{k−1} of the vehicle at time k-1, the front wheel angle δ_{k−1} and yaw rate ω_{k−1} of the vehicle at time k-1 are calculated:

δ_{k−1} = θ_{k−1} / i,  ω_{k−1} = v_{k−1} / ρ_{k−1}

where i is the steering gear ratio of the vehicle and ρ_{k−1} is the turning radius of the vehicle at time k-1; from the yaw rate ω_{k−1} at time k-1 and the sampling interval Δt, the planar yaw angle ψ = ω_{k−1} Δt of the vehicle between time k-1 and time k is calculated, giving the rotation matrix from time k-1 to time k:

R = [ cos ψ  −sin ψ; sin ψ  cos ψ ]
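A sketch of this chassis-based rotation prediction; the turning-radius relation ρ = l / tan δ is the Ackermann assumption of fig. 2, not stated explicitly in this paragraph:

```python
import numpy as np

def predicted_rotation(v_prev, steer_prev, ratio, wheelbase, dt):
    """Image-plane rotation between k-1 and k predicted from chassis data.

    Assumes delta = theta / i and rho = l / tan(delta) (Ackermann model),
    so omega = v / rho and psi = omega * dt.
    """
    delta = steer_prev / ratio                  # front wheel angle delta_{k-1}
    omega = v_prev * np.tan(delta) / wheelbase  # yaw rate omega_{k-1} = v / rho
    psi = omega * dt                            # planar yaw between the two frames
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])
```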
step 33, determining the feature points to be tracked in the image at time k-1 using the FAST corner detection rule;
the feature points are determined as follows: for each pixel in the image, the difference between its grey value and the grey values of the pixels on a circle of radius 3 around it (16 pixels in total) is calculated; if the difference between the grey value of a pixel and those of 12 of the 16 surrounding pixels exceeds a set threshold, the pixel is regarded as a FAST corner, i.e. a feature point to be tracked;
step 34, the feature points in the image at time k-1 are rotated using the rotation matrix, the coordinates of the rotated feature points are taken as the initial estimated coordinates for optical flow tracking, and the image pyramid and the initial estimated coordinates are taken as the input of the LK optical flow algorithm; the offset of a pixel point is decomposed into a transferred optical flow vector and a residual optical flow vector, where the transferred optical flow vector is the offset passed from the previous layer of the image pyramid down to the current layer and is initialized to 0 at the top layer of the pyramid; the error matching function based on the grey-level-invariance assumption is

ε(v) = Σ_{x=P_x−w_x}^{P_x+w_x} Σ_{y=P_y−w_y}^{P_y+w_y} ( P^L(x, y) − C^L(x + t_x + v_x, y + t_y + v_y) )²

where P_x, P_y are the coordinates of the pixel point on the X and Y axes; w_x, w_y are the sizes of the pixel window used in optical flow tracking on the X and Y axes (the pixels within the window are assumed to share the same motion); P^L is the L-th layer of the image pyramid created for the previous frame and C^L is the L-th layer of the image pyramid created for the current frame; (t_x, t_y) are the coordinates of the transferred optical flow vector on the X and Y axes; and v_x, v_y are the motion velocities of the pixel point on the X and Y axes;
the above holds only when the pixel displacement is very small, so to make the solution more accurate the process is iterated several times; the residual optical flow vector obtained at the n-th iteration is v_n = v_{n−1} + η_n, where η_n is the residual optical flow increment obtained from the error matching function at the n-th iteration; iteration stops when ‖η_n‖ is smaller than a set threshold or a set number of iterations is reached, and the final offset of the pixel point is d = t_0 + v_0, where t_0 is the transferred optical flow vector at layer 0 after the iteration ends and v_0 is the residual optical flow vector at layer 0 after the iteration ends;
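Step 34 maps closely onto OpenCV's pyramidal LK tracker, which accepts exactly such an initial estimate via the OPTFLOW_USE_INITIAL_FLOW flag. In the sketch below, prev_gray and cur_gray are the preprocessed frames at k-1 and k, prev_pts is the corner array from the FAST sketch above, and R is the matrix from predicted_rotation; rotating about the image centre is an assumption, since the patent does not name the pivot:

```python
import cv2
import numpy as np

center = np.array([cur_gray.shape[1] / 2.0, cur_gray.shape[0] / 2.0], np.float32)
# Rotation-compensated initial guesses for each corner
init = ((prev_pts - center) @ R.T + center).astype(np.float32)

p0 = prev_pts.reshape(-1, 1, 2)
p1 = init.reshape(-1, 1, 2).copy()
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, cur_gray, p0, p1,
    winSize=(21, 21), maxLevel=3,               # window (w_x, w_y), pyramid depth L
    flags=cv2.OPTFLOW_USE_INITIAL_FLOW,         # start the search from p1, not p0
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)
tracked = next_pts[status.ravel() == 1].reshape(-1, 2)  # successfully tracked points
```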
step 35, if the number of feature points successfully tracked in step 34 is too small for the vehicle to be accurately positioned from the observed state vector formed by converting the successfully tracked feature points into landmark points, the image is divided into 8 × 8 grids, the FAST corners in the image are determined again using the FAST corner rule, the outliers among the FAST corners are removed with 2-Point RANSAC to discard wrongly associated data, and the inliers among the FAST corners are supplemented into the feature point set until the number of feature points in each grid is at least 10; this feature point set serves as the feature points to be tracked in the next positioning cycle, ensuring the accuracy of subsequent positioning.
The outliers are removed as follows: two pairs of FAST corners are randomly selected from the two frames, the translation vector t of the camera between the two frames is solved from these two pairs, the error of the remaining FAST corners relative to the translation vector t is calculated, and inliers and outliers are divided according to a set error threshold (here 2 × 10⁻⁷); the translation vector is solved again from the resulting inlier set, and the loop is iterated, outputting the set that retains the most inliers as the result.
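A simplified sketch of this 2-Point RANSAC under a pure-translation assumption; the patent does not spell out its error metric, so the residual used here (squared deviation from the translation hypothesis, in normalized image coordinates consistent with the 2 × 10⁻⁷ threshold's scale) is an assumption:

```python
import numpy as np

def two_point_ransac(pts1, pts2, thresh=2e-7, iters=100, rng=None):
    """Translation-hypothesis RANSAC on point correspondences (sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    flow = pts2 - pts1                        # per-point displacement
    best_inliers = np.zeros(len(pts1), bool)
    for _ in range(iters):
        pair = rng.choice(len(pts1), size=2, replace=False)
        t = flow[pair].mean(axis=0)           # translation hypothesis from 2 points
        err = np.sum((flow - t) ** 2, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if not best_inliers.any():                # degenerate fallback
        best_inliers = np.ones(len(pts1), bool)
    t = flow[best_inliers].mean(axis=0)       # re-solve from the best inlier set
    return t, best_inliers
```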
The feature points are supplemented as follows: to distribute the feature points uniformly over the whole image, the image is divided equally into several grids, and the range of the number of feature points each grid may contain is specified; the number of feature points in each grid is counted, and if it is below the minimum, points with stronger features from the inlier set are supplemented into the feature point set, while if it is above the maximum, the feature points are sorted by feature strength and the weakest are rejected until the number falls within the range.
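A sketch of this grid bookkeeping; the cell capacity bounds are illustrative parameters (only the minimum of 10 per grid is stated in the text), and topping up deficient cells from the inlier set is left to the caller:

```python
import numpy as np

def replenish_grid(pts, scores, h, w, min_per_cell=10, max_per_cell=20, grid=8):
    """Keep the feature set evenly distributed over an 8x8 grid (sketch).

    pts: Nx2 corner coordinates; scores: FAST response strengths.
    Returns the trimmed feature set and the indices of cells still below
    min_per_cell, to be topped up from the RANSAC inlier set.
    """
    cell_h, cell_w = h / grid, w / grid
    cell_idx = (pts[:, 1] // cell_h).astype(int) * grid + (pts[:, 0] // cell_w).astype(int)
    keep, deficient = [], []
    for c in range(grid * grid):
        in_cell = np.flatnonzero(cell_idx == c)
        if len(in_cell) < min_per_cell:
            deficient.append(c)
        order = in_cell[np.argsort(-scores[in_cell])]   # strongest first
        keep.extend(order[:max_per_cell].tolist())      # drop the weakest overflow
    return pts[keep], deficient
```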
The invention eliminates and augments the landmark coordinates in the state quantities according to the following rules:
when a new feature point has been tracked continuously over three frames, it is triangulated between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system, which are added to the state vector;
if the feature points corresponding to a landmark in the state vector are not observed for ten consecutive frames, the updating of that landmark is finished and it is removed from the state vector.
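The two rules reduce to simple per-feature counters; the sketch below is illustrative bookkeeping with the 3-frame and 10-frame thresholds taken from the text (resetting counters of lost tracks and the surrounding SLAM state management are omitted):

```python
class LandmarkBook:
    """Decide which landmarks to add to / remove from the state vector."""

    def __init__(self):
        self.tracks = {}   # feature id -> consecutive frames tracked
        self.misses = {}   # landmark id -> consecutive frames unobserved

    def update(self, tracked_ids, landmark_ids, observed_ids):
        add, remove = [], []
        for fid in tracked_ids:
            self.tracks[fid] = self.tracks.get(fid, 0) + 1
            if self.tracks[fid] == 3:      # tracked over 3 frames:
                add.append(fid)            # triangulate and append to the state
        for lid in landmark_ids:
            self.misses[lid] = 0 if lid in observed_ids else self.misses.get(lid, 0) + 1
            if self.misses[lid] >= 10:     # unobserved for 10 frames:
                remove.append(lid)         # delete from the state vector
        return add, remove
```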
The invention calculates the discrete equations for the vehicle's position coordinates and yaw angle at time k from the position coordinates and yaw angle of the vehicle at time k-1, as shown in equation (1):

x_k = x_{k−1} + v_k Δt cos φ_{k−1}
y_k = y_{k−1} + v_k Δt sin φ_{k−1}
φ_k = φ_{k−1} + (v_k Δt / l) tan δ_k   (1)

where x_k, y_k, φ_k are respectively the abscissa, ordinate and yaw angle of the vehicle in the world coordinate system at time k; x_{k−1}, y_{k−1}, φ_{k−1} are respectively the abscissa, ordinate and yaw angle of the vehicle in the world coordinate system at time k-1; Δt is the adjacent sampling interval; v_k and δ_k are respectively the vehicle speed and front wheel angle at time k; and l is the wheelbase;
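Equation (1) translates directly into code; the following sketch assumes the previous-step yaw angle φ_{k−1} and the wheelbase l are available:

```python
import numpy as np

def propagate_pose(x, y, phi, v_k, delta_k, dt, wheelbase):
    """Discrete bicycle-model step of equation (1)."""
    x_k = x + v_k * dt * np.cos(phi)
    y_k = y + v_k * dt * np.sin(phi)
    phi_k = phi + v_k * dt * np.tan(delta_k) / wheelbase
    return x_k, y_k, phi_k
```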
Adding the position coordinates of the landmark points in the world coordinate system at time k to the vehicle's discrete equations gives the predicted state vector X_k at time k, as shown in equation (2):

X_k = f(X_k′, U_k) = [ x_k, y_k, φ_k, x_1, y_1, …, x_N, y_N ]ᵀ   (2)

where U_k is the control input at time k, U_k = (v_k, δ_k); N is the total number of landmark points; and (x_i, y_i), 1 ≤ i ≤ N, are respectively the abscissa and ordinate of the i-th landmark point in the world coordinate system at time k;
The observed quantity Z_k implied by X_k is given by the observation equation shown in equation (3):

Z_k = h(X_k) + ξ_k = [ r_1, θ_1, …, r_N, θ_N ]ᵀ + ξ_k   (3)

with

r_i = √((x_i − x_k)² + (y_i − y_k)²),  θ_i = arctan((y_i − y_k)/(x_i − x_k)) − φ_k

where h(X_k) is the observation equation function; ξ_k is the observation noise of the system, a Gaussian with mean 0 and covariance matrix Q; r_i is the distance from the i-th landmark point to the vehicle's centre of mass; θ_i is the angle between the line connecting the i-th landmark point to the vehicle's centre of mass and the longitudinal axis of the vehicle; and x_k, y_k are respectively the abscissa and ordinate of the vehicle in the world coordinate system at time k.
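A Python sketch of the range-bearing observation h(X_k) in equation (3), assuming the state layout [x_k, y_k, φ_k, x_1, y_1, …, x_N, y_N]; the angle wrapping is an implementation detail added here:

```python
import numpy as np

def observe(state, n_landmarks):
    """Range-bearing observation h(X_k) of equation (3)."""
    x, y, phi = state[0], state[1], state[2]
    z = []
    for i in range(n_landmarks):
        lx, ly = state[3 + 2 * i], state[4 + 2 * i]
        r = np.hypot(lx - x, ly - y)              # distance r_i to the centre of mass
        b = np.arctan2(ly - y, lx - x) - phi      # bearing theta_i w.r.t. longitudinal axis
        z.extend([r, (b + np.pi) % (2 * np.pi) - np.pi])  # wrap angle to (-pi, pi]
    return np.array(z)
```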
The process in step 6 of predicting X_k from X_k′ and the chassis information, and of using Z_k to update the predicted state vector X_k at time k to obtain the vehicle's position at time k, is as follows:
step 61, prediction
the prior mean X̄_k and prior covariance matrix Σ̄_xx(k) of the predicted state vector X_k at time k are obtained from the discrete equations for the vehicle's position coordinates and yaw angle, as shown in equation (4):

X̄_k = f(X_k′, U_k),  Σ̄_xx(k) = J_t Σ_xx′(k) J_tᵀ + J_u Σ_u J_uᵀ   (4)

where X_k′ is the observed state vector at time k; Σ_xx′(k) is the covariance matrix of X_k′, i.e. the covariance matrix of the predicted value X_{k−1} after state quantity elimination and augmentation; X_{k−1} is the predicted state vector at time k-1; J_t is the Jacobian matrix of f(X_k′, U_k) with respect to X_k′,

J_t = [ J_ξ  0; 0  I ]

where J_ξ is the Jacobian matrix of the discrete equations with respect to the vehicle pose and I is an identity matrix; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k′, U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; and J_uᵀ is the transpose of J_u;
step 62, update
the Kalman gain K_k at time k is calculated using equations (5) and (6), and the Kalman gain is used to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k and posterior covariance matrix Σ̂_xx(k); from X̂_k the pose of the vehicle in the world coordinate system at time k is obtained, and the final position of the vehicle at time k is autonomously positioned:

K_k = Σ̄_xx(k) G_kᵀ ( G_k Σ̄_xx(k) G_kᵀ + Q )⁻¹   (5)

X̂_k = X̄_k + K_k ( Z_k − h(X̄_k) ),  Σ̂_xx(k) = ( I − K_k G_k ) Σ̄_xx(k)   (6)

where G_k is the Jacobian matrix of h(X_k) with respect to X_k and G_kᵀ is the transpose of G_k.
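To make the update step concrete, a minimal Python sketch of equations (5) and (6); the observation function h and its Jacobian G_k are passed in (for instance the observe function sketched above and a numerical Jacobian), and Q_full is the stacked observation noise covariance. This is an illustration of the standard EKF update, not the patent's implementation:

```python
import numpy as np

def ekf_update(X_bar, Sigma_bar, Z, Q_full, h, H):
    """EKF update of equations (5) and (6)."""
    G = H(X_bar)                                   # G_k evaluated at the prior mean
    S = G @ Sigma_bar @ G.T + Q_full               # innovation covariance
    K = Sigma_bar @ G.T @ np.linalg.inv(S)         # Kalman gain, equation (5)
    # Note: bearing components of (Z - h(X_bar)) should be wrapped to (-pi, pi].
    X_hat = X_bar + K @ (Z - h(X_bar))             # posterior mean, equation (6)
    Sigma_hat = (np.eye(len(X_bar)) - K @ G) @ Sigma_bar
    return X_hat, Sigma_hat
```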
the method comprises the steps of using a vehicle provided with an industrial personal computer, a monocular camera, a steering wheel corner sensor and a wheel speed sensor as an experimental vehicle, drawing a guide line on the ground as a reference road sign, respectively controlling the experimental vehicle to move along the guide line according to the autonomous positioning method and the traditional ORB-SLAM2 algorithm, obtaining the real movement track of the experimental vehicle, and knowing from the experimental result that the traditional positioning algorithm fails when the ambient light is uneven, the light is dark and the guide line texture is not rich, so that the vehicle cannot be accurately positioned.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (5)

1. A vehicle autonomous positioning method based on vision and chassis information, characterized by comprising the following steps:
step 1, a visual perception module arranged in the middle of the upper side of the vehicle windshield collects roadside environment images during vehicle running, a chassis motion information acquisition module collects wheel speed data and steering wheel angle data during vehicle running, and the visual perception module and the chassis motion information acquisition module input the collected data into a data alignment module;
the visual perception module is a monocular camera, and the chassis motion information acquisition module comprises a wheel speed sensor and a steering wheel angle sensor;
step 2, the data alignment module aligns the images, the wheel speed data and the steering wheel angle data according to their timestamps, inputs the aligned image sequence into the image processing module, and inputs the wheel speed and steering wheel angle data into the motion prediction module;
step 3, the image processing module preprocesses all images, detects the characteristic points in each image and tracks the coordinates of the characteristic points in the images;
the image processing module comprises an image preprocessing module, a characteristic point detection module and a characteristic point tracking module;
the image preprocessing module is used for preprocessing images in a time sequence, the characteristic point detection module is used for detecting characteristic points in each image, and the characteristic point tracking module is used for tracking the movement of the characteristic points in the time sequence images;
step 4, establishing a world coordinate system by taking the center of mass of the vehicle at the entrance of the parking lot as the origin of coordinates, the longitudinal axis of the vehicle as an X axis and the transverse axis of the vehicle as a Y axis, and determining the pose of the vehicle at the moment k-1;
step 5, converting the coordinates of the feature points at time k in the image into coordinates of landmark points in the world coordinate system by triangulation, inputting the landmark coordinates into a state quantity elimination and augmentation module to eliminate and augment the state quantities, obtaining the observed state vector X_k′ at time k, and inputting X_k′ into the motion prediction module, wherein X_k′ comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observed state vector X_k′ at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation updating module, which calculates the observed quantity Z_k implied by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the position of the vehicle at time k and realizing autonomous positioning;
the prediction and update process is as follows:
step 61, prediction
compute the prior mean X̄_k and prior covariance matrix Σ̄_xx(k) of the predicted state vector at time k using equation (4):

X̄_k = f(X_k′, U_k),  Σ̄_xx(k) = J_t Σ_xx′(k) J_tᵀ + J_u Σ_u J_uᵀ   (4)

wherein X_k′ is the observed state vector at time k; U_k is the control input at time k, U_k = (v_k, δ_k), with v_k and δ_k respectively the speed and front wheel angle of the vehicle at time k; Σ_xx′(k) is the covariance matrix of X_k′; J_t is the Jacobian matrix of f(X_k′, U_k) with respect to X_k′; I is an identity matrix; Δt is the adjacent sampling interval; φ_k is the yaw angle of the vehicle at time k; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k′, U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; l is the wheelbase; and J_uᵀ is the transpose of J_u;
step 62, update
compute the Kalman gain K_k at time k, and use K_k to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k and posterior covariance matrix Σ̂_xx(k), thereby obtaining the pose of the vehicle in the world coordinate system at time k and autonomously positioning the final position of the vehicle at time k, the calculation being as shown in equations (5) and (6):

K_k = Σ̄_xx(k) G_kᵀ ( G_k Σ̄_xx(k) G_kᵀ + Q )⁻¹   (5)

X̂_k = X̄_k + K_k ( Z_k − h(X̄_k) ),  Σ̂_xx(k) = ( I − K_k G_k ) Σ̄_xx(k)   (6)

wherein G_k is the Jacobian matrix of h(X_k) with respect to X_k, h(X_k) is the observation equation function, and G_kᵀ is the transpose of G_k; (x_i, y_i) are respectively the abscissa and ordinate of the i-th landmark point in the world coordinate system at time k, 1 ≤ i ≤ N, where N is the total number of landmark points; x_k, y_k are respectively the abscissa and ordinate of the vehicle in the world coordinate system at time k; Q is the covariance matrix of the observation noise ξ_k; r_i is the distance from the i-th landmark point to the vehicle's centre of mass,

r_i = √((x_i − x_k)² + (y_i − y_k)²)

and θ_i = arctan((y_i − y_k)/(x_i − x_k)) − φ_k is the angle between the line connecting the i-th landmark point to the vehicle's centre of mass and the longitudinal axis of the vehicle.
2. The method of claim 1, wherein the preprocessing of the image in step 3 comprises a distortion removal process and a histogram equalization process.
3. The method for autonomous vehicle localization based on vision and chassis information according to claim 1, wherein step 3 determines the coordinates of the feature points in the image at time k according to the following steps:
step 31, using Gaussian filtering to obtain a blurred version of the original image, down-sampling the blurred image so that its height and width are half those of the original, and repeating the Gaussian filtering and down-sampling to build an image pyramid;
step 32, calculating the rotation matrix between time k-1 and time k,

R = [ cos ψ  −sin ψ; sin ψ  cos ψ ]

wherein ψ is the planar yaw angle of the vehicle between time k-1 and time k;
step 33, determining feature points to be tracked in the image at the moment k-1 by using a FAST corner detection rule;
step 34, rotating the feature points in the image at the time k-1 by using a rotation matrix, taking the coordinates of the rotated feature points as initial estimation coordinates of optical flow tracking, taking the image pyramid and the initial estimation coordinates as input of an LK optical flow, and tracking the coordinates of the feature points in the image at the time k by using an LK optical flow method;
step 35, if the number of successfully tracked feature points is too small, dividing the image into 8 × 8 grids, determining the FAST corners of the image, removing outliers with 2-Point RANSAC, and supplementing inliers among the FAST corners into the feature point set until the number of feature points in each grid is at least 10, these feature points serving as the feature points to be tracked in the next positioning cycle.
4. The method for autonomous vehicle positioning based on vision and chassis information according to claim 1, wherein in step 5 the state quantities are eliminated and augmented according to the following rules:
if a new feature point has been tracked continuously over three frames, triangulating it between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system and adding the coordinates to the state vector;
and if the feature points corresponding to a landmark in the state vector are not observed for ten consecutive frames, removing the landmark's coordinates from the state vector.
5. The method of claim 1, wherein the predicted state vector X_k at time k in step 6 is calculated as shown in equation (2):

X_k = f(X_k′, U_k) = [ x_{k−1} + v_k Δt cos φ_{k−1},  y_{k−1} + v_k Δt sin φ_{k−1},  φ_{k−1} + (v_k Δt / l) tan δ_k,  x_1, y_1, …, x_N, y_N ]ᵀ   (2)

wherein x_{k−1}, y_{k−1}, φ_{k−1} are respectively the abscissa, ordinate and yaw angle of the vehicle in the world coordinate system at time k-1;
the observed quantity Z_k implied by X_k is calculated as shown in equation (3):

Z_k = h(X_k) + ξ_k = [ r_1, θ_1, …, r_N, θ_N ]ᵀ + ξ_k   (3)

wherein the observation noise ξ_k is a Gaussian with mean 0 and covariance matrix Q.
CN202011402425.7A 2020-12-02 2020-12-02 Vehicle autonomous positioning system and positioning method based on vision and chassis information Active CN112506195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011402425.7A CN112506195B (en) 2020-12-02 2020-12-02 Vehicle autonomous positioning system and positioning method based on vision and chassis information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011402425.7A CN112506195B (en) 2020-12-02 2020-12-02 Vehicle autonomous positioning system and positioning method based on vision and chassis information

Publications (2)

Publication Number Publication Date
CN112506195A (2021-03-16)
CN112506195B (2021-10-29)

Family

ID=74969850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011402425.7A Active CN112506195B (en) 2020-12-02 2020-12-02 Vehicle autonomous positioning system and positioning method based on vision and chassis information

Country Status (1)

Country Link
CN (1) CN112506195B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113064193B (en) * 2021-03-25 2022-12-16 上海智能新能源汽车科创功能平台有限公司 Combined positioning system based on vehicle road cloud cooperation
CN113063414A (en) * 2021-03-27 2021-07-02 上海智能新能源汽车科创功能平台有限公司 Vehicle dynamics pre-integration construction method for visual inertia SLAM
CN113074754A (en) * 2021-03-27 2021-07-06 上海智能新能源汽车科创功能平台有限公司 Visual inertia SLAM system initialization method based on vehicle kinematic constraint
CN113341968A (en) * 2021-06-01 2021-09-03 山东建筑大学 Accurate parking system and method for multi-axis flat car
CN113848696B (en) * 2021-09-15 2022-09-16 北京易航远智科技有限公司 Multi-sensor time synchronization method based on position information
CN114018284B (en) * 2021-10-13 2024-01-23 上海师范大学 Wheel speed odometer correction method based on vision
CN114212078B (en) * 2022-01-18 2023-10-10 武汉光庭信息技术股份有限公司 Method and system for detecting positioning accuracy of self-vehicle in automatic parking

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU642638B2 (en) * 1989-12-11 1993-10-28 Caterpillar Inc. Integrated vehicle positioning and navigation system, apparatus and method
CN108280847A (en) * 2018-01-18 2018-07-13 维森软件技术(上海)有限公司 A kind of vehicle movement track method of estimation
CN109631896B (en) * 2018-07-23 2020-07-28 同济大学 Parking lot autonomous parking positioning method based on vehicle vision and motion information
US20200132473A1 (en) * 2018-10-26 2020-04-30 Ford Global Technologies, Llc Systems and methods for determining vehicle location in parking structures
CN111238472B (en) * 2020-01-20 2022-03-15 北京四维智联科技有限公司 Real-time high-precision positioning method and device for full-automatic parking

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Self-localization of Mobile Robot Based on Monocular and Extended Kalman Filter; Rongbao Chen; ICEMI 2009; 2009-12-31; full text *
Single Camera Based Location Estimation with Dissimilarity Measurement; Lukasz Adrjanowicz; HSI 2013; 2013-06-08; full text *
Research on Positioning Technology for Intelligent Driving Vehicles (智能行驶车辆定位技术研究); Wang Tao (汪涛); China Master's Theses Full-text Database (中国优秀硕士论文全文数据库); 2017-12-31; full text *

Also Published As

Publication number Publication date
CN112506195A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112506195B (en) Vehicle autonomous positioning system and positioning method based on vision and chassis information
CN110108258B (en) Monocular vision odometer positioning method
CN111784747B (en) Multi-target vehicle tracking system and method based on key point detection and correction
EP2418622B1 (en) Image processing method and image processing apparatus
CN106875425A (en) A kind of multi-target tracking system and implementation method based on deep learning
CN107590438A (en) A kind of intelligent auxiliary driving method and system
CN109871938A (en) A kind of components coding detection method based on convolutional neural networks
CN107577996A (en) A kind of recognition methods of vehicle drive path offset and system
WO2020062433A1 (en) Neural network model training method and method for detecting universal grounding wire
CN107609486A (en) To anti-collision early warning method and system before a kind of vehicle
CN110738690A (en) unmanned aerial vehicle video middle vehicle speed correction method based on multi-target tracking framework
CN102999759A (en) Light stream based vehicle motion state estimating method
CN110059683A (en) A kind of license plate sloped antidote of wide-angle based on end-to-end neural network
CN109727273B (en) Moving target detection method based on vehicle-mounted fisheye camera
CN111680713B (en) Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
CN109299656B (en) Scene depth determination method for vehicle-mounted vision system
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN108109177A (en) Pipe robot vision processing system and method based on monocular cam
CN112541423A (en) Synchronous positioning and map construction method and system
CN113744315A (en) Semi-direct vision odometer based on binocular vision
Kang et al. Robust visual tracking framework in the presence of blurring by arbitrating appearance-and feature-based detection
Shu et al. Vision based lane detection in autonomous vehicle
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
Kim et al. Tracking moving object using Snake’s jump based on image flow
Jin et al. Road curvature estimation using a new lane detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant