CN112506195A - Vehicle autonomous positioning system and positioning method based on vision and chassis information - Google Patents
Vehicle autonomous positioning system and positioning method based on vision and chassis information
- Publication number
- CN112506195A (application CN202011402425.7A)
- Authority
- CN
- China
- Prior art keywords
- module
- vehicle
- image
- coordinates
- moment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G05D1/0214: Control of position or course in two dimensions, specially adapted to land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G01C11/02: Photogrammetry or videogrammetry; picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
- G01C11/04: Photogrammetry or videogrammetry; interpretation of pictures
- G05D1/0246: Control of position or course in two dimensions, specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means
Abstract
The invention provides a vehicle autonomous positioning system and positioning method based on vision and chassis information. The positioning system comprises a visual perception module and a chassis motion information acquisition module, both of which input their collected data into a data alignment module. The data alignment module aligns the data by timestamp, then passes the images to an image processing module for processing and the vehicle chassis information to a motion prediction module. The image processing module passes its results through a state-quantity elimination and augmentation module to the motion prediction module, and the motion prediction module passes its prediction to an observation update module, which positions the vehicle in real time from the observations and the prediction. By fusing visual information with vehicle chassis information, the invention reduces the influence of accumulated positioning errors on the positioning process and achieves accurate positioning of the vehicle when the light is uneven or dim.
Description
Technical Field
The invention belongs to the technical field of automatic vehicle positioning, and particularly relates to a vehicle autonomous positioning system and method based on vision and chassis information.
Background
In recent years, with rising living standards, car ownership has continued to grow. To fit more cars into limited parking space, parking spots are designed ever narrower, demanding ever more driving skill. An automatic parking system frees the hands of drivers whose skills are less practiced and improves parking safety, so it has broad application prospects.
At present, visual SLAM systems based on a monocular camera and dead-reckoning systems based on vehicle chassis information are widely used for vehicle positioning. However, in environments where the light is uneven or dim, a monocular camera cannot collect enough environmental feature points or track them accurately, so the SLAM system's estimate of the vehicle pose is not accurate enough. When a dead-reckoning system based on vehicle chassis information performs positioning, its positioning error accumulates continuously over the positioning process until the positioning information becomes unusable.
Disclosure of Invention
The invention aims to provide a vehicle autonomous positioning system based on vision and chassis information that uses an observation update module to fuse the visual SLAM positioning process with the vehicle's chassis information, achieves accurate positioning of the vehicle in a parking lot, and provides accurate vehicle position information for vehicle trajectory planning and control.
The invention further aims to provide a vehicle autonomous positioning method based on vision and chassis information, which detects feature points in the visually perceived images to determine the vehicle pose and landmark coordinates in a world coordinate system and thereby obtain a state vector, obtains a motion prediction of the vehicle from the chassis information, and corrects the motion prediction with the observation induced by the state vector to position the vehicle accurately.
The technical scheme adopted by the invention is that the vehicle autonomous positioning system based on vision and chassis information comprises:
the visual perception module is used for acquiring an environment image in the running process of the vehicle and inputting the acquired image into the data alignment module;
the chassis motion information acquisition module is used for acquiring wheel speed data and steering wheel angle data during vehicle operation and inputting the acquired data into the data alignment module;
the data alignment module is used for aligning the images, wheel speed data, and steering wheel angle data by timestamp, inputting the time-sequenced image data into the image processing module, and inputting the wheel speed and steering wheel angle data into the motion prediction module (a minimal alignment sketch follows this module list);
the image processing module is used for preprocessing the aligned images, detecting and tracking the feature points in the images, converting the feature-point coordinates in the images into landmark coordinates in a world coordinate system, and inputting the landmark coordinates into the state-quantity elimination and augmentation module;
the state-quantity elimination and augmentation module is used for updating the landmark points to obtain state vectors and inputting the state vectors into the motion prediction module;
the motion prediction module is used for predicting the motion of the vehicle according to the vehicle chassis information and inputting a prediction result into the observation updating module;
and the observation updating module is used for correcting the prediction result by using the observed landmark point coordinates to obtain the real-time position information of the vehicle.
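For illustration, the following is a minimal Python sketch of the timestamp alignment performed by the data alignment module; the container and function names (Sample, align_by_timestamp) and the skew tolerance are assumptions for illustration, not part of the patent.

```python
# Minimal timestamp-alignment sketch: pair each camera frame with the
# chassis sample nearest in time, discarding pairs whose clocks disagree
# by more than max_skew seconds. Names and tolerance are illustrative.
import bisect
from dataclasses import dataclass

@dataclass
class Sample:
    stamp: float   # timestamp in seconds
    data: object   # image array, or (wheel_speed, steering_angle) tuple

def align_by_timestamp(images, chassis, max_skew=0.02):
    chassis = sorted(chassis, key=lambda s: s.stamp)
    stamps = [s.stamp for s in chassis]
    pairs = []
    for img in sorted(images, key=lambda s: s.stamp):
        i = bisect.bisect_left(stamps, img.stamp)
        # nearest chassis sample on either side of the image timestamp
        cands = [c for c in (i - 1, i) if 0 <= c < len(chassis)]
        if not cands:
            continue
        j = min(cands, key=lambda c: abs(stamps[c] - img.stamp))
        if abs(stamps[j] - img.stamp) <= max_skew:
            pairs.append((img, chassis[j]))
    return pairs
```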
Furthermore, the visual perception module is a monocular camera, and the chassis motion information acquisition module comprises a wheel speed sensor and a steering wheel angle sensor.
Further, the image processing module comprises an image preprocessing module, a feature point detection module and a feature point tracking module;
the image preprocessing module is used for preprocessing the images in time sequence, the feature point detection module is used for detecting the feature points in each image, and the feature point tracking module is used for tracking the movement of the feature points across the time-sequence images.
The autonomous positioning method of the vehicle autonomous positioning system based on vision and chassis information comprises the following steps:
step 1, a visual perception module mounted at the top center of the vehicle windshield collects roadside environment images while the vehicle is running, the chassis motion information acquisition module collects wheel speed data and steering wheel angle data during vehicle operation, and both modules input the collected data into the data alignment module;
step 2, the data alignment module aligns the images, wheel speed data, and steering wheel angle data by timestamp, inputs the aligned image sequence into the image processing module, and inputs the wheel speed and steering wheel angle data into the motion prediction module;
step 3, the image processing module preprocesses all images, detects the feature points in each image, and tracks the coordinates of the feature points across the images;
step 4, establishing a world coordinate system with the vehicle's center of mass at the parking lot entrance as the coordinate origin, the vehicle's longitudinal axis as the X axis, and the vehicle's transverse axis as the Y axis, and determining the vehicle pose at time k−1;
step 5, converting the coordinates of the feature points at time k in the image into landmark coordinates in the world coordinate system by triangulation, inputting the landmark coordinates into the state-quantity elimination and augmentation module for elimination and augmentation of the state quantities, obtaining the observation state vector X_k' at time k, and inputting X_k' into the motion prediction module, where X_k' comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observation state vector X_k' at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation update module; the observation update module calculates the observation Z_k induced by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the vehicle's position at time k and thereby realizing autonomous positioning.
Further, the preprocessing of the images in step 3 includes distortion removal and histogram equalization.
Further, step 3 determines the coordinates of the feature points in the image at time k according to the following steps:
step 31, obtaining a blurred version of the original image by Gaussian filtering, down-sampling the blurred image to half the height and width of the original, and repeating the Gaussian filtering and down-sampling to build an image pyramid;
step 32, calculating the rotation matrix between time k−1 and time k, R = [ cos ψ, −sin ψ; sin ψ, cos ψ ], where ψ is the planar yaw angle of the vehicle's motion between time k−1 and time k;
step 33, determining feature points to be tracked in the image at the moment k-1 by using a FAST corner detection rule;
step 34, rotating the feature points in the image at time k−1 with the rotation matrix, taking the rotated feature-point coordinates as initial estimates for optical-flow tracking, feeding the image pyramid and initial estimates to the LK optical flow as input, and tracking the coordinates of the feature points in the image at time k by the LK optical-flow method;
and step 35, if too few feature points are tracked successfully, dividing the image into an 8 × 8 grid, determining the FAST corners of the image, removing outliers with 2-Point RANSAC, and supplementing the inliers among the FAST corners into the feature point set until each grid cell contains at least 10 feature points, which serve as the feature points to be tracked in the next positioning cycle.
Further, in step 5 the state quantities are eliminated and augmented according to the following rules:
if a new feature point has been tracked in three consecutive frames of images, triangulate it between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system, and add them to the state vector;
and if the feature point corresponding to a landmark in the state vector has not been observed for ten consecutive frames, remove that landmark's coordinates from the state vector.
Further, the predicted state vector X_k at time k in step 6 is calculated as shown in equation (2):

X_k = f(X_k', U_k) = [ x_{k−1} + v_k Δt cos φ_{k−1}, y_{k−1} + v_k Δt sin φ_{k−1}, φ_{k−1} + (v_k tan δ_k / l) Δt, x_k^1, y_k^1, …, x_k^N, y_k^N ]ᵀ     (2)

where U_k is the control input at time k, U_k = (v_k, δ_k); v_k, δ_k are the vehicle speed and front-wheel angle at time k; x_{k−1}, y_{k−1}, φ_{k−1} are the abscissa, ordinate, and yaw angle of the vehicle in the world coordinate system at time k−1; N is the total number of landmark points; x_k^i, y_k^i (1 ≤ i ≤ N) are the coordinates of the i-th landmark point, carried over unchanged from time k−1; Δt is the adjacent sampling interval; φ_k is the yaw angle of the vehicle at time k; and l is the wheelbase;
the observation Z_k induced by X_k is calculated as shown in equation (3):

Z_k = h(X_k) + ξ_k,  with r_i = √( (x_k^i − x_k)² + (y_k^i − y_k)² ) and θ_i = arctan2( y_k^i − y_k, x_k^i − x_k ) − φ_k for each landmark i     (3)

where ξ_k is the observation noise, a Gaussian with mean 0 and covariance matrix Q; r_i is the distance from the i-th landmark point to the vehicle's center of mass; θ_i is the angle between the line connecting the i-th landmark point to the vehicle's center of mass and the vehicle's longitudinal axis; and x_k, y_k are the abscissa and ordinate of the vehicle in the world coordinate system at time k.
Further, the prediction and update process in step 6 is as follows:
step 61, prediction
Computing the prior mean X̂_k⁻ and prior covariance matrix Σ_xx⁻(k) of the predicted state vector at time k using equation (4):

X̂_k⁻ = f(X_k', U_k)
Σ_xx⁻(k) = J_t Σ_xx'(k) J_tᵀ + J_u Σ_u J_uᵀ     (4)

where X_k' is the observation state vector at time k; Σ_xx'(k) is the covariance matrix of X_k'; J_t is the Jacobian matrix of f(X_k', U_k) with respect to X_k', J_t = [[J_ξ, 0], [0, I]], where J_ξ is the Jacobian of the discrete equation with respect to the vehicle pose and I is the identity matrix; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k', U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; and J_uᵀ is the transpose of J_u;
step 62, update
Calculating the Kalman gain K_k at time k, and using K_k to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k⁺ and posterior covariance matrix Σ_xx⁺(k) of X_k, thereby obtaining the pose of the vehicle in the world coordinate system at time k and autonomously fixing the vehicle's final position at time k, the calculation being as shown in equations (5) and (6):

K_k = Σ_xx⁻(k) H_kᵀ (H_k Σ_xx⁻(k) H_kᵀ + Q)⁻¹     (5)
X̂_k⁺ = X̂_k⁻ + K_k (Z_k − h(X̂_k⁻)),  Σ_xx⁺(k) = (I − K_k H_k) Σ_xx⁻(k)     (6)

where H_k is the Jacobian matrix of the observation function h with respect to the state.
the invention has the beneficial effects that: according to the invention, the monocular camera is used for obtaining the state vector of observation formed by the pose of the vehicle in the world coordinate system and the landmark point coordinates, the wheel speed sensor and the steering wheel angle sensor are used for obtaining the chassis information of the vehicle, the predicted state vector of the vehicle is further obtained, the predicted state vector is corrected by using the observed quantity introduced by the state vector sensed visually to obtain the real-time position of the vehicle, the robustness of a positioning system is enhanced, the state vector is timely eliminated and expanded, the positioning real-time performance is improved, and the positioning result is more accurate and reliable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flow chart of the autonomous positioning of the present invention.
FIG. 2 is a diagram of an Ackerman steering model for a vehicle.
Fig. 3 is a block diagram of the autonomous positioning system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 3, the vehicle autonomous positioning system based on vision and chassis information comprises a visual perception module and a chassis motion information acquisition module. The visual perception module is a monocular camera that acquires environment images while the vehicle is running. The chassis motion information acquisition module comprises a wheel speed sensor, which acquires the vehicle's wheel speed in real time to obtain its velocity, and a steering wheel angle sensor, which acquires the vehicle's steering wheel angle in real time. Both modules feed their collected data into the data alignment module, which aligns the images and chassis data by timestamp and passes the aligned image data to the image processing module. The image processing module comprises an image preprocessing module, a feature point detection module, and a feature point tracking module: the preprocessing module preprocesses the images, the detection module detects the feature points in the images, and the tracking module follows the motion of the feature points across the time-ordered images and converts the image feature-point data into landmark coordinates in the world coordinate system, which are input to the state-quantity elimination and augmentation module. That module eliminates and augments the state quantities according to the elimination and augmentation conditions and passes the updated observation state vector to the motion prediction module. The data alignment module also feeds the aligned chassis information to the motion prediction module, which predicts the vehicle's motion from the chassis signals and the observation state vector and passes the prediction to the observation update module. The observation update module corrects the prediction with the observations to obtain the vehicle's real-time position data.
As shown in fig. 1, the method for autonomous vehicle positioning based on vision and chassis information comprises the following steps:
step 1, a visual perception module mounted at the top center of the vehicle windshield collects roadside environment images while the vehicle is running, the chassis motion information acquisition module collects wheel speed data and steering wheel angle data during vehicle operation, and both modules input the collected data into the data alignment module;
step 2, after the data alignment module aligns the images with the vehicle chassis data by timestamp, the aligned image sequence is input into the image processing module, and the wheel speed and steering wheel angle data are input into the motion prediction module;
step 3, the image preprocessing module within the image processing module performs distortion removal and histogram equalization on all images; the feature point detection module determines the FAST corners in each image and takes them as the feature points to be tracked; the feature point tracking module computes the rotation matrix between the images at time k−1 and time k using the vehicle kinematics model shown in fig. 2, rotates the feature points in the image at time k−1, takes the rotated coordinates as initial estimates for optical-flow tracking, and tracks the coordinates of the feature points in the image at time k by the LK optical-flow method;
step 4, establishing a world coordinate system with the vehicle's center of mass at the parking lot entrance as the coordinate origin, the vehicle's longitudinal axis as the X axis (positive in the direction of travel) and the vehicle's transverse axis as the Y axis (positive toward the vehicle's left turn), and obtaining the vehicle pose in the world coordinate system at time k−1;
step 5, converting the coordinates of the feature points at time k in the image into landmark coordinates in the world coordinate system by triangulation, inputting them into the state-quantity elimination and augmentation module, and eliminating and augmenting the state quantities according to the elimination and augmentation conditions to obtain the observation state vector X_k' at time k; inputting X_k' into the motion prediction module, where X_k' comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observation state vector X_k' at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation update module; the observation update module calculates the observation Z_k induced by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the vehicle's position at time k and thereby realizing autonomous positioning.
The distortion removal and histogram equalization performed by the invention proceed as follows:
1. Photograph a calibration board multiple times to establish multiple constraints, compute the camera's distortion parameters k1, k2, k3, p1, p2 by maximum-likelihood estimation, and determine the correct image position of any point in the camera coordinate system through the distortion parameters:

x_distorted = x0 (1 + k1 r² + k2 r⁴ + k3 r⁶) + 2 p1 x0 y0 + p2 (r² + 2 x0²)
y_distorted = y0 (1 + k1 r² + k2 r⁴ + k3 r⁶) + p1 (r² + 2 y0²) + 2 p2 x0 y0
u = f_x · x_distorted + c_x
v = f_y · y_distorted + c_y

where (x0, y0) are the original coordinates of a feature point in the image before distortion correction, (x_distorted, y_distorted) are the coordinates after distortion correction, (u, v) are the pixel coordinates of the feature point in the corrected image, f_x, f_y, c_x, c_y are the monocular camera intrinsics, and r is the distance from the feature point to the coordinate origin in the camera coordinate system;
2. Compute the frequency p_m with which the gray level g_m occurs in the image, then compute the cumulative sum s_m = Σ_{j=0}^{m} p_j, where p_j is the occurrence frequency of any earlier gray level; round and scale s_m to obtain S_m = int[255 × s_m + 0.5]; determine the correspondence between the original gray levels and their rounded, scaled values, and perform histogram equalization of the original image based on this correspondence, where m indexes the gray levels, 0 ≤ m ≤ 255 and j ∈ [0, m].
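For illustration, a minimal OpenCV sketch of this preprocessing under the assumption of an already calibrated camera; the intrinsic and distortion values are placeholders, and cv2.equalizeHist implements the cumulative-frequency mapping S_m = int[255 × s_m + 0.5] described above.

```python
# Preprocessing sketch: undistort with calibrated intrinsics, then
# histogram-equalize. OpenCV orders distortion coefficients (k1, k2, p1, p2, k3).
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 640.0],    # fx, 0, cx  (placeholder values)
              [  0.0, 700.0, 360.0],    # 0, fy, cy
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.09, 0.001, 0.001, -0.01])

def preprocess(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    undistorted = cv2.undistort(gray, K, dist)
    return cv2.equalizeHist(undistorted)   # S_m = int(255 * s_m + 0.5) mapping
```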
The feature point detection and tracking performed on the preprocessed images proceed as follows:
Step 31: to prevent fast camera motion from trapping the optical-flow tracking in a local minimum, apply Gaussian filtering to the histogram-equalized image to obtain a blurred image, down-sample the blurred image to half the original height and width, and repeat the Gaussian filtering and down-sampling to build a complete image pyramid;
Step 32: knowing the vehicle speed v_{k−1} and steering wheel angle θ_{k−1} at time k−1, compute the front-wheel angle of the vehicle at time k−1, δ_{k−1} = θ_{k−1} / i, and the yaw rate ω_{k−1} = v_{k−1} / ρ_{k−1}, where i is the vehicle's steering gear ratio and ρ_{k−1} = l / tan δ_{k−1} is the turning radius of the vehicle at time k−1 under the Ackermann model of fig. 2;

from the yaw rate ω_{k−1} at time k−1 and the sampling interval Δt, compute the planar yaw angle of the vehicle's motion between time k−1 and time k, ψ = ω_{k−1} Δt, and hence the rotation matrix from time k−1 to time k:

R = [ cos ψ  −sin ψ ]
    [ sin ψ   cos ψ ]
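For illustration, a short sketch of this rotation prior computed from the chassis data under the Ackermann model of fig. 2; the parameter names steering_ratio and wheelbase are assumptions.

```python
# Rotation prior from chassis data: front-wheel angle from the steering
# wheel angle and gear ratio, yaw rate from speed and turning radius
# (omega = v / rho with rho = l / tan(delta)), then the 2-D rotation over dt.
import numpy as np

def rotation_prior(v, steering_wheel_angle, steering_ratio, wheelbase, dt):
    delta = steering_wheel_angle / steering_ratio   # front-wheel angle
    omega = v * np.tan(delta) / wheelbase           # yaw rate omega = v / rho
    psi = omega * dt                                # planar yaw between frames
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])
```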
Step 33: determine the feature points to be tracked in the image at time k−1 using the FAST corner detection rule;
the feature points are determined as follows: for each pixel, compare its gray value with those of the 16 pixels on the surrounding circle of radius 3; if the gray-value difference between the pixel and at least 12 of those 16 pixels exceeds a set threshold, the pixel is taken to be a FAST corner, i.e. a feature point to be tracked;
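A minimal detection sketch using OpenCV's FAST implementation, which applies the same 16-pixel circle test; the threshold value is illustrative.

```python
# FAST corner detection sketch: flags a pixel as a corner when enough of
# the 16 pixels on the radius-3 circle differ from it beyond the threshold.
import cv2

fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)

def detect_corners(gray):
    keypoints = fast.detect(gray, None)
    return cv2.KeyPoint_convert(keypoints)   # N x 2 array of (x, y) coordinates
```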
Step 34: rotate the feature points in the image at time k−1 with the rotation matrix, take the rotated coordinates as initial estimates for optical-flow tracking, and feed the image pyramid and initial estimates to the LK optical-flow algorithm. The pixel offset is decomposed into a transferred optical-flow vector and a residual optical-flow vector, where the transferred vector is the offset propagated from the previous pyramid level to the current level and is initialized to 0 at the top level of the pyramid. Under the gray-level-constancy assumption, the matching-error function is

ε(v_x, v_y) = Σ_{x = P_x − w_x}^{P_x + w_x} Σ_{y = P_y − w_y}^{P_y + w_y} [ P^L(x, y) − C^L(x + t_x^L + v_x, y + t_y^L + v_y) ]²

where P_x, P_y are the pixel coordinates on the X and Y axes; w_x, w_y are the window half-sizes on the X and Y axes for optical-flow tracking (pixels inside the window are assumed to share the same motion); P^L is level L of the image pyramid built from the previous frame; C^L is level L of the pyramid built from the current frame; t_x^L, t_y^L are the X and Y components of the transferred optical-flow vector; and v_x, v_y are the motion components of the pixel on the X and Y axes.

The above holds only when the pixel displacement is very small, so the process is iterated to make the solution more accurate: the residual optical-flow vector obtained at the n-th iteration is v_n = v_{n−1} + η_n, where η_n is the residual-flow increment given by the error function at the n-th iteration. Iteration stops when ‖η_n‖ falls below a set threshold or the set number of iterations is reached, and the final pixel offset is d = t_0 + v_0, where t_0 is the transferred optical-flow vector at level 0 after iteration ends and v_0 is the residual optical-flow vector at level 0 after iteration ends;
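For illustration, a sketch of step 34 using OpenCV's pyramidal LK tracker, which builds the image pyramids and iterates the residual flow internally; rotating the points about the image center with the chassis-predicted rotation R is a simplification of the full geometry, and the window size, pyramid depth, and stopping criteria are illustrative.

```python
# LK tracking with a chassis-predicted initial guess: rotate last frame's
# points by R about the image center, then refine with pyramidal LK using
# OPTFLOW_USE_INITIAL_FLOW so the rotated coordinates seed the search.
import cv2
import numpy as np

def track(prev_gray, cur_gray, prev_pts, R):
    c = np.array([prev_gray.shape[1] / 2.0, prev_gray.shape[0] / 2.0])
    init = ((prev_pts - c) @ R.T + c).astype(np.float32)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray,
        prev_pts.astype(np.float32).reshape(-1, 1, 2),
        init.reshape(-1, 1, 2).copy(),
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    ok = status.ravel() == 1
    return prev_pts[ok], cur_pts.reshape(-1, 2)[ok]
```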
Step 35: if too few feature points are tracked successfully in step 34 for the observation state vector formed from the landmarks converted from the tracked points to position the vehicle accurately, divide the image into an 8 × 8 grid, detect the FAST corners in the image again using the FAST corner rule, remove outliers among them with 2-Point RANSAC to discard wrongly associated data, and supplement the inlier FAST corners into the feature point set so that each grid cell contains at least 10 feature points; this set serves as the feature points to be tracked in the next positioning cycle, ensuring the accuracy of subsequent positioning.
Outliers are removed as follows: randomly select two pairs of FAST corners from the two frames, solve for the camera translation vector t between the frames from these two pairs, compute the error of the remaining FAST corners with respect to t, and split inliers from outliers by a set error threshold of 2 × 10⁻⁷; then solve for the translation vector again from the resulting inlier set, iterating in a loop and outputting the set that retains the most inliers as the result.
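For illustration, a sketch of the 2-Point RANSAC step: with the frame-to-frame rotation R known from the chassis, the epipolar constraint p2 · (t × R p1) = 0 is linear in the translation t, so two correspondences determine t up to scale as the cross product of their constraint rows. The bearing vectors are assumed already normalized by the camera intrinsics, the iteration count is an assumption, and the default threshold follows the description above.

```python
# 2-Point RANSAC for the translation under a known rotation. Each match
# contributes one row a_i = (R p1_i) x p2_i with a_i . t = 0; two rows give
# t = a_1 x a_2 up to scale, and |a_j . t| scores the remaining matches.
import numpy as np

def two_point_ransac(p1, p2, R, iters=100, thresh=2e-7):
    """p1, p2: N x 3 unit bearing vectors in frames k-1 and k; returns inlier mask."""
    a = np.cross((R @ p1.T).T, p2)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    best = np.zeros(len(p1), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(len(p1), size=2, replace=False)
        t = np.cross(a[i], a[j])
        norm = np.linalg.norm(t)
        if norm < 1e-12:          # degenerate sample, rows nearly parallel
            continue
        inliers = np.abs(a @ (t / norm)) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```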
Feature points are supplemented as follows: to distribute the feature points uniformly over the whole image, divide the image into equal grid cells and specify the range of feature-point counts each cell may hold; count the feature points in each cell; if the count is below the minimum, supplement the cell with the strongest-featured points from the inlier set; if the count exceeds the maximum, sort the feature points by feature strength and discard the weakest until the count is within range.
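A sketch of the grid-based supplementation, assuming corner response scores stand in for the "feature strength" above; function and parameter names are illustrative.

```python
# Grid replenishment sketch: split the image into an 8 x 8 grid and top up
# each cell with the strongest candidate corners until it holds at least
# min_per_cell feature points.
import numpy as np

def replenish(tracked, candidates, scores, shape, grid=8, min_per_cell=10):
    h, w = shape

    def cell_of(pts):
        row = np.minimum(pts[:, 1].astype(int) * grid // h, grid - 1)
        col = np.minimum(pts[:, 0].astype(int) * grid // w, grid - 1)
        return row * grid + col

    counts = (np.bincount(cell_of(tracked), minlength=grid * grid)
              if len(tracked) else np.zeros(grid * grid, dtype=int))
    keep = list(tracked)
    cand_cells = cell_of(candidates)
    for idx in np.argsort(-scores):          # strongest corners first
        c = cand_cells[idx]
        if counts[c] < min_per_cell:
            keep.append(candidates[idx])
            counts[c] += 1
    return np.asarray(keep)
```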
The invention eliminates and augments the landmark coordinates in the state quantities according to the following rules:
when a new feature point has been tracked in three consecutive frames of images, triangulate it between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system, and add them to the state vector;
and if the feature point corresponding to a landmark in the state vector has not been observed for ten consecutive frames, the landmark's update is finished and it is removed from the state vector.
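For illustration, a minimal bookkeeping sketch of these two rules; the Track and Landmark containers and the triangulate callback are assumptions standing in for the triangulation of step 5.

```python
# State add/remove sketch: a feature tracked in three consecutive frames is
# triangulated from its first and last observations and added to the state;
# a landmark unseen for ten consecutive frames is dropped.
from dataclasses import dataclass, field

ADD_AFTER, DROP_AFTER = 3, 10

@dataclass
class Track:
    frames: list = field(default_factory=list)   # (frame_id, pixel) history

@dataclass
class Landmark:
    xy_world: tuple
    missed: int = 0

def update_states(tracks, landmarks, triangulate):
    # augment: new feature tracked for three consecutive frames
    for tid, tr in tracks.items():
        if len(tr.frames) == ADD_AFTER and tid not in landmarks:
            landmarks[tid] = Landmark(triangulate(tr.frames[0], tr.frames[-1]))
    # eliminate: landmark unobserved for ten consecutive frames
    for tid in list(landmarks):
        lm = landmarks[tid]
        lm.missed = 0 if tid in tracks else lm.missed + 1
        if lm.missed >= DROP_AFTER:
            del landmarks[tid]
```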
The invention computes the discrete equation for the vehicle's position coordinates and yaw angle at time k from the position coordinates and yaw angle of the vehicle at time k−1, as shown in equation (1):

x_k = x_{k−1} + v_k Δt cos φ_{k−1}
y_k = y_{k−1} + v_k Δt sin φ_{k−1}     (1)
φ_k = φ_{k−1} + (v_k tan δ_k / l) Δt

where x_k, y_k, φ_k are the abscissa, ordinate, and yaw angle of the vehicle in the world coordinate system at time k; x_{k−1}, y_{k−1}, φ_{k−1} are the abscissa, ordinate, and yaw angle of the vehicle in the world coordinate system at time k−1; Δt is the adjacent sampling interval; v_k, δ_k are the vehicle speed and front-wheel angle at time k; and l is the wheelbase;
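For illustration, equation (1) transcribed directly as a Python sketch, assuming the forward-Euler discretization written above:

```python
# Discrete bicycle-model propagation of equation (1): pose = (x, y, phi),
# v is the speed, delta the front-wheel angle, l the wheelbase.
import numpy as np

def motion_model(pose, v, delta, dt, l):
    x, y, phi = pose
    return np.array([x + v * dt * np.cos(phi),
                     y + v * dt * np.sin(phi),
                     phi + v * np.tan(delta) / l * dt])
```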
adding the landmark position coordinates in the world coordinate system at time k to the vehicle's discrete equation yields the predicted state vector X_k at time k, as shown in equation (2):

X_k = f(X_k', U_k) = [ x_k, y_k, φ_k, x_k^1, y_k^1, …, x_k^N, y_k^N ]ᵀ, with (x_k^i, y_k^i) = (x_{k−1}^i, y_{k−1}^i)     (2)

where U_k is the control input at time k, U_k = (v_k, δ_k); N is the total number of landmark points; and x_k^i, y_k^i are the abscissa and ordinate of the i-th landmark point in the world coordinate system at time k, 1 ≤ i ≤ N;
the observation equation giving the observation Z_k induced by X_k is shown in equation (3):

Z_k = h(X_k) + ξ_k,  with r_i = √( (x_k^i − x_k)² + (y_k^i − y_k)² ) and θ_i = arctan2( y_k^i − y_k, x_k^i − x_k ) − φ_k for each landmark i     (3)

where h(X_k) is the observation function; ξ_k is the system's observation noise, a Gaussian with mean 0 and covariance matrix Q; r_i is the distance from the i-th landmark point to the vehicle's center of mass; θ_i is the angle between the line connecting the i-th landmark point to the vehicle's center of mass and the vehicle's longitudinal axis; and x_k, y_k are the abscissa and ordinate of the vehicle in the world coordinate system at time k.
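For illustration, a sketch of the observation function h of equation (3), assuming the state vector is laid out as [x, y, φ, x¹, y¹, …, x^N, y^N]:

```python
# Range-bearing observation of equation (3): for each landmark, the distance
# to the vehicle and the angle between the landmark direction and the
# vehicle's longitudinal axis.
import numpy as np

def observation_model(state, n_landmarks):
    x, y, phi = state[:3]
    z = []
    for i in range(n_landmarks):
        lx, ly = state[3 + 2 * i], state[4 + 2 * i]
        z.append(np.hypot(lx - x, ly - y))           # r_i
        z.append(np.arctan2(ly - y, lx - x) - phi)   # theta_i
    return np.array(z)
```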
The process in step 6 of predicting X_k from X_k' and the chassis information, and of using Z_k to update the predicted state vector X_k to obtain the vehicle's position at time k, is as follows:
step 61, prediction
From the discrete equation for the vehicle's position coordinates and yaw angle, obtain the prior mean X̂_k⁻ and prior covariance matrix Σ_xx⁻(k) of the predicted state vector X_k at time k, as shown in equation (4):

X̂_k⁻ = f(X_k', U_k)
Σ_xx⁻(k) = J_t Σ_xx'(k) J_tᵀ + J_u Σ_u J_uᵀ     (4)

where X_k' is the observation state vector at time k; Σ_xx'(k) is the covariance matrix of X_k', i.e. the predicted covariance matrix at time k−1 after state-quantity elimination and augmentation; X_{k−1} is the predicted state vector at time k−1; J_t is the Jacobian matrix of f(X_k', U_k) with respect to X_k', J_t = [[J_ξ, 0], [0, I]], where J_ξ is the Jacobian of the discrete equation with respect to the vehicle pose and I is the identity matrix; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k', U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; and J_uᵀ is the transpose of J_u;
step 62, update
Calculate the Kalman gain K_k at time k using equations (5) and (6), use the Kalman gain to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k⁺ and posterior covariance matrix Σ_xx⁺(k), and from these obtain the pose of the vehicle in the world coordinate system at time k, autonomously fixing the vehicle's final position at time k:

K_k = Σ_xx⁻(k) H_kᵀ (H_k Σ_xx⁻(k) H_kᵀ + Q)⁻¹     (5)
X̂_k⁺ = X̂_k⁻ + K_k (Z_k − h(X̂_k⁻)),  Σ_xx⁺(k) = (I − K_k H_k) Σ_xx⁻(k)     (6)

where H_k is the Jacobian matrix of the observation function h with respect to the state.
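For illustration, a compact EKF sketch of steps 61 and 62, reusing the motion_model and observation_model sketches above; the Jacobians J_t, J_u, and H_k are approximated numerically here to keep the sketch short, whereas the description derives them analytically, and angle wrap-around of the bearing residual is ignored.

```python
# EKF predict/update sketch for equations (4)-(6). X is the full state,
# P its covariance, u = (v, delta) the control, z the stacked (r_i, theta_i)
# measurements, Q_u the control covariance, Q the observation covariance.
import numpy as np

def jacobian(func, x, eps=1e-6):
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x)); dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(X, P, u, z, dt, l, Q_u, Q, n_landmarks):
    f = lambda s: np.concatenate([motion_model(s[:3], u[0], u[1], dt, l), s[3:]])
    g = lambda c: np.concatenate([motion_model(X[:3], c[0], c[1], dt, l), X[3:]])
    Jt, Ju = jacobian(f, X), jacobian(g, np.asarray(u, dtype=float))
    X_prior = f(X)                                   # prior mean, eq. (4)
    P_prior = Jt @ P @ Jt.T + Ju @ Q_u @ Ju.T        # prior covariance, eq. (4)
    h = lambda s: observation_model(s, n_landmarks)
    H = jacobian(h, X_prior)
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + Q)   # gain, eq. (5)
    X_post = X_prior + K @ (z - h(X_prior))                    # posterior, eq. (6)
    P_post = (np.eye(len(X)) - K @ H) @ P_prior
    return X_post, P_post
```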
the method comprises the steps of using a vehicle provided with an industrial personal computer, a monocular camera, a steering wheel corner sensor and a wheel speed sensor as an experimental vehicle, drawing a guide line on the ground as a reference road sign, respectively controlling the experimental vehicle to move along the guide line according to the autonomous positioning method and the traditional ORB-SLAM2 algorithm, obtaining the real movement track of the experimental vehicle, and knowing from the experimental result that the traditional positioning algorithm fails when the ambient light is uneven, the light is dark and the guide line texture is not rich, so that the vehicle cannot be accurately positioned.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (9)
1. A system for autonomous vehicle positioning based on vision and chassis information, comprising:
the visual perception module is used for acquiring an environment image in the running process of the vehicle and inputting the acquired image into the data alignment module;
the chassis motion information acquisition module is used for acquiring wheel speed data and steering wheel angle data during vehicle operation and inputting the acquired data into the data alignment module;
the data alignment module is used for aligning the images, wheel speed data, and steering wheel angle data by timestamp, inputting the time-sequenced image data into the image processing module, and inputting the wheel speed and steering wheel angle data into the motion prediction module;
the image processing module is used for preprocessing the aligned images, detecting and tracking the feature points in the images, converting the feature-point coordinates in the images into landmark coordinates in a world coordinate system, and inputting the landmark coordinates into the state-quantity elimination and augmentation module;
the state-quantity elimination and augmentation module is used for updating the landmark points to obtain state vectors and inputting the state vectors into the motion prediction module;
the motion prediction module is used for predicting the motion of the vehicle according to the vehicle chassis information and inputting a prediction result into the observation updating module;
and the observation updating module is used for correcting the prediction result by using the observed landmark point coordinates to obtain the real-time position information of the vehicle.
2. The vehicle autonomous positioning system based on vision and chassis information of claim 1, wherein the visual perception module is a monocular camera and the chassis motion information acquisition module comprises a wheel speed sensor and a steering wheel angle sensor.
3. The vision and chassis information based vehicle autonomous positioning system of claim 1, wherein the image processing module comprises an image preprocessing module, a feature point detection module, and a feature point tracking module;
the image preprocessing module is used for preprocessing the images in time sequence, the feature point detection module is used for detecting the feature points in each image, and the feature point tracking module is used for tracking the movement of the feature points across the time-sequence images.
4. The autonomous positioning method of the vision and chassis information based autonomous positioning system of a vehicle according to any of claims 1 to 3, comprising the steps of:
step 1, a visual perception module arranged at the top center of the vehicle windshield collects roadside environment images while the vehicle is running, the chassis motion information acquisition module collects wheel speed data and steering wheel angle data during vehicle operation, and both modules input the collected data into the data alignment module;
step 2, the data alignment module aligns the images, wheel speed data, and steering wheel angle data by timestamp, inputs the aligned image sequence into the image processing module, and inputs the wheel speed and steering wheel angle data into the motion prediction module;
step 3, the image processing module preprocesses all images, detects the feature points in each image, and tracks the coordinates of the feature points across the images;
step 4, establishing a world coordinate system with the vehicle's center of mass at the parking lot entrance as the coordinate origin, the vehicle's longitudinal axis as the X axis, and the vehicle's transverse axis as the Y axis, and determining the vehicle pose at time k−1;
step 5, converting the coordinates of the feature points at time k in the image into landmark coordinates in the world coordinate system by triangulation, inputting the landmark coordinates into the state-quantity elimination and augmentation module for elimination and augmentation of the state quantities, obtaining the observation state vector X_k' at time k, and inputting X_k' into the motion prediction module, where X_k' comprises the vehicle pose information and the coordinates of each landmark point in the world coordinate system;
step 6, the motion prediction module predicts from the observation state vector X_k' at time k and the chassis information to obtain the predicted state vector X_k at time k and inputs X_k into the observation update module; the observation update module calculates the observation Z_k induced by the predicted state vector X_k at time k and uses Z_k to update X_k, obtaining the vehicle's position at time k and thereby realizing autonomous positioning.
5. The method of claim 4, wherein the preprocessing of the image in step 3 comprises a distortion removal process and a histogram equalization process.
6. The vehicle autonomous positioning method based on vision and chassis information of claim 4, wherein step 3 determines the coordinates of the feature points in the image at time k according to the following steps:
step 31, obtaining a blurred version of the original image by Gaussian filtering, down-sampling the blurred image to half the height and width of the original, and repeating the Gaussian filtering and down-sampling to build an image pyramid;
step 32, calculating the rotation matrix between time k−1 and time k, R = [ cos ψ, −sin ψ; sin ψ, cos ψ ], where ψ is the planar yaw angle of the vehicle's motion between time k−1 and time k;
step 33, determining feature points to be tracked in the image at the moment k-1 by using a FAST corner detection rule;
step 34, rotating the feature points in the image at time k−1 with the rotation matrix, taking the rotated feature-point coordinates as initial estimates for optical-flow tracking, feeding the image pyramid and initial estimates to the LK optical flow as input, and tracking the coordinates of the feature points in the image at time k by the LK optical-flow method;
and step 35, if too few feature points are tracked successfully, dividing the image into an 8 × 8 grid, determining the FAST corners of the image, removing outliers with 2-Point RANSAC, and supplementing the inliers among the FAST corners into the feature point set until each grid cell contains at least 10 feature points, which serve as the feature points to be tracked in the next positioning cycle.
7. The vehicle autonomous positioning method based on vision and chassis information of claim 4, wherein in step 5 the state quantities are eliminated and augmented according to the following rules:
if a new feature point has been tracked in three consecutive frames of images, triangulate it between the first and last of those frames to obtain the landmark's coordinates in the world coordinate system, and add them to the state vector;
and if the feature point corresponding to a landmark in the state vector has not been observed for ten consecutive frames, remove that landmark's coordinates from the state vector.
8. The vehicle autonomous positioning method based on vision and chassis information of claim 4, wherein the predicted state vector X_k at time k in step 6 is calculated as shown in equation (2):

X_k = f(X_k', U_k) = [ x_{k−1} + v_k Δt cos φ_{k−1}, y_{k−1} + v_k Δt sin φ_{k−1}, φ_{k−1} + (v_k tan δ_k / l) Δt, x_k^1, y_k^1, …, x_k^N, y_k^N ]ᵀ     (2)

wherein U_k is the control input at time k, U_k = (v_k, δ_k); v_k, δ_k are the vehicle speed and front-wheel angle at time k; x_{k−1}, y_{k−1}, φ_{k−1} are the abscissa, ordinate, and yaw angle of the vehicle in the world coordinate system at time k−1; N is the total number of landmark points; x_k^i, y_k^i (1 ≤ i ≤ N) are the coordinates of the i-th landmark point, carried over unchanged from time k−1; Δt is the adjacent sampling interval; φ_k is the yaw angle of the vehicle at time k; and l is the wheelbase;
the observation Z_k induced by X_k is calculated as shown in equation (3):

Z_k = h(X_k) + ξ_k,  with r_i = √( (x_k^i − x_k)² + (y_k^i − y_k)² ) and θ_i = arctan2( y_k^i − y_k, x_k^i − x_k ) − φ_k for each landmark i     (3)

wherein ξ_k is the observation noise, a Gaussian with mean 0 and covariance matrix Q; r_i is the distance from the i-th landmark point to the vehicle's center of mass; θ_i is the angle between the line connecting the i-th landmark point to the vehicle's center of mass and the vehicle's longitudinal axis; and x_k, y_k are the abscissa and ordinate of the vehicle in the world coordinate system at time k.
9. The method for autonomous vehicle positioning based on vision and chassis information of claim 4, wherein the prediction and update in step 6 are as follows:
step 61, prediction
Computing the prior mean X̂_k⁻ and prior covariance matrix Σ_xx⁻(k) of the predicted state vector at time k using equation (4):

X̂_k⁻ = f(X_k', U_k)
Σ_xx⁻(k) = J_t Σ_xx'(k) J_tᵀ + J_u Σ_u J_uᵀ     (4)

wherein X_k' is the observation state vector at time k; Σ_xx'(k) is the covariance matrix of X_k'; J_t is the Jacobian matrix of f(X_k', U_k) with respect to X_k', J_t = [[J_ξ, 0], [0, I]], where J_ξ is the Jacobian of the discrete equation with respect to the vehicle pose and I is the identity matrix; J_tᵀ is the transpose of J_t; J_u is the Jacobian matrix of f(X_k', U_k) with respect to U_k; Σ_u is the covariance matrix of U_k; and J_uᵀ is the transpose of J_u;
step 62, update
Calculating the Kalman gain K_k at time k, and using K_k to update the prior mean and prior covariance matrix of X_k to obtain the posterior mean X̂_k⁺ and posterior covariance matrix Σ_xx⁺(k) of X_k, thereby obtaining the pose of the vehicle in the world coordinate system at time k and autonomously fixing the vehicle's final position at time k, the calculation being as shown in equations (5) and (6):

K_k = Σ_xx⁻(k) H_kᵀ (H_k Σ_xx⁻(k) H_kᵀ + Q)⁻¹     (5)
X̂_k⁺ = X̂_k⁻ + K_k (Z_k − h(X̂_k⁻)),  Σ_xx⁺(k) = (I − K_k H_k) Σ_xx⁻(k)     (6)

wherein H_k is the Jacobian matrix of the observation function h with respect to the state.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011402425.7A (CN112506195B) | 2020-12-02 | 2020-12-02 | Vehicle autonomous positioning system and positioning method based on vision and chassis information
Publications (2)
Publication Number | Publication Date |
---|---|
CN112506195A true CN112506195A (en) | 2021-03-16 |
CN112506195B CN112506195B (en) | 2021-10-29 |
Family
ID=74969850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011402425.7A Active CN112506195B (en) | 2020-12-02 | 2020-12-02 | Vehicle autonomous positioning system and positioning method based on vision and chassis information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112506195B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1991009275A2 (en) * | 1989-12-11 | 1991-06-27 | Caterpillar Inc. | Integrated vehicle positioning and navigation system, apparatus and method |
CN108280847A (en) * | 2018-01-18 | 2018-07-13 | 维森软件技术(上海)有限公司 | A kind of vehicle movement track method of estimation |
CN109631896A (en) * | 2018-07-23 | 2019-04-16 | 同济大学 | A kind of parking lot autonomous parking localization method based on vehicle vision and motion information |
CN111105640A (en) * | 2018-10-26 | 2020-05-05 | 福特全球技术公司 | System and method for determining vehicle position in parking lot |
CN111238472A (en) * | 2020-01-20 | 2020-06-05 | 北京四维智联科技有限公司 | Real-time high-precision positioning method and device for full-automatic parking |
Non-Patent Citations (9)

- GUOJUN WANG, "A Point Cloud-Based Robust Road Curb Detection and Tracking Method", IEEE Access
- LUKASZ ADRJANOWICZ, "Single Camera Based Location Estimation with Dissimilarity Measurement", HSI 2013
- QINYU JIANG, "Research on intelligent vehicle high-speed steering control based on CCD sensor", 2011 CCIE
- RONGBAO CHEN, "Self-localization of Mobile Robot Based on Monocular and Extended Kalman Filter", ICEMI 2009
- SHAHJAHAN MIAH, "An Innovative Multi-Sensor Fusion Algorithm to Enhance Positioning Accuracy of an Instrumented Bicycle", IEEE Transactions on Intelligent Transportation Systems
- 张素民, "Simulation environment for intelligent vehicle trajectory planning and following", Proceedings of the 2014 SAE-China Annual Congress
- 彭文正, "Positioning and velocity estimation of autonomous vehicles based on multi-sensor information fusion", Chinese Journal of Sensors and Actuators (传感技术学报)
- 汪涛, "Research on positioning technology for intelligent vehicles", China Master's Theses Full-text Database
- 邱雪娜, "Moving-target tracking and localization with binocular vision based on a sequential detection mechanism", Robot (机器人)
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113064193A (en) * | 2021-03-25 | 2021-07-02 | 上海智能新能源汽车科创功能平台有限公司 | Combined positioning system based on vehicle road cloud cooperation |
CN113063414A (en) * | 2021-03-27 | 2021-07-02 | 上海智能新能源汽车科创功能平台有限公司 | Vehicle dynamics pre-integration construction method for visual inertia SLAM |
CN113074754A (en) * | 2021-03-27 | 2021-07-06 | 上海智能新能源汽车科创功能平台有限公司 | Visual inertia SLAM system initialization method based on vehicle kinematic constraint |
CN113341968A (en) * | 2021-06-01 | 2021-09-03 | 山东建筑大学 | Accurate parking system and method for multi-axis flat car |
CN113848696A (en) * | 2021-09-15 | 2021-12-28 | 北京易航远智科技有限公司 | Multi-sensor time synchronization method based on position information |
CN113848696B (en) * | 2021-09-15 | 2022-09-16 | 北京易航远智科技有限公司 | Multi-sensor time synchronization method based on position information |
CN114018284A (en) * | 2021-10-13 | 2022-02-08 | 上海师范大学 | Wheel speed odometer correction method based on vision |
CN114018284B (en) * | 2021-10-13 | 2024-01-23 | 上海师范大学 | Wheel speed odometer correction method based on vision |
CN114212078A (en) * | 2022-01-18 | 2022-03-22 | 武汉光庭信息技术股份有限公司 | Method and system for detecting self-vehicle positioning precision in automatic parking |
CN114212078B (en) * | 2022-01-18 | 2023-10-10 | 武汉光庭信息技术股份有限公司 | Method and system for detecting positioning accuracy of self-vehicle in automatic parking |
Also Published As
Publication number | Publication date |
---|---|
CN112506195B (en) | 2021-10-29 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |