CN108759823B - Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching - Google Patents


Info

Publication number
CN108759823B
CN108759823B (application CN201810522641.1A)
Authority
CN
China
Prior art keywords
image
vehicle
images
longitude
latitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810522641.1A
Other languages
Chinese (zh)
Other versions
CN108759823A (en
Inventor
林旭
李梓宁
朱林炯
王文夫
潘之杰
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201810522641.1A priority Critical patent/CN108759823B/en
Publication of CN108759823A publication Critical patent/CN108759823A/en
Application granted granted Critical
Publication of CN108759823B publication Critical patent/CN108759823B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/16: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3415: Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G01C21/3446: Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/0219: Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory ensuring the processing of the whole working surface

Abstract

The invention discloses a method for positioning and correcting the course of a low-speed autonomous vehicle on a designated road based on image matching. The method shortens positioning processing time by combining the SURF matching algorithm with the FLANN fast search algorithm, effectively reduces the influence of the processing delay of a binocular-vision SLAM algorithm, and is highly robust. Objects on both sides of the road are identified by the Hough transform to judge whether the vehicle deviates to the left or right, and real-time deviation correction is performed; the algorithm is simple to implement and runs in real time. The method relies only on the autonomous vehicle and binocular vision sensors and requires no major modification of the vehicle, reducing vehicle complexity and cost. During positioning and deviation correction, images are collected from the binocular cameras and control instructions are issued to the target vehicle without any manual participation, so positioning and deviation correction are automated in the true sense.

Description

Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method for positioning and correcting the course of a low-speed autonomous vehicle on a designated road based on image matching.
Background
With the spread of automatic driving technology, positioning that is simultaneously high-precision, low-cost and efficient has become a research hotspot. At present there are various methods for the self-positioning of autonomous vehicles, which fall mainly into two directions: internal-sensor positioning and external-sensor positioning.
An internal sensor measures and accumulates pose changes by sensing the vehicle's own motion, and is mainly applied to automatic driving systems with an accurately known starting point and a fixed speed and direction. There are two main types: odometers and inertial sensors (gyroscopes, accelerometers). A wheel encoder (also called a wheel odometer) counts wheel revolutions and, with the wheel diameter known in advance, obtains speed and displacement; taking two differential wheels as an example, the change in heading of the moving body can be calculated from the difference in the number of turns of the two wheels. An inertial sensor measures linear acceleration and angular velocity, from which accumulated displacement and angle changes are obtained by integration. Internal-sensor positioning depends on an initial pose: relative poses are composed on top of the initial pose by continuous integration, yielding the pose at each new moment. Internal-sensor positioning has the following defects:
(1) It does not rely on the external environment and makes no prior assumptions about it, so it cannot sense changes in the vehicle's surroundings; if the vehicle deviates from the specified driving direction, the deviation cannot be corrected.
(2) It accumulates pose changes, so absolute positioning requires an accurate initial pose; if the initial pose is wrong, all subsequent positioning is wrong.
(3) Since the method accumulates changes, any error introduced at each step, however small, grows into a large error over long periods of accumulation, and this accumulated error cannot be estimated.
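The accumulation problem can be made concrete with a small dead-reckoning sketch (not part of the patent; a generic differential-drive model with invented numbers). Both wheels rolling equally keeps the heading; a 0.1 % calibration error on one wheel, invisible at any single step, grows into a heading error of about 2 rad over 1000 steps:

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Advance an (x, y, theta) pose from the arc lengths rolled by the
    left and right wheels of a differential-drive vehicle."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # displacement of the body center
    d_theta = (d_right - d_left) / wheel_base  # heading change from the wheel difference
    # Midpoint rule: move along the average heading of this step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

# Straight run: both wheels roll 1 m per step, heading stays 0.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, 1.0, 1.0, wheel_base=0.5)
print(pose)  # -> (10.0, 0.0, 0.0)

# A 0.1 % over-read on one wheel accumulates without bound:
drifted = (0.0, 0.0, 0.0)
for _ in range(1000):
    drifted = dead_reckon(drifted, 1.0, 1.001, wheel_base=0.5)
print(round(drifted[2], 3))  # heading error of ~2.0 rad after 1 km
```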
For external-sensor positioning, the principle is to position the vehicle by sensing the surrounding environment. The main categories include GPS receivers, 2D monocular cameras, binocular cameras, depth cameras and lidar; devices such as a GPS receiver, lidar or camera are installed in the on-board electronic system in advance, and the on-board system receives the sensor information to acquire the vehicle's position. Used alone, external sensors have the following defects:
(1) GPS cannot acquire position information at high frequency (about 10 Hz); this suffices for slow-moving bodies, but an autonomous vehicle travelling at high speed needs pose updates at a higher frequency. GPS signals are also easily blocked: indoor positioning essentially cannot use GPS, and a car passing through a tunnel goes without GPS signals for a long time. Moreover, GPS has a large floating error, with accuracy only guaranteed within a range of tens of meters, whereas outdoor positioning requirements are strict, so GPS alone cannot solve the positioning problem.
(2) In automatic driving and in the acquisition and positioning of high-precision maps, multi-line lidar is generally used; its drawbacks are that it is too expensive to deploy on ordinary autonomous vehicles and that its processing algorithms take too long, so it does not meet the requirements of high precision, low cost and high efficiency.
The mainstream positioning sensors are described above; clearly a single sensor has both advantages and disadvantages for the positioning problem. In practical applications, multiple sensors must be combined to solve it, and several typical multi-sensor fusion cases are described below:
(1) GPS + IMU + odometer.
The global anchoring given by GPS can eliminate accumulated error, but its update frequency is low and its signal is easily blocked; the IMU (inertial measurement unit) and the wheel odometer update at high frequency but accumulate error. The most straightforward scheme is: when a GPS fix arrives, use the GPS position (the error is then the GPS accuracy); in the interval until the next GPS fix, accumulate pose with the IMU (angle) and odometer (displacement), so that the intermediate pose error is the initial GPS error plus the intermediate accumulated error.
Disadvantages: the error is bounded below by the GPS accuracy, which does not meet high-precision requirements; the error accumulated between fixes cannot be eliminated by GPS; and there is no real-time offset correction.
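As a minimal illustration of this fusion scheme (a sketch with invented numbers, not the patent's implementation), the loop below dead-reckons odometer increments between GPS fixes and snaps back to each fix, so the drift of a 1 % odometer bias is bounded by what accumulates over one GPS interval:

```python
import math

def fuse(gps_fixes, odo_steps, steps_per_fix):
    """Dead-reckon odometer increments, snapping to each GPS fix when one
    arrives, so the accumulated error is bounded by one GPS interval."""
    track = []
    x, y = gps_fixes[0]
    for i, (dist, heading) in enumerate(odo_steps):
        if i % steps_per_fix == 0 and i // steps_per_fix < len(gps_fixes):
            x, y = gps_fixes[i // steps_per_fix]  # reset: kills the drift so far
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
        track.append((x, y))
    return track

# 1 m/s vehicle, 100 Hz odometry (1 cm/step), 10 Hz GPS (10 steps per fix).
gps = [(i * 0.1, 0.0) for i in range(10)]  # perfect fixes, for clarity
odo = [(0.0101, 0.0)] * 100                # odometer over-reads by 1 %
track = fuse(gps, odo, steps_per_fix=10)
# Pure odometry would end at x = 1.01; with GPS resets the final error is
# only the drift accumulated since the last fix (~0.001 m).
print(round(track[-1][0], 4))
```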
(2) GPS + multi-line radar + high-precision map matching.
Global anchoring is given by GPS; in between, a radar SLAM front-end odometer accumulates pose, and image matching against a high-precision map provides something akin to back-end loop-closure optimization, fusing GPS, lidar and the known map for positioning.
Disadvantages: expensive, processing is not fast enough, and there is no real-time offset correction.
Disclosure of Invention
In view of the above, the invention provides a method for positioning and correcting the course of a low-speed autonomous vehicle on a designated road based on image matching, which fuses the data of high-frequency sensors (an odometer and an IMU inertial measurement unit) with a low-frequency SURF feature-extraction algorithm, so as to achieve high-precision positioning and real-time deviation correction of the vehicle along its lane.
A low-speed automatic driving vehicle positioning and deviation rectifying method on an appointed road based on image matching comprises the following steps:
(1) setting a number of reference points at fixed intervals along the designated road, driving the vehicle along the road at low speed, and at each reference point acquiring the environment images on both sides of the road, the accumulated mileage and the longitude and latitude of the vehicle with binocular vision sensors, an odometer and a GPS, so as to construct an image-longitude-latitude data set and a mileage-longitude-latitude data set;
(2) acquiring the accumulated mileage of the vehicle in real time, and extracting similar environment images from the image-longitude-latitude data set according to the current mileage to form an image set Ω;
(3) acquiring the environment images on both sides of the current road with the binocular vision sensors, giving four images P1~P4 in total; for any image Pi, extracting features and matching them against every image in the image set Ω, where i is a natural number with 1 ≤ i ≤ 4;
(4) extracting from the image set Ω the two images Q1 and Q2 whose feature-matching results with image Pi are best, looking up in the image-longitude-latitude data set the longitude and latitude Z1 and Z2 of the reference points corresponding to Q1 and Q2, and obtaining the positioning result of image Pi by a weighted sum of Z1 and Z2; after traversing all four images P1~P4 to obtain their positioning results, obtaining the geographic coordinate X of the geometric center of the current vehicle by combining these results with the positional relation between the binocular vision sensors and the geometric center of the vehicle;
(5) performing line-segment analysis on images P1~P4, obtaining the deviation angle Δα and the deviation amount Δβ of the current vehicle through deviation identification, and then calculating from Δα and Δβ the deviation state, the deviation distance ΔL and the actual direction angle αpm of the current vehicle.
Further, step (1) is implemented as follows: the two binocular vision sensors are installed on the two sides of the vehicle body; at each reference point the environment images on both sides of the road are collected and histogram equalization is applied to adjust their saturation and brightness; the processed environment images of each reference point, together with the longitude and latitude, form samples stored in the image-longitude-latitude data set, while the accumulated mileage of each reference point, together with the longitude and latitude, forms samples stored in the mileage-longitude-latitude data set. The two data sets contain the same number of samples, namely the number of reference points.
Further, step (2) is implemented as follows: first, the samples of the n reference points closest to the current mileage are extracted from the mileage-longitude-latitude data set, and their longitudes and latitudes are averaged to obtain the current longitude and latitude Z of the vehicle; at the same time, the samples of the same n reference points are extracted from the image-longitude-latitude data set, and their environment images form the image set Ω, where n is a natural number greater than 1.
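This lookup can be sketched without any dependencies (the data-set layout and n = 4 below are assumptions for illustration): given the current odometer reading, take the n reference samples nearest in mileage and average their longitude and latitude:

```python
import bisect

# Hypothetical mileage-longitude-latitude data set: rows of
# (accumulated mileage in metres, longitude, latitude), sorted by mileage.
DATASET = [(m, 120.0 + m * 1e-5, 30.0 + m * 2e-5) for m in range(0, 1000, 10)]

def locate(distance, n=4):
    """Average the lon/lat of the n reference samples whose mileage is
    closest to the current odometer reading (step 2 of the method)."""
    miles = [row[0] for row in DATASET]
    i = bisect.bisect_left(miles, distance)
    # Collect candidates around the insertion point, then keep the n closest.
    lo, hi = max(0, i - n), min(len(DATASET), i + n)
    nearest = sorted(DATASET[lo:hi], key=lambda row: abs(row[0] - distance))[:n]
    lon = sum(r[1] for r in nearest) / n
    lat = sum(r[2] for r in nearest) / n
    return lon, lat

# Reading of 255 m falls between the 250 m and 260 m reference points;
# the four nearest samples (240..270 m) average to mileage 255.
print(locate(255.0))
```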
Further, step (3) is implemented as follows: for image Pi, the middle image region of 1/3 width is cropped, its feature vector is obtained by the SURF feature-extraction algorithm, and feature matching against the images in the image set Ω is performed in turn by the Kd-tree-based FLANN (Fast Library for Approximate Nearest Neighbors) fast search algorithm.
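SURF descriptors live in OpenCV's contrib module and FLANN ships with OpenCV; to keep this sketch dependency-light, the snippet below substitutes a brute-force nearest-two search over synthetic descriptors, while keeping the nearest-neighbour query plus Lowe-style ratio test that the Kd-tree FLANN search would answer on real SURF vectors (all names and numbers are illustrative):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.7):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b (Euclidean) and keep the match only when the best is clearly
    better than the runner-up (Lowe's ratio test).  Brute force stands in
    here for the Kd-tree FLANN search named in the patent."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
base = rng.normal(size=(20, 64))                       # 20 synthetic 64-D "descriptors"
noisy = base + rng.normal(scale=0.01, size=base.shape)  # same scene, slight noise
matches = ratio_match(noisy, base)
print(len(matches))  # -> 20: every perturbed descriptor finds its original
```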
Further, in step (4), the two images Q1 and Q2 with the best feature-matching results with image Pi are the two images in the set Ω that share the largest number of matched feature points with Pi. In the weighted sum of the longitudes and latitudes Z1 and Z2, the weight of each is determined by the number of feature-point matches between the corresponding image Q1 or Q2 and image Pi: the greater the number of matches, the greater the weight.
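A minimal sketch of this weighting rule (the match counts and coordinates are invented): each candidate reference point's longitude and latitude is weighted by its share of the total matched feature points.

```python
def weighted_position(candidates):
    """candidates: [(lon, lat, n_matches), ...] for the best-matching
    reference images; each reference point is weighted by its share of
    the total number of matched feature points."""
    total = sum(n for _, _, n in candidates)
    lon = sum(lo * n for lo, _, n in candidates) / total
    lat = sum(la * n for _, la, n in candidates) / total
    return lon, lat

# Q1 matched 30 feature points, Q2 matched 10, so the result sits 3/4 of
# the way toward Q1's reference point (coordinates are invented).
lon, lat = weighted_position([(120.000, 30.000, 30), (120.004, 30.008, 10)])
print(round(lon, 6), round(lat, 6))  # -> 120.001 30.002
```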
Further, the specific process of calculating the current vehicle offset angle Δα and offset Δβ in step (5) is as follows:
5.1 Line-segment detection is performed on images P1~P4 by the Hough transform, giving the angle α and the midpoint position of each line segment in each image; a detected segment must satisfy |tan α| < tan 5° and have a length greater than 50 pixels.
5.2 Taking the normalized segment length as the weight, the angles α of all segments in the two left-side images of P1~P4 are weighted and summed to give ΔαL, and those in the two right-side images to give ΔαR; the offset angle Δα is the average of ΔαL and ΔαR.
5.3 From each segment's midpoint position, the lateral distance β of the midpoint relative to the road center line in the image is obtained; taking the normalized segment length as the weight, the lateral distances β of all segments in the two left-side images of P1~P4 are weighted and summed to give ΔβL, and those in the two right-side images to give ΔβR; the offset is then Δβ = ΔβL + ΔβR.
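The segment filter and the length-weighted averages of steps 5.1-5.3 can be sketched in pure Python on hypothetical Hough output (the segment endpoints and road-center coordinate are invented); the two rejection rules are the conditions above, |tan α| < tan 5° and length > 50 pixels:

```python
import math

MAX_SLOPE = math.tan(math.radians(5))  # |tan alpha| < tan 5 degrees
MIN_LEN = 50.0                         # pixels

def offsets(segments, road_center_x):
    """segments: [(x1, y1, x2, y2), ...] as a Hough detector would return.
    Returns the length-weighted mean segment angle (degrees) and the
    length-weighted mean lateral midpoint distance from the road centre
    line, after applying the two acceptance conditions above."""
    num_angle = num_beta = den = 0.0
    for x1, y1, x2, y2 in segments:
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        if length <= MIN_LEN or dx == 0 or abs(dy / dx) >= MAX_SLOPE:
            continue  # reject short or steep segments
        angle = math.degrees(math.atan2(dy, dx))
        beta = (x1 + x2) / 2.0 - road_center_x  # lateral midpoint offset
        num_angle += length * angle
        num_beta += length * beta
        den += length
    if den == 0.0:
        return 0.0, 0.0
    return num_angle / den, num_beta / den

segs = [(0, 100, 200, 104),   # long, nearly horizontal: kept
        (0, 0, 30, 1),        # shorter than 50 px: rejected
        (100, 0, 110, 200)]   # far steeper than 5 degrees: rejected
d_alpha, d_beta = offsets(segs, road_center_x=320)
print(round(d_alpha, 3), round(d_beta, 3))  # only the first segment contributes
```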
Further, in step (5), if the deviation angle Δα is positive the current vehicle is deviating to the right, and if Δα is negative it is deviating to the left; the deviation distance ΔL is obtained by the following equation:
[the equation appears only as an image (BDA0001675198580000041) in the source]
wherein: h is the road width and H is the vehicle width.
Further, in step (5) the direction angle αm of the line connecting the vehicle's two nearest track points is added to the deviation angle Δα to obtain the actual direction angle αpm of the current vehicle.
Compared with the prior art, the invention has the following beneficial technical effects:
(1) The invention fuses the data of internal and external sensors, combining their advantages, making up for the shortcomings of any single sensor, and improving accuracy and precision.
(2) The invention uses the SURF matching algorithm and the KD-tree-based FLANN fast search algorithm to match and search real-time images of both sides of the road, effectively reducing at image positioning points the positioning influence of the accumulated error of a lone odometer and improving precision to centimeter level; combined with left-right offset identification, it obtains accurate positioning of the vehicle on the set lane and can accurately calculate the longitudinal and lateral distances on the lane, with strong robustness.
(3) The invention uses offset identification and along-track positioning to judge whether the vehicle under test deviates from the designated automatic driving track, and calculates the left-right deviation distance and angle Δα; the algorithm is simple to implement and highly real-time.
Drawings
FIG. 1 is a schematic flow chart of the steps of the method of the present invention.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
As shown in FIG. 1, the automatic driving vehicle positioning and deviation rectifying method based on image matching of the invention comprises the following steps:
step 1: acquiring images of environments on two sides of a current specified road by using two binocular vision sensors, acquiring fixed distance setting reference points, acquiring an image at each reference point by using the vision sensors, performing histogram equalization processing on the images, adjusting the saturation and brightness of the images, and adding longitude and latitude labels as an environment image-longitude and latitude data set for subsequent identification; a mileage-latitude data set on a specified road is obtained using a speedometer or speed sensor plus GPS positioning data.
When collecting images on both sides of the road, the middle part of the road area in the live-action image is cropped as the target region according to the mounting position of the image collector on the vehicle, for example the middle 1/3 of the image; histogram equalization is then applied to the target region to adjust its saturation and brightness. Histogram equalization is a method of adjusting contrast using the image histogram. The basic idea is to transform the histogram of the original image into a uniform distribution: the image is non-linearly stretched and pixel values are reassigned so that the number of pixels in each gray range is approximately equal. This increases the dynamic range of the gray values and can enhance local contrast without affecting overall contrast. Cropping the middle 1/3 of the image for matching speeds up the image-matching process, reduces processing time, and avoids mismatches caused by the overlap and distortion of the edge regions of two consecutive images due to the camera's wide-angle limitations.
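The histogram-equalization step can be sketched in a few lines of numpy (a generic textbook implementation, not the patent's exact code; adjusting a colour image would apply the same mapping per channel or on a luminance channel):

```python
import numpy as np

def equalize(gray):
    """Classic histogram equalization on an 8-bit grayscale image: map
    each grey level through the normalized cumulative histogram so pixel
    values spread over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Standard equalization formula; values below the first occupied bin
    # are clipped back into [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

# A low-contrast ramp confined to grey levels [100, 131] stretches to [0, 255].
img = np.repeat(np.arange(100, 132, dtype=np.uint8), 8).reshape(16, 16)
out = equalize(img)
print(out.min(), out.max())  # -> 0 255
```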
In this embodiment, reference points at fixed intervals serve as the acquisition points for environment images. The two binocular vision sensors are mounted on the two sides of the vehicle, 1.5 to 2 meters from the road edge, facing the left and right road edges respectively so that the road edge lies in the middle of the field of view. They comprise left sensors L1 and L2 facing the left edge of the road and right sensors R1 and R2 facing the right edge; the distance between L1 and L2 is 10 cm, as is the distance between R1 and R2, and the image size is 640 × 480. An inertial-navigation positioning point is set about every 0.1 meter along the road and an image positioning point every 5-10 meters; at each image positioning point the vision sensors acquire images that are added to the environment data set, with reference-point coordinates corresponding one-to-one to the images in that set. To increase the distinctiveness of the environment images, irregular textures or patterns can be placed along the road edge. An odometer or mileage sensor is prepared for recording the accumulated distance and is used, together with the GPS data, to establish the mileage-longitude-latitude data set.
The specific steps for constructing the data sets are as follows: move along the designated road at a speed of 1 m/s, while:
- recording GPS track data at a frequency of 10 Hz;
- recording accumulated path data at a frequency of 100 Hz;
- recording image data on both sides at 2 Hz, for 3 s in every 10 s.
Step 2: Obtain the real-time accumulated distance on the designated road with the odometer; from the two rows of data closest to this distance in the data set and the relation between distance and longitude-latitude, compute the current longitude and latitude Z by weighting; then obtain from the environment data set the image set Ω = {Q1, Q2, ..., Q4n} within the range of Z.
Accurate accumulated-distance data are obtained from the odometer; low-precision GPS position information, accurate to roughly ten meters, is obtained from the prepared distance-longitude-latitude data set; a series of environment images of both sides of the road is then retrieved according to that range. Because GPS track data were recorded at 10 Hz during sampling, accumulated-distance data at 100 Hz, and image data on both sides at 2 Hz for 3 s in every 10 s, GPS position information corresponding to each odometer reading is guaranteed to exist.
In this embodiment the odometer is used in real time on the designated road to obtain the accumulated distance of the vehicle, and the corresponding longitude-latitude position Z is obtained by querying the mileage-longitude-latitude data set. If the T1_id value of one of the two queried rows (denoted t1_id) exists in the T1_id column of table2 of the image-longitude-latitude database, one round of image positioning is started and the current longitude and latitude are calculated; otherwise only inertial-navigation positioning is performed and no image information is processed.
Step 3: Acquire the current vehicle-side images P = {P1, P2, P3, P4} with the two binocular vision sensors and crop the middle 1/3 of their width to obtain P' = {P1', P2', P3', P4'}; obtain in turn the SURF feature vectors of Pm' (1 ≤ m ≤ 4) and Qi (1 ≤ i ≤ 4n), and match the feature vectors of Pm' and Qi in turn using the KD-tree fast search method.
This embodiment obtains the closest image matches based on SURF features and the KD-tree fast search algorithm; real-time vehicle-side images must be obtained from the binocular cameras on the left and right sides. The main steps are as follows:
3.1 From table2 of the database, take out the rows whose T1_id column value equals t1_id, and denote the set of these rows as row = {row1, row2, ..., row4n}.
3.2 From the img_addr column of these rows, obtain the nearby left and right environment image sets QL = {QL1, QL2, ..., QL2n} and QR = {QR1, QR2, ..., QR2n}.
3.3 Obtain the current vehicle-side images PL = {PL1, PL2} and PR = {PR1, PR2} through the left and right cameras.
3.4 Based on SURF features and KD-tree fast search, match the feature vectors of PL against QL, and likewise PR against QR.
3.5 Define the following evaluation function for the feature matching of PL with QL and of PR with QR, to evaluate how good a matching result is; the smaller the function value, the better.
[The evaluation function and its symbols appear only as images (BDA0001675198580000071 to BDA0001675198580000075) in the source.]
In the formula: N is the number of matched points, the first two image symbols are the matched pixel points in the two images, and the last two are the coordinates of the midpoints of the two images (used for coordinate normalization); C is a constant.
Step 4: From the two environment images Qmj and Qmk with the best matching results with Pm', look up the reference-point coordinates Dmj and Dmk corresponding to their indices and obtain the positioning result of Pm' by weighted calculation; then, from the coordinates of each image collector (one eye of each sensor) relative to the geometric center of the vehicle and the positioning result of each sensor, calculate the geographic coordinates of the geometric center of the vehicle.
The embodiment calculates the geographic coordinates of the geometric center of the vehicle, and the specific process is as follows:
4.1 Suppose the two environment images with the best matching results with PL are QLj and QLk, corresponding to rowj and rowk in the row set; weight the longitude-latitude information of rowj and rowk, using the reciprocal of the evaluation-function value as the weight, to obtain the positioning result of PL; PR is handled in the same way.
4.2 From the positioning results of PL and PR and the coordinates of PL1, PL2, PR1 and PR2 relative to the geometric center of the vehicle, calculate the geographic coordinates of the geometric center of the vehicle.
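Step 4.2 reduces to removing each sensor's known mounting offset from its fix and averaging the resulting centre estimates; a toy sketch in a local metric frame (the mounting offsets and fixes below are invented):

```python
def vehicle_center(fixes, offsets):
    """fixes: per-sensor positioning results [(x, y), ...] in a local
    metric frame; offsets: each sensor's mounting position relative to
    the vehicle's geometric centre, in the same frame.  Removing the
    offset turns each sensor fix into an estimate of the centre; the
    estimates are then averaged."""
    est = [(fx - ox, fy - oy) for (fx, fy), (ox, oy) in zip(fixes, offsets)]
    n = len(est)
    return (sum(x for x, _ in est) / n, sum(y for _, y in est) / n)

# Hypothetical layout: L1/L2 mounted 1 m left of centre, R1/R2 1 m right,
# each pair 0.5 m apart fore-aft; the fixes are consistent with a centre
# at (5, 10).
sensor_offsets = [(-1.0, 0.25), (-1.0, -0.25), (1.0, 0.25), (1.0, -0.25)]
sensor_fixes = [(4.0, 10.25), (4.0, 9.75), (6.0, 10.25), (6.0, 9.75)]
print(vehicle_center(sensor_fixes, sensor_offsets))  # -> (5.0, 10.0)
```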
Step 5: Correct the positioning position in the normal direction of the track using the left-right offset-distance information from the offset identification to obtain the final positioning result, and add the direction angle αm of the line connecting the front and rear track points at this position to the left-right offset angle Δα from the offset identification to obtain the actual direction angle at the current positioning point: αpm = αm + Δα (m = b, r).
In this embodiment the positioning position in the normal direction of the trajectory is corrected using the left-right offset-distance information from the offset identification to obtain the final positioning result, where:
Input: the current images;
Output: the left-right offset angle value and the left-right offset distance values;
① Angular accuracy: assuming the lower limit of the length of a specific linear object is L, the minimum unit of angle recognition is given by a formula that appears only as an image (BDA0001675198580000081) in the source.
② The minimum unit of the distance value, for road width W and vehicle width w at a resolution of a × b, is given by a formula that appears only as an image (BDA0001675198580000082) in the source.
The specific process is as follows:
5.1 The cameras record real-time images ML1, ML2, MR1, MR2 at a frequency of about 10 Hz.
5.2 Based on the Hough transform, recognize the angles and positions of the "objects" in ML1, ML2, MR1, MR2 (i.e., the midpoint positions of the line segments and hence the lateral distances of the midpoints relative to the road center line); linear objects are accepted under the conditions that the absolute value of the slope is smaller than tan 5° and the length is larger than 50 pixels.
5.3 Using the lengths of the "objects" in the four images as weights, compute the length-weighted mean of the object angles as the left-right offset angle Δα; a negative value indicates left deviation and a positive value right deviation. The specific calculation is:
Δα_L = Σᵢ(lᵢ·αᵢ) / Σᵢ(lᵢ) over the segments of the two left-side images, Δα_R likewise over the two right-side images, and Δα = (Δα_L + Δα_R)/2.
Similarly, the offset distance is obtained as Δβ = Δβ_L + Δβ_R.
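The length-weighted combination of 5.3 can be sketched as below. The `(angle, length)` tuple format is an assumption; averaging the left-side and right-side weighted angles follows the Δα = (Δα_L + Δα_R)/2 form stated in claim 4.

```python
def length_weighted_angle(objs):
    """Length-weighted mean angle of detected 'objects'.

    objs: list of (angle, length) tuples from the segment detector."""
    total_len = sum(length for _, length in objs)
    return sum(angle * length for angle, length in objs) / total_len

def offset_angle(left_objs, right_objs):
    """Δα = (Δα_L + Δα_R) / 2; negative = left deviation, positive = right."""
    return 0.5 * (length_weighted_angle(left_objs) +
                  length_weighted_angle(right_objs))

# Left cameras: 0.02 and 0.04 rad with equal weights -> Δα_L = 0.03;
# right cameras: 0.01 rad -> Δα_R = 0.01; overall Δα = 0.02 rad.
print(offset_angle([(0.02, 100.0), (0.04, 100.0)], [(0.01, 50.0)]))
```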
5.4 Using the lengths of the "objects" in the four images as weights, compute the y-components of the weighted "object" position values in the left and right images, L_y and R_y respectively.
5.5 According to the relative relation between the actual space and the image pixel space, map L_y and R_y into the real-world lateral offsets Δβ_L and Δβ_R [the two equation images are not reproduced in the source], and take
Δβ = Δβ_L + Δβ_R
as the left-right offset distance; a negative value indicates left deviation and a positive value right deviation (the offset angle satisfies Δα < 5° by the segment-filtering condition). The position deviation of the vehicle relative to the target track in the βγ coordinate system is denoted (Δβ, Δγ), where the deviation along the vehicle normal direction, i.e., the left-right offset of the driving direction, is Δβ.
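Step 5.5's pixel-to-world mapping can be illustrated with a simple linear scale. The metres-per-pixel factor below (road width divided by image width) is a hypothetical stand-in for the unreproduced formulas, not the patented mapping.

```python
def offset_distance_m(ly_px, ry_px, img_width_px, road_width_m):
    """Map the weighted pixel offsets L_y and R_y into metres with an
    assumed uniform scale, then combine them: Δβ = Δβ_L + Δβ_R.
    Negative = left deviation, positive = right deviation."""
    metres_per_px = road_width_m / img_width_px  # hypothetical scale factor
    return (ly_px + ry_px) * metres_per_px

# 10 px and 5 px offsets on a 640-px-wide image of a 6.4 m road
# combine to a lateral deviation of 0.15 m.
print(offset_distance_m(10.0, 5.0, 640, 6.4))
```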
The embodiments described above are presented to enable a person having ordinary skill in the art to make and use the invention. It will be readily apparent to those skilled in the art that various modifications to the above-described embodiments may be made, and the generic principles defined herein may be applied to other embodiments without the exercise of inventive faculty. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art based on the disclosure of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. A low-speed automatic driving vehicle positioning and deviation rectifying method on a designated road based on image matching, comprising the following steps:
(1) setting a plurality of reference points at fixed-distance intervals on a designated road, driving the vehicle along the road at low speed, and using binocular vision sensors, an odometer and a GPS to acquire, at each reference point, the environment images on both sides of the road, the driving mileage and the longitude and latitude of the vehicle, so as to construct an image-longitude/latitude data set and a mileage-longitude/latitude data set; the specific implementation is: installing one binocular vision sensor on each side of the vehicle body and collecting the environment images on both sides of the road at each reference point; applying histogram equalization to the images to adjust their saturation and brightness; then combining the processed environment images at each reference point with the longitude and latitude into samples stored in the image-longitude/latitude data set, and combining the driving mileage corresponding to each reference point with the longitude and latitude into samples stored in the mileage-longitude/latitude data set; the two data sets have the same number of samples, namely the number of reference points;
(2) acquiring the driving mileage of the vehicle in real time, and extracting similar environment images from the image-longitude/latitude data set according to the current driving mileage of the vehicle to form an image set Ω; the specific implementation is: first extracting from the mileage-longitude/latitude data set the samples corresponding to the n reference points closest in mileage, and averaging their longitudes and latitudes to obtain the current longitude and latitude Z of the vehicle; simultaneously extracting the samples corresponding to the same n reference points from the image-longitude/latitude data set, the environment images of these samples forming the image set Ω, where n is a natural number greater than 1;
(3) obtaining the environment images on both sides of the current road with the binocular vision sensors, giving 4 images P_1~P_4 in total; for any one image P_i, performing feature extraction and then feature matching with each image in the image set Ω, where i is a natural number and 1 ≤ i ≤ 4;
(4) extracting from the image set Ω the two images Q_1 and Q_2 with the optimal feature-matching results against image P_i, looking up in the image-longitude/latitude data set the longitudes and latitudes Z_1 and Z_2 of the corresponding reference points, and taking the weighted sum of Z_1 and Z_2 as the positioning result of image P_i; traversing all 4 images P_1~P_4 to obtain their positioning results, and combining them with the positional relation between the binocular vision sensors and the geometric center of the vehicle to obtain the geographic coordinate X of the geometric center of the current vehicle;
(5) performing line-segment analysis on images P_1~P_4, obtaining the offset angle Δα and the offset amount Δβ of the current vehicle through offset recognition, and from Δα and Δβ further calculating and judging the offset state, the offset distance ΔL and the actual direction angle α_pm of the current vehicle.
2. The low-speed automatic driving vehicle positioning and deviation rectifying method according to claim 1, characterized in that the specific implementation of step (3) is: for image P_i, cropping the middle image region of 1/3 of the image width, obtaining the feature vector of this region with the SURF feature extraction algorithm, and, based on the feature vector, performing feature matching in turn with the images in the image set Ω using a Kd-Tree-based FLANN fast search algorithm.
3. The method according to claim 1, characterized in that: in step (4), the two images Q_1 and Q_2 with the optimal feature-matching results against image P_i are the two images in the image set Ω with the largest number of feature-point matches with image P_i; in the weighted sum of Z_1 and Z_2, the weights of Z_1 and Z_2 are determined by the numbers of feature-point matches between the corresponding images Q_1, Q_2 and image P_i, and the larger the number of feature-point matches, the larger the corresponding weight.
4. The low-speed automatic driving vehicle positioning and deviation rectifying method according to claim 1, characterized in that the specific process of calculating the offset angle Δα and the offset amount Δβ of the current vehicle in step (5) is as follows:
5.1 performing line-segment detection on images P_1~P_4 based on the Hough transform to obtain the angle α and the midpoint position of each line segment in the images, where a detected line segment must satisfy |tan α| < tan 5° and have a length greater than 50 pixels;
5.2 using the normalized segment lengths as weights, taking the weighted sum of the angles α of all line segments in the two left-side images of P_1~P_4 to obtain Δα_L, and the weighted sum of the angles α of all line segments in the two right-side images to obtain Δα_R; then averaging Δα_L and Δα_R to obtain the offset angle Δα;
5.3 obtaining, from the midpoint position of each line segment, the lateral distance β of the segment midpoint relative to the road center line in the image; using the normalized segment lengths as weights, taking the weighted sum of the lateral distances β of all line segments in the two left-side images of P_1~P_4 to obtain Δβ_L, and the weighted sum over the two right-side images to obtain Δβ_R; the offset amount is then Δβ = Δβ_L + Δβ_R.
5. The method as claimed in claim 4, characterized in that: in step (5), if the offset angle Δα is positive, the current vehicle is judged to deviate to the right; if Δα is negative, the current vehicle is judged to deviate to the left; and the offset distance ΔL is determined according to the following equation:
[equation image not reproduced in the source]
wherein H is the road width and h is the vehicle width.
6. The low-speed automatic driving vehicle positioning and deviation rectifying method according to claim 1, characterized in that step (5) adds the offset angle Δα to the direction angle α_m of the line connecting the two track points nearest to the vehicle to obtain the actual direction angle α_pm of the current vehicle.
CN201810522641.1A 2018-05-28 2018-05-28 Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching Active CN108759823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810522641.1A CN108759823B (en) 2018-05-28 2018-05-28 Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching

Publications (2)

Publication Number Publication Date
CN108759823A CN108759823A (en) 2018-11-06
CN108759823B true CN108759823B (en) 2020-06-30

Family

ID=64002824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810522641.1A Active CN108759823B (en) 2018-05-28 2018-05-28 Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching

Country Status (1)

Country Link
CN (1) CN108759823B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11049339B2 (en) 2019-03-29 2021-06-29 Wipro Limited Method and system for validating odometer performance in an autonomous vehicle in real-time
CN112347935B (en) * 2020-11-07 2021-11-02 的卢技术有限公司 Binocular vision SLAM-based automatic driving vehicle positioning method and system
CN113627270B (en) * 2021-07-19 2023-05-26 成都圭目机器人有限公司 Highway mileage positioning method based on image stitching and marking detection
CN113949999B (en) * 2021-09-09 2024-01-30 之江实验室 Indoor positioning navigation equipment and method
CN114128461A (en) * 2021-10-27 2022-03-04 江汉大学 Control method of plug seedling transplanting robot and plug seedling transplanting robot
CN114282033B (en) * 2022-03-02 2022-05-27 成都智达万应科技有限公司 Deviation correction and intelligent road disease reporting system based on GPS
CN114593739B (en) * 2022-03-17 2023-11-21 长沙慧联智能科技有限公司 Vehicle global positioning method and device based on visual detection and reference line matching
CN117351338B (en) * 2023-12-04 2024-02-13 北京理工大学前沿技术研究院 Automatic pile correction method, system and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761242A (en) * 2016-01-27 2016-07-13 北京航空航天大学 Blind person walking positioning method based on computer binocular vision and inertial measurement
CN106504288A (en) * 2016-10-24 2017-03-15 北京进化者机器人科技有限公司 A kind of domestic environment Xiamen localization method based on binocular vision target detection
CN106840148A (en) * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guide method based on binocular camera under outdoor work environment
CN107301654A (en) * 2017-06-12 2017-10-27 西北工业大学 A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN107796391A (en) * 2017-10-27 2018-03-13 哈尔滨工程大学 A kind of strapdown inertial navigation system/visual odometry Combinated navigation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140257696A1 (en) * 2013-03-07 2014-09-11 Kamal Zamer Travel Pattern Analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Retina Based Biometric Identification Using SURF and ORB Feature Descriptors; Shalaka Haware et al.; Retina-identification-haware2017; 2017-08-31; pp. 1-6 *
Instrument positioning method based on locally adaptive kernel regression; Du Yeyu et al.; Journal of Data Acquisition and Processing (数据采集与处理); 2016-05-31; Vol. 31, No. 3; pp. 490-499 *


Similar Documents

Publication Publication Date Title
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN110859044B (en) Integrated sensor calibration in natural scenes
CN111436216B (en) Method and system for color point cloud generation
CN109631887B (en) Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope
CN107704821B (en) Vehicle pose calculation method for curve
CN102208012B (en) Landscape coupling reference data generation system and position measuring system
JP5162849B2 (en) Fixed point position recorder
EP3640681B1 (en) Method and apparatus for estimating position
CN108885106A (en) It is controlled using the vehicle part of map
US20220270358A1 (en) Vehicular sensor system calibration
CN104700414A (en) Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN110332945B (en) Vehicle navigation method and device based on traffic road marking visual identification
JP6758160B2 (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
US11367213B2 (en) Method and apparatus with location estimation
WO2022147924A1 (en) Method and apparatus for vehicle positioning, storage medium, and electronic device
CN113566779A (en) Vehicle course angle estimation method based on linear detection and digital map matching
WO2020113425A1 (en) Systems and methods for constructing high-definition map
US20220404170A1 (en) Apparatus, method, and computer program for updating map
CN113781645A (en) Indoor parking environment-oriented positioning and mapping method
CN115597592B (en) Comprehensive positioning method applied to unmanned aerial vehicle inspection
US20230118134A1 (en) Methods and systems for estimating lanes for a vehicle
CN115127547B (en) Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning
US20230394679A1 (en) Method for measuring the speed of a vehicle
JP7334489B2 (en) Position estimation device and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant