CN107167826B - Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving - Google Patents


Info

Publication number
CN107167826B
CN107167826B (application CN201710205430.0A)
Authority
CN
China
Prior art keywords
image
target
target object
distance
vehicle
Prior art date
Legal status
Active
Application number
CN201710205430.0A
Other languages
Chinese (zh)
Other versions
CN107167826A (en)
Inventor
苏晓聪
朱敦尧
陶靖琦
Current Assignee
WUHAN KOTEI TECHNOLOGY Corp
Original Assignee
WUHAN KOTEI TECHNOLOGY Corp
Priority date
Filing date
Publication date
Application filed by WUHAN KOTEI TECHNOLOGY Corp filed Critical WUHAN KOTEI TECHNOLOGY Corp
Priority to CN201710205430.0A priority Critical patent/CN107167826B/en
Publication of CN107167826A publication Critical patent/CN107167826A/en
Application granted granted Critical
Publication of CN107167826B publication Critical patent/CN107167826B/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/40 Correcting position, velocity or attitude
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C 21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manufacturing & Machinery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Navigation (AREA)

Abstract

The invention provides a vehicle longitudinal positioning method based on variable-grid image feature detection in automatic driving. Given a front target obtained from the data output by a high-precision navigation system and the target distance calculated from the road in high-precision navigation, an ORB feature extraction algorithm based on variable grid areas (which carry scale information) retrieves the specific front target in a vehicle-mounted binocular vision system and outputs the distance from the vision system to that target. From this distance and the installation position of the binocular system in the vehicle, the vehicle trajectory in high-precision navigation can be corrected, improving the longitudinal positioning accuracy of the vehicle in automatic driving.

Description

Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
Technical Field
The invention belongs to the technical field of automatic driving of automobiles and relates to a system and a method for positioning an automatically driven vehicle, in particular to a system and a method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection.
Background
The automatic driving system of an intelligent vehicle relies on high-precision map data to complete global and local path planning dynamically and economically according to the specified destination, forming its own navigation trajectory, and to complete the various control actions of the unmanned vehicle safely and conveniently. During operation, the system must know the vehicle's high-precision position accurately and in real time so that decision-level judgments can be made about the current driving-state control.
The conventional positioning method in automatic driving generally combines a GNSS (Global Navigation Satellite System) with an IMU (Inertial Measurement Unit). GNSS achieves good positioning accuracy in open suburban terrain, but in complex urban environments the multipath reflection of signal propagation easily causes positioning errors of several meters. The IMU, generally built from instruments such as gyroscopes and multi-axis acceleration sensors, detects its current attitude and acceleration in real time, from which vehicle motion over a limited distance can be accurately reconstructed; however, dead reckoning with an IMU accumulates error, and the positioning accuracy degrades more severely as time increases. By fusing and interpolating GNSS and IMU data, a better high-precision positioning result can be achieved.
However, if GNSS+IMU alone is used for high-precision positioning in an automatic driving system, safe and accurate execution of control actions during automatic decision-making cannot be guaranteed; additional positioning methods and sensors are required to assist. Typically, laser point clouds acquired by LiDAR (Light Detection and Ranging) are matched to localize the vehicle in the local environment, and multiple cameras are used for object detection and recognition, depth calculation, motion estimation and so on to complete localization. These two schemes, one using high-cost LiDAR and one using low-cost multi-camera setups, together with the conventional high-cost GNSS+IMU scheme, mutually correct each other's errors and can provide high-precision positioning information for automatic driving.
Existing camera-based auxiliary positioning for automatic driving usually computes the pose transformation of the camera to form a visual odometer, which can accurately determine the vehicle pose within a certain time window. However, a visual odometer based on a binocular camera must rectify, register and compute the disparity map of the left and right images in real time and therefore cannot output at high frequency: for images of 1600 x 1200 pixels, the frame rate of binocular depth calculation is below 10 fps (frames per second). With a low image-processing frame rate, the output frequency of camera-derived positioning is also low, and fusing it with other positioning sources such as GNSS+IMU raises additional problems of time synchronization and linear/nonlinear interpolation, which affect the reliability, real-time performance and accuracy of high-precision positioning. A multi-camera image processing scheme that achieves real-time, practical and robust lane-level positioning accuracy therefore occupies a central position in intelligent traffic detection and automatic driving.
Disclosure of Invention
In view of the defects of the prior art, the technical problem to be solved by the invention is to provide a system and a method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection. An ORB feature extraction algorithm based on variable grid areas (which carry scale information) retrieves a specific front target in a vehicle-mounted binocular vision system and outputs the distance from the vision system to that target; from this distance and the installation position of the binocular system in the vehicle, the vehicle trajectory in high-precision navigation is corrected, improving the longitudinal positioning accuracy of the vehicle in automatic driving.
The technical scheme adopted by the invention is as follows:
a vehicle longitudinal positioning system based on variable-grid image feature detection in automatic driving is divided by functional module into a high-precision navigation system, a binocular camera, an image preprocessor, a target detector, a target tracker and a target distance calculator;
the high-precision navigation system is used for carrying out map retrieval in real time, sending the name ID of a target object appearing or about to appear in front of the vehicle body to the target detector according to the current position of the vehicle body, and carrying out longitudinal distance correction on high-precision navigation according to a specific target distance;
the binocular camera comprises a left-eye camera and a right-eye camera and is used for acquiring video images in front of the advancing vehicle in real time and outputting them to the image preprocessor for preprocessing;
the image preprocessor is used for performing distortion correction, epipolar-constraint rectification, graying and distribution of the images acquired by the binocular camera, according to the calibrated intrinsic and extrinsic parameters of the binocular camera;
the target detector is used for receiving the name ID of a specific target object sent by the high-precision navigation system and performing variable-grid image feature matching on the left-eye image among the gray images distributed by the image preprocessor; the target-object variable-grid feature file is generated offline;
the target tracker is used for performing image-area-based tracking according to the images input by the image preprocessor and the detection rectangle of the specific target object found by the target detector: it takes the specific target-area image and part of its neighborhood as a convolution template, finds the area of highest convolution response within each newly input scene image in subsequent frames, and updates the current convolution template with that highest-response area; the highest-response area of each frame is continuously output as the tracking position of the target;
and the target distance calculator is used for calculating, through the epipolar geometric constraint, the perpendicular (depth) distance between the binocular camera and the target object, thereby obtaining the distance from the vehicle body to the specific target object at the time the current frame was acquired.
The method for detecting the longitudinal positioning of the vehicle in automatic driving based on the image characteristics of the variable grids comprises the following steps:
S1, installing a forward-facing binocular camera on an automatically driven vehicle equipped with a high-precision map and a GNSS+IMU system, and calibrating the intrinsic and extrinsic parameters of the binocular camera;
S2, according to specific target objects in scenes in the high-precision map database and video frames collected by the camera while driving on actual roads, extracting multi-frame images containing the specific target areas to form a target-object extraction image frame sequence, extracting the target-object features, and producing a variable-grid-based feature description file;
s3, in the preparation stage of starting the automatic driving vehicle, the initialization work of each module of the high-precision navigation system, the binocular camera, the image preprocessor, the target detector, the target tracker and the target distance calculator is completed in sequence;
S4, in the running stage on the automatic driving route, high-precision navigation triggers the detection process; the data input to the system are the name ID of the target object in the high-precision map, the approximate distance calculated in navigation, and the preprocessed image frame sequence from the binocular camera; the distance to the specific target object in the scene is output following the flow 'detection' → 'tracking' → 'distance output';
and S5, inputting the distance calculated and output by the camera into the high-precision navigation module to assist in executing the longitudinal correction process.
Compared with the prior art, the invention has the following advantages:
1. For the dynamic, complex road conditions faced in automatic driving, a low-cost vision sensor effectively corrects the longitudinal error of a GNSS+IMU positioning system and improves the positioning accuracy of the ego vehicle;
2. The computational complexity of the traditional binocular vision system is reduced, so that the vision-sensor part of the positioning meets the system's real-time output-frequency requirement;
3. In the actual target detection and tracking process, most of the computation uses image convolution operations, which makes the method easy to port and hardware-accelerate on embedded+GPU systems in engineering applications.
Drawings
FIG. 1 is a block diagram of the system and method for vehicle longitudinal positioning based on variable-grid image feature detection in automatic driving according to the invention;
FIG. 2 shows the result of manually labeling the area in which a specific target object is located, corresponding to step S2 of the disclosure;
FIG. 3 is a schematic diagram of the variable-grid division result obtained with the binary-search division method described in step S42;
FIG. 4 is a schematic diagram of feature matching for detecting and locating a target object: the left side is the target object to be detected and the right side is a test scene; through feature point detection and matching, feature point correspondences between the two images are obtained, and by computing the homography mapping of these correspondences, the detection result of the target object and the detection area where the target may be located are obtained in the test scene;
FIG. 5 is a schematic diagram illustrating the operation of inputting images from the left and right eye cameras into the target detector and successfully completing the detection and extraction of the target object;
FIG. 6 illustrates how the edge-proximity degree is determined: an outer bounding rectangle ABCD (the outer edge of the image) and, inside it, the detection-frame rectangle MNOP of the specific target object, whose four sides are at vertical distances dis_T, dis_R, dis_B and dis_L from the outer rectangle;
FIG. 7 shows the matching results for sequence numbers 0, 1 and 2 of the variable-grid-based image feature matching algorithm; the left side is a redrawing of the detection templates with different sequence numbers, the right side is a scene image frame collected by the actual camera, and the line segments connecting left to right are the drawn feature-matching point correspondences;
FIG. 8 is the tracking sequence of 20 consecutive image frames; the top-left image is the first frame together with the target rectangle to be tracked, and the remaining images are the tracking results of the subsequent frames.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides a variable-grid image feature detection algorithm. Using the images of the vehicle's surroundings acquired by a camera installed in the automatic driving system, the target information for a specific scene given by the high-precision navigation system, and the target distance calculated from the road in high-precision navigation, a feature extraction algorithm based on variable grid-area sizes (carrying scale information) retrieves the specific front target in the vehicle-mounted binocular vision system and outputs the distance from the vision system to the front target. From this distance information and the physical installation position of the binocular system in the vehicle, the vehicle trajectory in high-precision navigation can be corrected, improving the longitudinal positioning accuracy of the vehicle in automatic driving.
The specific scene objects in the high-precision map are fixed scene objects such as various traffic information signs. Because such objects generally already exist in the high-precision map database, a data relationship can be agreed in advance in the automatic navigation system: a series of variable-grid feature descriptions belonging to a specific target object, keyed by the name of the corresponding object in the high-precision map data. While the automatically driven vehicle is moving, the high-precision navigation system predicts the name of a target object that may appear in the field of view of the forward vehicle-mounted camera and sends this information to the target detection module, which loads the variable-grid feature file prefabricated in the database to complete the detection and matching of the specific target.
For longitudinal auxiliary positioning in automatic driving, the invention provides a variable-grid image feature detection and extraction method with the following specific steps:
S1, installing a forward-facing binocular camera on an automatically driven vehicle equipped with a high-precision map and a GNSS+IMU system, and calibrating the intrinsic and extrinsic parameters of the binocular camera;
In this step, the device attributes and parameters to be recorded in detail are: the installation position of the left-eye camera relative to the vehicle-body coordinate system, the intrinsic parameters M1, D1 of the left-eye camera, the intrinsic parameters M2, D2 of the right-eye camera, and the extrinsic parameters R, T between the left and right cameras. Here M1 and M2 each hold a camera's focal lengths fx, fy and principal point location cx, cy in the form of a 3 x 3 matrix:

    M_i = [ fx  0   cx ]
          [ 0   fy  cy ]
          [ 0   0   1  ]

D1 and D2 respectively represent the imaging distortion coefficients of the left and right cameras; the extrinsic parameters R, T describe the rotation and translation of the right camera's position relative to the left camera.
The intrinsic parameters of the binocular camera can correct the imaging distortion of the cameras, and the extrinsic parameters of the left and right cameras are applied in disparity and depth calculation. On one hand, the intrinsic parameters correct the tangential and radial distortion caused by lens mounting, eliminating as far as possible the distortion error in which edges that should be straight degrade into arcs in the imaging result; on the other hand, the intrinsic and extrinsic parameters give the projection relationship between a point (x', y') in the image plane of the right-eye camera and a 3D point (X, Y, Z) in the scene:

    s * [x', y', 1]^T = M2 * [R | T] * [X, Y, Z, 1]^T

where s represents only a change of scale.
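To make the roles of M2, R and T concrete, the following NumPy sketch projects a 3D point expressed in left-camera coordinates into the right image using the relation above. The numeric values (fx, fy, cx, cy, the baseline) are placeholders, not calibrated values:

    import numpy as np

    # Placeholder intrinsics for the right camera (illustrative values only).
    fx, fy, cx, cy = 1400.0, 1400.0, 800.0, 600.0
    M2 = np.array([[fx, 0.0, cx],
                   [0.0, fy, cy],
                   [0.0, 0.0, 1.0]])

    # Placeholder extrinsics of the right camera relative to the left camera:
    # identity rotation and a 12 cm baseline along x.
    R = np.eye(3)
    T = np.array([[-0.12], [0.0], [0.0]])

    def project_to_right_image(point_3d):
        """Apply s * [x', y', 1]^T = M2 * [R | T] * [X, Y, Z, 1]^T to a
        3D point given in left-camera coordinates (meters)."""
        RT = np.hstack([R, T])            # 3x4 matrix [R | T]
        p = np.append(point_3d, 1.0)      # homogeneous [X, Y, Z, 1]
        uvs = M2 @ RT @ p                 # [s*x', s*y', s]
        return uvs[:2] / uvs[2]           # divide out the scale s

    print(project_to_right_image(np.array([0.5, 0.0, 10.0])))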
S2, extracting multi-frame images containing specific target areas according to specific target objects in scenes in the high-precision map database and video frames collected by a camera during running in an actual road to form a target object extraction image frame sequence, extracting target object features, and making a feature description file based on a variable grid;
the image for extracting the target object features is derived from a video frame acquired by the left eye camera. For the video frames for feature extraction of a specific object, an object extraction sequence can be formed by acquiring 3 frames of images according to the roughly estimated near, middle and far. Particularly, the invention supports more frames of target object images to carry out feature extraction, but in practical application, it is better to carry out feature extraction on video frames not larger than 3 frames in consideration of feature matching efficiency.
In this step, a specific object in the scene may be detected and/or framed in the image frames by a specific object detection algorithm and/or manual marking, and the size of the framing grid must be recorded in the description file. Because the image sequence is collected at different distances to the target object, the grid size reflects the distance from the camera (the automatically driven vehicle) to the target at acquisition time, as well as the detail and feature-point sequence the target presents inside the grid; that is, the sequence contains rich scale information.
In particular, in the calculation process of variable-grid image feature extraction, the feature description algorithm used in the invention is the classic ORB (Oriented FAST and Rotated BRIEF) feature descriptor. Further, the storage content of the serialized variable-grid feature file of a target object is defined as follows (a sketch of one possible in-memory layout follows the list):
a) the name ID of the object which can be searched in the high-precision map database; typically, a string of numbers;
b) a sequence number assigned to each acquired image frame according to the distance to the target object at acquisition time; sequence numbers are assigned starting from 0, so when 3 frames of the current target object are acquired, the image sequence numbers 0, 1 and 2 correspond to it;
c) In image frames corresponding to different serial numbers, a key feature point sequence in a specific target object grid refers to a feature point subset which can be matched among multiple frames, namely a sequence consisting of feature points representing the same point in the image frame sequence;
d) ORB feature descriptions of feature points corresponding to the sequence of feature points;
e) the retrieval-grid information of the specific target object in the image, generally comprising the image coordinates of the grid's upper-left corner and the grid's width and height in the image; for manual marking, the grid is the maximum bounding rectangle of the target object in the frame image; for a specific target detection algorithm, the grid corresponds to the rectangular detection window output when the detection algorithm is run on the current frame;
f) the distance to the target object at acquisition time, according to the current conventional GNSS and IMU navigation system.
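As an illustration only, one possible in-memory layout of a feature-file entry is sketched below; the field names are hypothetical, since the patent fixes the content (items a through f) but not a concrete serialization format:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class VariableGridFeatureEntry:
        """One entry of the serialized variable-grid feature file (items a-f).
        Field names are illustrative, not prescribed by the patent."""
        target_id: str                         # a) name ID searchable in the HD map database
        sequence_no: int                       # b) 0, 1, 2, ... ordered by capture distance
        key_points: List[Tuple[float, float]]  # c) cross-frame-matchable key feature points (x, y)
        orb_descriptors: bytes                 # d) ORB descriptions of those feature points
        grid_rect: Tuple[int, int, int, int]   # e) (left, top, width, height) of the retrieval grid
        capture_distance_m: float              # f) GNSS+IMU distance to the target at capture time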
Further, the variable-grid image feature extraction can be completed offline, producing a feature template library file for the specific target objects in the high-precision map data.
S3, in the preparation stage of starting the automatic driving vehicle, the initialization work of each module is completed in sequence;
according to the functional module division of the system, all module initialization tasks of a high-precision navigation system, a binocular camera, an image preprocessor, a target detector, a target tracker and a target distance calculator need to be completed in sequence.
The image preprocessor must complete distortion correction, epipolar-constraint rectification, graying and distribution of the images acquired by the binocular camera, so its initialization depends on the intrinsic and extrinsic camera parameters calibrated in step S1. Further, graying of the color image is computed as 0.299*r + 0.587*g + 0.114*b, where b, g and r represent the pixel intensities of the blue, green and red channels of each pixel.
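For reference, a minimal sketch of this conversion over a NumPy image array, assuming OpenCV's BGR channel order:

    import numpy as np

    def to_gray(bgr):
        """Graying used by the preprocessor: Gray = 0.299*r + 0.587*g + 0.114*b,
        for an image array of shape (H, W, 3) in BGR channel order."""
        b = bgr[..., 0].astype(np.float32)
        g = bgr[..., 1].astype(np.float32)
        r = bgr[..., 2].astype(np.float32)
        return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)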
The task of the target detector is, upon receiving a specific target-object name sent by the high-precision navigation system, to perform variable-grid image feature matching on the left-eye image among the gray images distributed by the image preprocessor. The target detector therefore initializes from the description file generated offline in step S2.
The target tracker performs image-area-based tracking; since this module uses the FFTW (Fastest Fourier Transform in the West) discrete Fourier transform library, its preset FFTW parameters must be read during the initialization phase.
The target distance calculator is the core module that performs depth computation based on the binocular camera; the intrinsic and extrinsic binocular-camera parameters on which it depends are initialized here.
S4, in the running stage on the automatic driving line, the whole system needs high-precision navigation to trigger the detection process, the data source needing to be input into the system is the name of the target object in the high-precision map, the approximate distance calculated in the navigation, and the image frame sequence input by the binocular camera through the preprocessing operation, and the distance output of the specific target object in the scene is carried out according to the flow of 'detection' — 'tracking' — 'distance output'.
Further, the step S4 includes the following sub-steps:
S41, the high-precision navigation system sends the name ID of a target object entering the map retrieval range, and the feature description file for that object, initialized in step S3, is pre-read into the target detector; specifically, the corresponding grid information and acquisition-time distance information must be read in advance, i.e., the storage content definitions e and f of step S2.
S42, judging the grid size information used by the variable grid at the moment by using the target object distance value estimated when the high-precision navigation system is triggered and the target object feature description file content pre-read in the step S41, and dividing the image collected by the left eye camera output by the image preprocessor by using the grid size by using a dividing method based on binary search to obtain the current variable grid; the calculation process is as follows:
(1/2)^(n+1) * Len_Global <= Len_Block <= (1/2)^n * Len_Global
where n is the number of binary-search halvings, computed iteratively (i.e., the average number of searches); Len_Global is the length or width of the preprocessed camera image; Len_Block is the length or width of the grid size in the scale information used by the current variable grid. The value of n satisfying the formula is found by iterating from n = 1. Note that n is computed separately for the image's y and x directions, giving n_y and n_x; the preprocessed left-eye input image is then divided evenly into n_y parts along its length and n_x parts along its width, yielding the current variable image grid.
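A minimal sketch of this division, under the reading above; the helper names grid_division_count and split_into_cells are ours, not part of the patent:

    def grid_division_count(len_global, len_block):
        """Find n with (1/2)**(n+1) * len_global <= len_block <= (1/2)**n * len_global,
        iterating from n = 1 as described in step S42."""
        n = 1
        while not ((0.5 ** (n + 1)) * len_global <= len_block <= (0.5 ** n) * len_global):
            n += 1
            if n > 32:                       # guard against inconsistent inputs
                raise ValueError("no n satisfies the bound")
        return n

    def split_into_cells(image, n_y, n_x):
        """Divide the preprocessed left image into n_y x n_x roughly equal cells."""
        h, w = image.shape[:2]
        cells = []
        for i in range(n_y):
            for j in range(n_x):
                y0, y1 = i * h // n_y, (i + 1) * h // n_y
                x0, x1 = j * w // n_x, (j + 1) * w // n_x
                cells.append(image[y0:y1, x0:x1])
        return cells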
S43, sequentially traversing the image areas obtained in the previous step and divided based on the variable grids, performing ORB feature extraction on each area, and completing detection and matching with the key feature point sequence in the target object feature description file and the ORB feature description of the feature points (namely the storage content definitions c and d loaded in the step S41); if the matching is successful, performing the same detection and matching on the image collected by the right-eye camera output by the image preprocessor, and jumping to the step S44; otherwise, the image preprocessor reads in the left eye image of the next frame and executes the step again;
further, the matching process specifically includes the following: firstly, judging whether two ORB feature points are similar or not through the difference distance described by the ORB feature points, firstly completing one-time feature point matching between the feature point of each region and the template feature point loaded in the step S41 to obtain a sequence of feature point matching pairs from a detection region to a template region, and then completing the same operation from the template region to the detection region in a reverse manner to obtain a sequence of cross-verified feature point matching pairs; the detection area refers to an area where a specific target object acquired by a left eye camera is located, and the template area refers to an area where a characteristic target object in a high-precision map database is located; secondly, obtaining the mapping relation of the characteristic points existing between the detection area and the template area from the two groups of matching pair sequences, and defining the reversible transformation existing between the forward mapping and the reverse mapping as a remapping relation; and then detecting whether the number of the characteristic point pairs reaches a threshold value capable of solving a mapping matrix in the current matching process, calculating the mapping matrix once the number of the characteristic point pairs exceeds 4 characteristic point pairs, and then detecting the remapping relationship, wherein if the number of the characteristic point pairs exceeds 4 characteristic point pairs, the matching process is successful, and otherwise, the matching process fails.
Further, if the matching process keeps failing until the high-precision navigation component retrieves from the map data that the vehicle has left the visible area of the specific target object, the current detection task is declared failed and the calculation process does not continue.
S44, respectively taking the left and right eye image frames and the left and right eye image detection rectangular frames which successfully complete the target object detection as parameters, and completing the tracker of the specific areas of the left eye image and the right eye image, wherein the parameters required to be transmitted in the tracker initialization process are the left and right eye detection rectangular frames and the corresponding left and right eye images successfully obtained in the previous step, once the tracker initialization is successful, the left and right eye image frames output by the image preprocessor are continuously input to the corresponding target trackers in the subsequent process, the target trackers take the specific target area image and the neighborhood partial images thereof as convolution templates, the areas with the highest convolution response in the whole input new scene image are judged in the subsequent image frames, and the current convolution templates are updated by using the highest response areas; the specific target area image is a detection rectangular frame area of a specific target object;
the target tracker continuously outputs the highest response area of each frame as the tracking position of the target;
further, the closeness of the tracking position to the edge in the image captured by the camera is judged, using [0,1 ]]And (5) quantifying the degree of progression of the interval. The calculation method of the gradual progress comprises the steps of respectively calculating the minimum values dbMinDis of the distances from the top point of the outer-wrapped rectangle 4 at the current specific target tracking position to the outer 4 edges of the image, and calculating the gradual progress according to the height H of the scene imagescnAnd width WscnThe ratio ((H) of the area of the rectangle of the scene image) to the rectangle of the degree of progressivity (the shortest distance of the outsourcing rectangle frame of the target tracking in the current frame image from the edge of the scene image, as the distance of each side of a virtual rectangle named the degree of progressivity to the edge of the scene image) can be calculatedscn-2*dbMinDis)*(Wscn-2*dbMinDis))/(Hscn*Wscn). The closer the ratio is to 1, the closer the tracking position is to the image edge. In the imaging sense, when the vehicle is about to travel by the object, the image of the object is about to move from near the center of the image to the edge of the image, and then across the edge of the image (beyond the angle of view of the camera) until disappearing from the edge of the image, as viewed from the image captured by the camera.
The calculated edge-proximity degree reflects well whether the camera is at a suitable distance from the target at that moment, and serves as the cutoff condition in step S45 and subsequent steps. Generally an edge-proximity threshold of 0.8 to 0.9 is set; if the value exceeds this threshold, the target is considered too close to the image edge, and part of its area may already extend beyond the image so that direct left/right image matching cannot be completed.
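A direct transcription of this calculation into a small helper; the name edge_proximity_degree is ours, and the tracking box is assumed as (x, y, w, h) in pixels:

    def edge_proximity_degree(track_box, scene_w, scene_h):
        """Edge-proximity degree in [0,1]: dbMinDis is the smallest distance
        from the tracked bounding rectangle to the four image edges; the
        ratio approaches 1 as the target nears the image edge."""
        x, y, w, h = track_box
        db_min_dis = max(0, min(x, y, scene_w - (x + w), scene_h - (y + h)))
        return ((scene_h - 2 * db_min_dis) * (scene_w - 2 * db_min_dis)) / (scene_h * scene_w)

    # Typical use per step S45: a value above ~0.9 means the target is too
    # close to the edge and the distance calculation is skipped.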
S45, judging the approaching degree of the tracking position and the edge in the image collected by the camera, namely the gradual progress, setting the gradual progress threshold value to be 0.9, if the gradual progress of the edge of the image of the target outsourcing rectangle tracked by the left eye and the right eye is not more than 0.9, entering a target distance calculator, calculating the distance between the binocular camera and the target object in the vertical direction through epipolar geometric constraint, and further obtaining the distance between the current frame of the vehicle body and the specific target object during collection; otherwise, the left and right eye cameras are considered to be incapable of completely acquiring the specific tracking target object.
Furthermore, from the left- and right-eye images entering the target distance calculator and the corresponding tracking-rectangle areas, the depth of field of the target object can be computed via epipolar geometry, i.e., the perpendicular (depth) distance between the binocular camera and the target object when the current image frame was acquired; combined with the position of the left-eye camera in the vehicle coordinate system, this gives the distance from the vehicle body to the specific target object at acquisition time.
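For rectified images this reduces to the standard stereo relation Z = fx * B / d; the patent does not spell out this step in formula form, so the following is a sketch under that assumption:

    def depth_from_stereo(x_left, x_right, fx, baseline_m):
        """Depth of the tracked target from the rectified left/right
        x-coordinates: Z = fx * B / disparity. Assumes epipolar-rectified
        images, so the target rows coincide and disparity is in pixels."""
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("non-positive disparity: matching or rectification failed")
        return fx * baseline_m / disparity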
S5, the distance calculated and output by the camera side is input into the high-precision navigation module to assist in executing the longitudinal correction process.
The invention will be further explained with reference to fig. 1:
First, on an automatically driven vehicle equipped with the above software and hardware, the binocular camera system is started and the image preprocessor is initialized; the preprocessed left and right images have undergone camera distortion correction, binocular epipolar rectification, cropping and scaling. The processed left and right images are distributed to the target detector, target tracker and target distance calculator as required.
Secondly, the target detector is initialized with the prefabricated variable-grid offline feature file, the target tracker with the FFTW configuration file, and the target distance calculator with the intrinsic and extrinsic parameters of the binocular camera. After successful initialization, the whole system can run online.
Thirdly, while the automatically driven vehicle is running, the high-precision navigation system sends a 'specific front target detection required' instruction to the target detector. The target detector calls up different target-object sequence numbers (corresponding to item b of the feature-file storage format) according to the target distance calculated by the navigation system at that moment. As shown in FIG. 7, the matching results for sequence numbers 0, 1 and 2 of the variable-grid-based image feature matching algorithm appear from top to bottom; the left side is a redrawing of the detection templates with different sequence numbers, the right side is a scene image frame collected by the actual camera, and the connecting line segments are the drawn feature-matching point correspondences.
Fourthly, after left-eye detection succeeds, the same operation is performed on the right-eye image. If right-eye detection also succeeds, the target tracker's operating environment is entered. An example of this process is the tracking sequence of 20 consecutive image frames in FIG. 8: the top-left image is the first input frame with the target rectangle to be tracked, and the remaining images are the tracking results of the subsequent frames. The left and right images are input to the target tracker simultaneously to obtain the tracking results.
Fifthly, the tracking results for the current left and right image frames, together with the left and right images distributed by the image preprocessor, are input to the target distance calculator to obtain the output distance value. Generally, if a failure flag is returned in this step, it must be checked whether the current left and right images still contain the specific detection target; that is, the target detector is called again and its return value checked to determine whether the target has left the camera's visible range and whether the current 'detection, tracking, distance output' processing flow for the specific target is finished.
Sixthly, the distance result output in the previous step and the corresponding system timestamps at which the left and right camera images were acquired are returned to the high-precision navigation module, directly or indirectly correcting the longitudinal vehicle position of high-precision navigation at those timestamps.
This completes the embodiment of the invention, namely an application example of vehicle longitudinal positioning based on variable-grid image feature detection in automatic driving.
In the description of the present specification, the description of the term "one embodiment" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The parts not described in the specification are prior art or common general knowledge. The present embodiments are illustrative only and not intended to limit the scope of the present invention, and modifications and equivalents thereof by those skilled in the art are considered to fall within the scope of the present invention as set forth in the claims.

Claims (10)

1. A system for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection, characterized in that: the system comprises, divided by functional module, a high-precision navigation system, a binocular camera, an image preprocessor, a target detector, a target tracker and a target distance calculator;
the high-precision navigation system is used for carrying out map retrieval in real time, sending the name ID of a target object appearing or about to appear in front of the vehicle body to the target detector according to the current position of the vehicle body, and carrying out longitudinal distance correction on high-precision navigation according to a specific target distance;
the binocular camera comprises a left-eye camera and a right-eye camera and is used for acquiring video images in front of the advancing vehicle in real time and outputting them to the image preprocessor for preprocessing;
the image preprocessor is used for performing distortion correction, epipolar-constraint rectification, graying and distribution of the images acquired by the binocular camera, according to the calibrated intrinsic and extrinsic parameters of the binocular camera;
the target detector is used for receiving the name ID of a specific target object sent by the high-precision navigation system and performing variable-grid image feature matching on the left-eye image among the gray images distributed by the image preprocessor; the target-object variable-grid feature file is generated offline; the variable grids are grids generated from the acquired image sequence, whose grid sizes differ with the distance between the automatically driven vehicle and the same target object;
the target tracker is used for performing image-area-based tracking according to the images input by the image preprocessor and the detection rectangle of the specific target object found by the target detector: it takes the specific target-area image and part of its neighborhood as a convolution template, finds the area of highest convolution response within each newly input scene image in subsequent frames, and updates the current convolution template with that highest-response area; the highest-response area of each frame is continuously output as the tracking position of the target;
and the target distance calculator is used for calculating, through the epipolar geometric constraint, the perpendicular (depth) distance between the binocular camera and the target object, thereby obtaining the distance from the vehicle body to the specific target object at the time the current frame was acquired.
2. A method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection, characterized in that it comprises the following steps:
S1, installing a forward-facing binocular camera on an automatically driven vehicle equipped with a high-precision map and a GNSS+IMU system, and calibrating the intrinsic and extrinsic parameters of the binocular camera;
S2, according to specific target objects in scenes in the high-precision map database and video frames collected by the camera while driving on actual roads, extracting multi-frame images containing the specific target areas to form a target-object extraction image frame sequence, extracting the target-object features, and producing a variable-grid-based feature description file; the variable grids are grids generated from the acquired image sequence, whose grid sizes differ with the distance between the automatically driven vehicle and the same target object;
s3, in the preparation stage of starting the automatic driving vehicle, the initialization work of each module of the high-precision navigation system, the binocular camera, the image preprocessor, the target detector, the target tracker and the target distance calculator is completed in sequence;
S4, in the running stage on the automatic driving route, high-precision navigation triggers the detection process; the data input to the system are the name ID of the target object in the high-precision map, the approximate distance calculated in navigation, and the preprocessed image frame sequence from the binocular camera; the distance to the specific target object in the scene is output following the flow 'target object detection' → 'target object tracking' → 'target object distance output';
and S5, inputting the distance calculated and output by the camera side into the high-precision navigation module to assist in executing the longitudinal correction process.
3. The method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection as claimed in claim 2, characterized in that: in step S1, the intrinsic and extrinsic parameters include: the installation position of the left-eye camera relative to the vehicle-body coordinate system, the intrinsic parameters M1, D1 of the left-eye camera, the intrinsic parameters M2, D2 of the right-eye camera, and the extrinsic parameters R, T between the left and right cameras, where M1 and M2 each hold a camera's focal lengths fx, fy and principal point location cx, cy in the form of a 3 x 3 matrix:

    M_i = [ fx  0   cx ]
          [ 0   fy  cy ]
          [ 0   0   1  ]

D1 and D2 respectively represent the imaging distortion coefficients of the left and right cameras; the extrinsic parameters R, T describe the rotation and translation of the right camera's position relative to the left camera.
4. The method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection as claimed in claim 3, characterized in that: in step S2, the images for target-object feature extraction come from video frames acquired by the left-eye camera, and the video frames for feature extraction of a specific target object form a target-object extraction image frame sequence by acquiring 3 frames of images at different distances from the vehicle body.
5. The method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection as claimed in claim 4, characterized in that: said step S2 comprises detecting and/or framing a specific object in the scene in the image frame sequence according to a specific object detection algorithm and/or manual marking, and recording the framed grid size in the feature description file.
6. The method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection as claimed in claim 5, characterized in that: in step S2, the storage content of the feature description file includes the following definitions:
a) the name ID of the object which can be searched in the high-precision map database;
b) according to the distance between the image acquisition time and the target object, distributing a sequence number for the acquired image frame sequence;
c) in image frames corresponding to different serial numbers, a key feature point sequence in a specific target object grid refers to a feature point subset which can be matched among multiple frames, namely a sequence consisting of feature points representing the same point in the image frame sequence;
d) ORB feature descriptions of feature points corresponding to the sequence of feature points;
e) the retrieval-grid information of the specific target object, comprising the image coordinates of the grid's upper-left corner and the grid's width and height in the image; for manual marking, the grid is the maximum bounding rectangle of the target object in the frame image; for a specific target detection algorithm, the grid corresponds to the rectangular detection window output when the target detection algorithm is run on the current frame;
f) the distance to the target object at acquisition time, according to the current conventional GNSS and IMU navigation system.
7. The method for vehicle longitudinal positioning in automatic driving based on variable-grid image feature detection as claimed in claim 6, characterized in that step S4 specifically comprises the following substeps:
S41, the high-precision navigation system sends the name ID of a target object entering the map retrieval range, and the feature description file for that object, initialized in step S3, is pre-read into the target detector;
S42, using the target-object distance estimated when the high-precision navigation system triggered detection and the feature-file content pre-read in step S41, determining the grid-size information to be used by the variable grid at this moment, and dividing the left-eye image output by the image preprocessor by that grid size with a binary-search-based division method to obtain the current variable grid; the calculation is:
(1/2)^(n+1) * Len_Global <= Len_Block <= (1/2)^n * Len_Global
where n is the number of binary-search halvings, computed iteratively (i.e., the average number of searches); Len_Global is the length or width of the preprocessed camera image; Len_Block is the length or width of the grid size in the scale information used by the current variable grid; the value of n satisfying the formula is found by iterating from n = 1; the preprocessed left-eye input image is divided evenly into n_y parts along its length and n_x parts along its width, yielding the current variable image grid;
S43, sequentially traversing the variable-grid image areas obtained in the previous step, performing ORB feature extraction on each area, and completing detection and matching against the key feature-point sequence and the ORB feature descriptions of the feature points in the target-object feature description file; if the matching succeeds, performing the same detection and matching on the right-eye image output by the image preprocessor and jumping to step S44; otherwise, the image preprocessor reads in the next left-eye frame and this step is executed again;
S44, taking the left- and right-eye image frames in which target detection succeeded and the corresponding left- and right-eye detection rectangles as parameters, completing the trackers for the specific areas of the left-eye and right-eye images; the parameters to be passed during tracker initialization are the successfully obtained left- and right-eye detection rectangles and the corresponding left- and right-eye images; once tracker initialization succeeds, the left- and right-eye image frames output by the image preprocessor are continuously fed to the corresponding target trackers; each target tracker takes the specific target-area image plus part of its neighborhood as a convolution template, finds the area of highest convolution response within each newly input scene image in subsequent frames, and updates the current convolution template with that highest-response area; the specific target-area image is the detection-rectangle area of the specific target object;
the target tracker continuously outputs the highest response area of each frame as the tracking position of the target;
S45, judging how close the tracking position is to the edge of the image collected by the camera, namely the edge-approach degree, and setting an edge-approach threshold; if the edge-approach degree of the tracked target bounding rectangle in both the left-eye and right-eye images does not exceed the threshold, entering the target distance calculator, which computes the perpendicular distance between the binocular camera and the target object through the epipolar geometric constraint, thereby obtaining the distance between the vehicle body and the specific target object at the moment the current frame was collected; otherwise, the left-eye and right-eye cameras are considered unable to completely capture the specific tracked target object.
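As a reading aid, the following Python sketch (NumPy assumed) implements one plausible reading of the binary-search grid sizing in step S42: the n found for each axis is taken to yield 2^n equal parts along that axis, since the claim names the per-axis counts n_y and n_x without defining them further. It is an illustration of the bound above, not the patented implementation.

    import numpy as np

    def bisection_count(len_global: float, len_block: float) -> int:
        # Iterate from n = 1, as the claim describes, until
        # (1/2)**(n+1) * len_global <= len_block <= (1/2)**n * len_global.
        if not 0 < len_block <= 0.5 * len_global:
            raise ValueError("no n >= 1 satisfies the bound")
        n = 1
        while (0.5 ** (n + 1)) * len_global > len_block:
            n += 1
        return n

    def variable_grid(image: np.ndarray, block_h: float, block_w: float):
        # Partition the image into the current variable grid: 2**n_y equal
        # bands along the height and 2**n_x along the width (assumed reading).
        h, w = image.shape[:2]
        n_y = bisection_count(h, block_h)
        n_x = bisection_count(w, block_w)
        return [np.array_split(band, 2 ** n_x, axis=1)
                for band in np.array_split(image, 2 ** n_y, axis=0)]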
8. The method of claim 7 for vehicle longitudinal positioning in automatic driving based on variable grid image feature detection, wherein the edge-approach degree in step S45 is calculated as follows:
respectively calculate the distances from the 4 vertices of the bounding rectangle at the current specific target tracking position to the 4 outer edges of the image, and take the minimum value dbMinDis; then, from the height H_scn and width W_scn of the scene image, calculate the ratio of the area of the edge-approach rectangle to the area of the scene image rectangle by the formula:
((H_scn - 2*dbMinDis) * (W_scn - 2*dbMinDis)) / (H_scn * W_scn)
the closer the ratio is to 1, the closer the tracking position is to the image edge.
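The ratio transcribes directly into Python; the function below is a minimal sketch with assumed names, taking the tracked rectangle as (x, y, width, height) in image coordinates:

    def edge_approach_ratio(rect, img_h, img_w):
        # dbMinDis: minimum distance from any of the 4 rectangle vertices
        # to any of the 4 image borders (claim 8).
        x, y, w, h = rect
        corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
        db_min_dis = min(min(cx, img_w - cx, cy, img_h - cy)
                         for cx, cy in corners)
        return (((img_h - 2 * db_min_dis) * (img_w - 2 * db_min_dis))
                / (img_h * img_w))

A rectangle vertex touching an image border drives dbMinDis to 0 and the ratio to 1, which is why step S45 only proceeds to the distance calculation while the ratio stays at or below the threshold.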
9. The method of claim 8 for vehicle longitudinal positioning in automatic driving based on variable grid image feature detection, wherein the detection and matching process in step S43 comprises the following substeps:
S431, judging whether two ORB feature points are similar by the distance between their ORB descriptors; first completing one pass of feature-point matching between the feature points of each region and the template feature points loaded in step S41 to obtain a sequence of matching pairs from the detection region to the template region, then completing the same operation in the reverse direction, from the template region to the detection region, to obtain a cross-validated sequence of matching pairs; the detection region is the region containing the specific target object captured by the left-eye camera, and the template region is the region containing the feature target object in the high-precision map database;
S432, obtaining from the two sequences of matching pairs the feature-point mapping that exists between the detection region and the template region, and defining the invertible transformation between the forward mapping and the reverse mapping as the remapping relation;
S433, checking whether the number of matched feature-point pairs in the current matching process reaches the threshold required to solve a mapping matrix; once more than 4 feature-point pairs are available, calculating the mapping matrix and then checking the remapping relation; if more than 4 feature-point pairs pass this check, the matching process succeeds, otherwise it fails.
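For illustration, an OpenCV-based sketch of substeps S431 to S433 under stated assumptions: cross-checked brute-force matching stands in for the bidirectional matching of S431, and the RANSAC inlier count returned with cv2.findHomography stands in for the remapping verification of S432 and S433, which the claims do not fully specify.

    import cv2
    import numpy as np

    def detect_and_match(region_img, templ_kp, templ_desc, min_pairs=4):
        orb = cv2.ORB_create()
        kp, desc = orb.detectAndCompute(region_img, None)
        if desc is None:
            return None
        # crossCheck=True keeps only pairs matched in both directions (S431)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(desc, templ_desc)
        if len(matches) <= min_pairs:   # need more than 4 pairs (S433)
            return None
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([templ_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # mapping matrix between detection and template regions (S432);
        # RANSAC inliers stand in for the remapping check (S433)
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None or int(inlier_mask.sum()) <= min_pairs:
            return None
        return H   # a non-None return means the matching process succeeded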
10. The method of claim 9 for vehicle longitudinal positioning in automatic driving based on variable grid image feature detection, wherein step S43 further comprises: if matching continues to fail in step S43 until the high-precision navigation module determines from the map data that the vehicle has left the visible area of the specific target object, the current detection task is declared failed and the current calculation process is not continued.
CN201710205430.0A 2017-03-31 2017-03-31 Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving Active CN107167826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710205430.0A CN107167826B (en) 2017-03-31 2017-03-31 Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710205430.0A CN107167826B (en) 2017-03-31 2017-03-31 Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving

Publications (2)

Publication Number Publication Date
CN107167826A CN107167826A (en) 2017-09-15
CN107167826B true CN107167826B (en) 2020-02-04

Family

ID=59849031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710205430.0A Active CN107167826B (en) 2017-03-31 2017-03-31 Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving

Country Status (1)

Country Link
CN (1) CN107167826B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108196285B (en) * 2017-11-30 2021-12-17 中山大学 Accurate positioning system based on multi-sensor fusion
CN108051002B (en) * 2017-12-04 2021-03-16 上海文什数据科技有限公司 Transport vehicle space positioning method and system based on inertial measurement auxiliary vision
CN108592797B (en) * 2018-03-28 2020-12-22 华南理工大学 Dynamic measurement method and system for vehicle overall dimension and wheel base
CN109166155B (en) * 2018-09-26 2021-12-17 北京图森智途科技有限公司 Method and device for calculating distance measurement error of vehicle-mounted binocular camera
KR20200046437A (en) * 2018-10-24 2020-05-07 삼성전자주식회사 Localization method based on images and map data and apparatus thereof
CN109887087B (en) * 2019-02-22 2021-02-19 广州小鹏汽车科技有限公司 SLAM mapping method and system for vehicle
CN110069593B (en) * 2019-04-24 2021-11-12 百度在线网络技术(北京)有限公司 Image processing method and system, server, computer readable medium
CN111160123B (en) * 2019-12-11 2023-06-09 桂林长海发展有限责任公司 Aircraft target identification method, device and storage medium
GB202305331D0 (en) * 2019-12-18 2023-05-24 Motional Ad Llc Camera-to-lidar calibration and validation
CN111623776B (en) * 2020-06-08 2022-12-02 昆山星际舟智能科技有限公司 Method for measuring distance of target by using near infrared vision sensor and gyroscope
CN113960581B (en) * 2021-10-26 2024-06-04 众芯汉创(北京)科技有限公司 Unmanned aerial vehicle target detection system applied to transformer substation and combined with radar
CN114200926B (en) * 2021-11-12 2023-04-07 河南工业大学 Local path planning method and system for unmanned vehicle


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140168264A1 (en) * 2012-12-19 2014-06-19 Lockheed Martin Corporation System, method and computer program product for real-time alignment of an augmented reality device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105588563A (en) * 2016-01-15 2016-05-18 武汉光庭科技有限公司 Joint calibration method of binocular camera and inertial navigation unit in automatic driving
CN105674993A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Binocular camera-based high-precision visual sense positioning map generation system and method
CN105868574A (en) * 2016-04-25 2016-08-17 南京大学 Human face tracking optimization method for camera and intelligent health monitoring system based on videos

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on image-based object dimension measurement algorithms; Zhao Ming; Software Guide (软件导刊); 2016-11-30; Vol. 15, No. 11; full text *
Research on passive ranging technology based on stereo vision; Ma Jihong et al.; Journal of Southwest China Normal University (Natural Science Edition); 2016-09-30; Vol. 41, No. 9; full text *

Also Published As

Publication number Publication date
CN107167826A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN112197770B (en) Robot positioning method and positioning device thereof
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN112230242B (en) Pose estimation system and method
CN111830953B (en) Vehicle self-positioning method, device and system
Chien et al. Visual odometry driven online calibration for monocular lidar-camera systems
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
US20220398825A1 (en) Method for generating 3d reference points in a map of a scene
CN114001733A (en) Map-based consistency efficient visual inertial positioning algorithm
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN114136315A (en) Monocular vision-based auxiliary inertial integrated navigation method and system
CN115456898A (en) Method and device for building image of parking lot, vehicle and storage medium
CN113566817B (en) Vehicle positioning method and device
Meis et al. A new method for robust far-distance road course estimation in advanced driver assistance systems
Jaenal et al. Improving visual SLAM in car-navigated urban environments with appearance maps
CN115388880B (en) Low-cost parking map construction and positioning method and device and electronic equipment
CN112184906A (en) Method and device for constructing three-dimensional model
CN115546303A (en) Method and device for positioning indoor parking lot, vehicle and storage medium
US11514588B1 (en) Object localization for mapping applications using geometric computer vision techniques
KR102225321B1 (en) System and method for building road space information through linkage between image information and position information acquired from a plurality of image sensors
Mounier et al. High-precision positioning in GNSS-challenged environments: a LiDAR-based multi-sensor fusion approach with 3D digital maps registration
CN115205828B (en) Vehicle positioning method and device, vehicle control unit and readable storage medium
CN113390422B (en) Automobile positioning method and device and computer storage medium
CN114323038B (en) Outdoor positioning method integrating binocular vision and 2D laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant