WO2021017212A1 - Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal - Google Patents

Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal

Info

Publication number
WO2021017212A1
WO2021017212A1 (PCT/CN2019/113490)
Authority
WO
WIPO (PCT)
Prior art keywords
pose
vehicle
parking lot
positioning
road
Prior art date
Application number
PCT/CN2019/113490
Other languages
English (en)
Chinese (zh)
Inventor
施泽南
姜秀宝
谢国富
Original Assignee
魔门塔(苏州)科技有限公司
北京初速度科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 魔门塔(苏州)科技有限公司 and 北京初速度科技有限公司
Priority to DE112019007451.2T (published as DE112019007451T5)
Publication of WO2021017212A1

Links

Images

Classifications

    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Inertial navigation combined with passive imaging devices, e.g. cameras
    • G01C21/20 Instruments for performing navigational calculations
    • G01S19/14 Receivers specially adapted for specific applications
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position with the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S5/015 Identifying transitions between indoor and outdoor environments
    • G01S5/16 Position-fixing using electromagnetic waves other than radio waves
    • Y02T10/40 Engine management systems (climate change mitigation technologies related to transportation)

Definitions

  • The present invention relates to the technical field of intelligent driving, and in particular to a multi-scene high-precision vehicle positioning method and a vehicle-mounted terminal.
  • When it is determined, according to the vehicle pose, that the vehicle enters a first initialization area preset at the entrance of a parking lot from outdoors, the first parking lot image collected by the camera device in the first initialization area is acquired, and a first starting pose for starting visual positioning is determined according to the road features of the first parking lot image;
  • when it is determined that the visual positioning is in the activated state and that the vehicle is driving in the parking lot, the parking lot image collected by the camera device is acquired, the previous vehicle pose is acquired, and the vehicle pose of the vehicle is determined based on the IMU data, the previous vehicle pose, and the matching result between the road features in the parking lot image and the road features in the preset map;
  • An embodiment of the present invention discloses a vehicle-mounted terminal, including a processor, a camera device, an IMU, and a GNSS; the processor includes an outdoor positioning module, a startup determining module, a first visual positioning module, a second visual positioning module, and a scene switching module;
  • the outdoor positioning module is used to obtain the IMU data collected by the IMU and the satellite data collected by the GNSS when the vehicle is driving outdoors, and to determine the vehicle pose of the vehicle according to the IMU data and the satellite data;
  • the startup determining module is configured to, when it is determined according to the vehicle pose that the vehicle enters the first initialization area preset at the entrance of the parking lot from outdoors, obtain the first parking lot image collected by the camera device in the first initialization area and determine, according to the road features of the first parking lot image, the first starting pose for starting the visual positioning;
  • the first visual positioning module is configured to determine the vehicle pose of the vehicle based on the IMU data, the first starting pose, and the matching result between the road features in the first parking lot image and the road features in a preset map;
  • the second visual positioning module is used to, when it is determined that the visual positioning is in the activated state and that the vehicle is driving in the parking lot according to the vehicle pose, obtain the parking lot image collected by the camera device, obtain the previous vehicle pose, and determine the vehicle pose of the vehicle based on the IMU data, the previous vehicle pose, and the matching result between the road features in the parking lot image and the road features in the preset map;
  • The multi-scene high-precision vehicle positioning method and vehicle-mounted terminal provided by the embodiments of the present invention determine the vehicle pose of the vehicle according to the IMU data and the satellite data when the vehicle is driving outdoors or driving out of a parking lot into the outdoors; when the vehicle enters the entrance of the parking lot from outdoors, the visual positioning is started in the first initialization area; and when the vehicle is driving in the parking lot, the vehicle pose of the vehicle is determined based on the IMU data, the previous vehicle pose, and the matching result between the road features in the parking lot image and the road features in the preset map.
  • In each scene, the vehicle pose is thus determined by the positioning method suited to that scene, and in a parking lot without satellite signals the vehicle can still be positioned based on the IMU and visual positioning, which improves positioning accuracy. The embodiments of the present invention can therefore realize accurate positioning of the vehicle in different scenes and when switching between different scenes.
  • Of course, any product or method implementing the present invention does not necessarily achieve all of the advantages described above at the same time.
  • the vehicle pose is determined based on the fusion of IMU trajectory calculation and visual positioning.
  • the combination of the two positioning methods can correct the errors in a single positioning method, making the positioning result more accurate.
  • FIG. 1 is a schematic flowchart of a multi-scene high-precision vehicle positioning method provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the outdoor map area and the parking lot map area in the preset map;
  • FIG. 5 is a schematic diagram of the position points in the first trajectory and the second trajectory.
  • the above-mentioned first initialization area is a preset coordinate area in a preset map.
  • In the first initialization area, observations at any two locations, or observations from different angles at the same location, are significantly different.
  • Therefore, the vehicle pose can be accurately determined according to the landmarks in this area, and the visual positioning can then be started.
  • the first initialization area may be a circular area with the preset location point at the entrance of the parking lot as the center and the preset distance as the radius.
  • the preset distance can be 15m or other values.
  • Road features include but are not limited to lane lines, street light poles, traffic signs, edge lines, stop lines, traffic lights, and other signs on the road.
  • Edge lines include, but are not limited to, lane edge lines and parking space edge lines.
  • the preset map may include the road features of each location point.
  • the location points in the preset map can be represented by two-dimensional coordinate points or three-dimensional coordinate points.
  • The vehicle pose positioning in this step can be understood as the first vehicle pose positioning performed after the first starting pose is determined and the visual positioning is started.
  • Determining the vehicle pose of the vehicle may specifically include the following steps 1a to 4a; a sketch of the resulting refinement loop is given after step 4a below.
  • Step 1a Determine the estimated pose of the vehicle according to the first starting pose.
  • the first starting pose may be directly used as the estimated pose of the vehicle, or the first starting pose may be modified to be used as the estimated pose of the vehicle.
  • The first road feature is a road feature in the first parking lot image and is represented by its position in that image.
  • The second road feature is the road feature in the preset map that successfully matches the first road feature, and it is represented by coordinates in the coordinate system of the preset map.
  • the first road feature and the second road feature may be mapped to the same coordinate system to determine the mapping error.
  • This step may specifically include the following implementation manners:
  • In the first manner, the first road feature is mapped to a first mapping position in the preset map, and the error between the first mapping position and the position of the second road feature in the preset map is calculated to obtain the reference mapping error.
  • When mapping the first road feature to the first mapping position in the preset map, the position of the first road feature in the first parking lot image can be converted into the world coordinate system according to the conversion relationship between the image coordinate system and the world coordinate system together with the estimated pose, yielding the first mapping position.
  • The image coordinate system is the coordinate system of the first parking lot image, and the world coordinate system is the coordinate system of the preset map.
  • The conversion relationship between the image coordinate system and the world coordinate system can be obtained from the intrinsic parameter matrix between the image coordinate system and the camera coordinate system, and from the rotation matrix and translation matrix between the camera coordinate system and the world coordinate system; a minimal sketch of this conversion follows.
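  • The following is a minimal sketch of the two conversions, assuming a pinhole camera model with intrinsic matrix K and a camera-to-world rotation R_cw and translation t_cw derived from the estimated pose; the function names and the use of a known depth (e.g. from a ground-plane assumption for road markings) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R_cw, t_cw):
    """Map an image point (u, v) with known depth into world coordinates."""
    pixel = np.array([u, v, 1.0])
    # Back-project the pixel into the camera coordinate system.
    p_cam = depth * (np.linalg.inv(K) @ pixel)
    # Transform from camera coordinates into world coordinates.
    return R_cw @ p_cam + t_cw

def world_to_pixel(p_world, K, R_cw, t_cw):
    """Inverse mapping: project a world point into the image plane."""
    p_cam = R_cw.T @ (np.asarray(p_world) - t_cw)
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # perspective division
```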
  • In the second manner, the second road feature is mapped to a second mapping position in the coordinate system of the first parking lot image, and the error between the position of the first road feature in the first parking lot image and the second mapping position is calculated to obtain the reference mapping error.
  • In both manners, the positions of the first road feature and the second road feature are compared in a common coordinate system to obtain the reference mapping error.
  • When mapping the second road feature to the second mapping position in the coordinate system of the first parking lot image, the position of the second road feature in the preset map can be converted into the image coordinate system according to the conversion relationship between the image coordinate system and the world coordinate system together with the estimated pose of the vehicle, yielding the second mapping position.
  • Step 3a When the reference mapping error is greater than the preset error threshold, adjust the estimated pose of the vehicle and, based on the adjusted estimated pose, return to step 2a to re-determine the reference mapping error between the first road feature and the second road feature.
  • Step 4a When the reference mapping error is not greater than the preset error threshold, determine the first visual pose of the vehicle at the first moment according to the current estimated pose of the vehicle.
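  • Steps 1a to 4a amount to iteratively refining the estimated pose against the map. A minimal sketch is given below; `project_feature` stands for either of the two mapping manners above, and the random-perturbation adjustment in step 3a is only one illustrative choice, since the patent does not prescribe a particular adjustment rule.

```python
import numpy as np

def mapping_error(pose, image_feats, map_feats, project_feature):
    """Reference mapping error: mean distance between the mapped first
    road features and the matched second road features."""
    errs = [np.linalg.norm(project_feature(pose, f_img) - f_map)
            for f_img, f_map in zip(image_feats, map_feats)]
    return float(np.mean(errs))

def refine_pose(start_pose, image_feats, map_feats, project_feature,
                err_thresh=0.2, step=0.05, max_iters=200):
    pose = np.asarray(start_pose, dtype=float)   # step 1a: estimated pose
    err = mapping_error(pose, image_feats, map_feats, project_feature)
    for _ in range(max_iters):
        if err <= err_thresh:                    # step 4a: accept the pose
            break
        # Step 3a: adjust the estimated pose (here: keep the best random
        # perturbation), then step 2a: re-evaluate the mapping error.
        candidate = pose + np.random.uniform(-step, step, size=pose.shape)
        cand_err = mapping_error(candidate, image_feats, map_feats,
                                 project_feature)
        if cand_err < err:
            pose, err = candidate, cand_err
    return pose, err
```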
  • S140 When it is determined that the visual positioning is in the activated state and, according to the vehicle pose, that the vehicle is driving in the parking lot, obtain the parking lot image collected by the camera device, obtain the previous vehicle pose, and determine the vehicle pose of the vehicle based on the IMU data, the previous vehicle pose, and the matching result between the road features in the parking lot image and the road features in the preset map.
  • For the specific way of determining the vehicle pose, refer to the description in step S130; it is not repeated here.
  • the acquired parking lot image may be an image collected at any location in the parking lot after starting the visual positioning.
  • The previous vehicle pose can be understood as the vehicle pose determined at the moment immediately before the first moment.
  • the first moment is the moment when the parking lot image is collected.
  • S150 When it is determined according to the vehicle pose that the vehicle is driving out of the parking lot into the outdoors, return to the step in S110 of obtaining the IMU data collected by the IMU and the satellite data collected by the GNSS.
  • That is, the vehicle pose is again determined based on the IMU data and the satellite data; a minimal sketch of one common way of combining the two follows.
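  • The patent does not specify how the IMU data and satellite data are combined outdoors; the following toy fusion (dead reckoning from IMU-derived velocity, corrected by a weighted blend with each GNSS fix) is one illustrative possibility, with all names and the fixed blend weight being assumptions.

```python
import numpy as np

class OutdoorPositioner:
    """Toy IMU/GNSS fusion: integrate IMU motion increments and blend
    in GNSS fixes. Illustrative only; in practice a Kalman-style
    filter would typically be used."""

    def __init__(self, initial_position, gnss_weight=0.3):
        self.position = np.asarray(initial_position, dtype=float)
        self.gnss_weight = gnss_weight

    def predict(self, velocity, dt):
        # Dead reckoning from the IMU-derived velocity over time step dt.
        self.position = self.position + np.asarray(velocity) * dt
        return self.position

    def correct(self, gnss_position):
        # Pull the dead-reckoned position toward the satellite fix.
        g = self.gnss_weight
        self.position = (1 - g) * self.position + g * np.asarray(gnss_position)
        return self.position
```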
  • Step 1b Based on the road features of the first parking lot image and the vehicle pose, determine the first vehicle pose of the vehicle through the first pose regression model.
  • the first pose regression model is obtained by training in advance based on multiple sample parking lot images collected in the first initialization area and corresponding sample vehicle poses and labeled vehicle poses.
  • the first pose regression model can correlate the road features and vehicle pose of the first parking lot image with the first vehicle pose according to the trained model parameters.
  • P_GPS is the input vehicle pose and I_seg is the semantic observation image, that is, the road features of the first parking lot image.
  • P_GPS and I_seg are the inputs of the CPR model, and P_reg is the first vehicle pose output by the CPR model.
  • In this way, a more accurate vehicle pose can be determined by multi-level pose regression based on the road features and the input vehicle pose, so the positioning is refined once it is determined that the vehicle has entered the first initialization area; a sketch of one possible model interface follows.
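  • The text above only states that the CPR model maps (P_GPS, I_seg) to P_reg; the concrete architecture below is purely an assumed illustration of such an interface, not the network described in the patent.

```python
import torch
import torch.nn as nn

class CPR(nn.Module):
    """Hypothetical pose regression model: encodes the semantic road-feature
    image I_seg, concatenates the coarse input pose P_GPS, and regresses
    the refined pose P_reg."""

    def __init__(self, pose_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(            # encode I_seg
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(               # regress from features + P_GPS
            nn.Linear(32 + pose_dim, 64), nn.ReLU(),
            nn.Linear(64, pose_dim))

    def forward(self, i_seg, p_gps):
        feats = self.encoder(i_seg)
        return self.head(torch.cat([feats, p_gps], dim=1))  # P_reg
```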
  • This step can also be understood as identifying the position of the road feature in the first parking lot image in the first initialization area.
  • The first vehicle pose and the second vehicle pose are both poses of the vehicle at the same moment.
  • the first moment is the collection moment of parking lot images and IMU data.
  • For the specific implementation of this step, refer to the description in step S130; it is not repeated here.
  • The multiple second moments are all moments earlier than the first moment.
  • The trajectories may be determined from the stored visual poses at multiple moments and the stored IMU poses at multiple moments.
  • Each position point in the first trajectory is the position point corresponding to the first visual pose or to one of the second visual poses, and each position point in the second trajectory is the position point corresponding to the first IMU pose or to one of the second IMU poses.
  • In FIG. 5, the first trajectory and the second trajectory do not overlap; each position point in the first trajectory is represented by a hollow circle, each position point in the second trajectory is represented by a solid circle, and the general driving direction of the vehicle is from left to right.
  • A similarity transformation is satisfied between corresponding position points in the first trajectory and the second trajectory, and corresponding fusion transformation coefficients exist between the corresponding position points, as written out below.
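  • One plausible way to write this relationship (the notation here is assumed, since the patent's own formula is not reproduced in this text): for corresponding position points $P_v(t)$ in the first trajectory and $P_o(t)$ in the second trajectory,

$$P_v(t) \approx S(t)\,R_s(t)\,P_o(t) + T_s(t),$$

where $S(t)$ is the scale ratio, $R_s(t)$ is the rotation matrix between the corresponding points, and $T_s(t)$ is a translation, if one is used; $S(t)$ and $R_s(t)$ are the fusion transformation coefficients described below.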
  • the vehicle pose is determined based on the fusion of IMU trajectory calculation and visual positioning.
  • the combination of the two positioning methods can correct the errors in a single positioning method and make the positioning result more accurate.
  • In addition, when determining the fused vehicle pose, this embodiment uses the vehicle's existing camera device and IMU device, so no hardware needs to be added, and the vehicle pose can be located in scenes without GNSS signals.
  • The similarity constraint optimization function is constructed based on the similarity relationship between the first trajectory and the second trajectory, the estimated fusion poses, and the fusion transformation coefficients between corresponding position points of the first trajectory and the second trajectory.
  • the fusion process considers various constraints, which can make the determined fusion vehicle pose more accurate and closer to the true value.
  • In the above, P stands for position and R stands for attitude (rotation); a quantity with subscript v belongs to the visual pose, a quantity with subscript o belongs to the IMU pose, and a quantity with subscript s belongs to the fusion transformation coefficients. S represents the scale ratio between a position point in the first trajectory and the corresponding position point in the second trajectory, and R_s represents the rotation matrix between a position point in the first trajectory and the corresponding position point in the second trajectory.
  • Since the vision-based visual pose is obtained independently at each moment, while the IMU-based pose estimate is obtained from relative quantities between adjacent moments, the constraints between the visual poses and the estimated fusion poses are single-moment constraints, whereas the constraints between the IMU poses and the estimated fusion poses are constraints from the previous moment to the current moment.
  • The first term in the similarity constraint optimization function E indicates that the position in each visual pose should be relatively close to the position in the corresponding estimated fusion pose; the second term indicates that the attitude in each visual pose should be relatively close to the attitude in the corresponding estimated fusion pose; the third term indicates that the displacement between adjacent IMU poses should be relatively close to the displacement between the corresponding adjacent estimated fusion poses; the fourth term indicates that the transformation angle between adjacent IMU poses should be relatively close to the transformation angle between the corresponding adjacent estimated fusion poses; the fifth term indicates that the scale ratios at adjacent position points should be relatively close; and the sixth term indicates that the rotation matrices at adjacent position points should be relatively close.
  • From these constraints, the similarity constraint optimization function E can be constructed; one possible form is written out below.
  • The estimated fusion poses and the values of R_f, P_f, S and R_s at which E attains its minimum value are the optimal solutions.
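  • The expression of E itself is not reproduced in this text; a form consistent with the six terms described above would be the following, where all term weights are omitted and $\ominus$ denotes a rotation-difference metric (e.g. the norm of the logarithm map). This reconstruction is an assumption, not the patent's exact formula:

$$
\begin{aligned}
E = \sum_{t} \Big[\; & \lVert P_v(t) - P_f(t) \rVert^2 \;+\; \lVert R_v(t) \ominus R_f(t) \rVert^2 \\
 {}+{} & \big\lVert S(t)\,R_s(t)\big(P_o(t{+}1) - P_o(t)\big) - \big(P_f(t{+}1) - P_f(t)\big) \big\rVert^2 \\
 {}+{} & \big\lVert \big(R_o(t) \ominus R_o(t{+}1)\big) - \big(R_f(t) \ominus R_f(t{+}1)\big) \big\rVert^2 \\
 {}+{} & \big(S(t{+}1) - S(t)\big)^2 \;+\; \lVert R_s(t) \ominus R_s(t{+}1) \rVert^2 \;\Big]
\end{aligned}
$$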
  • In the initialization, R_f(t) is equal to R_v(t), P_f(t) is equal to P_v(t), S(t) is 1, and R_s(t) is equal to R_v(t)R_o(t)^(-1).
  • Step 3c Determine the current function value of the similarity constraint optimization function according to the current value of the estimated fusion pose and the current values of R_f, P_f, S and R_s.
  • That is, the current value of the estimated fusion pose and the current values of R_f, P_f, S and R_s are substituted into the expression of the similarity constraint optimization function E in step S141 to obtain the current function value.
  • Step 4c Obtain the previous function value of the similarity constraint optimization function, and judge whether the absolute value of the difference between the previous function value and the current function value is greater than the preset difference threshold; if so, perform step 5c; if not, perform step 6c.
  • the preset difference threshold may be a value determined in advance based on empirical values, and adjusting the preset difference threshold can adjust the accuracy of the fusion pose.
  • When the absolute value of the difference between the previous function value and the current function value is greater than the preset difference threshold, it is considered that the function value of the similarity constraint optimization function can still decrease.
  • When the absolute value of the difference between the previous function value and the current function value is not greater than the preset difference threshold, the function value of the similarity constraint optimization function is considered to be very close to the minimum value.
  • Step 5c Adjust the value of the estimated fusion pose and the values of R_f, P_f, S and R_s, and return to step 3c, i.e., the step of determining the current function value of the similarity constraint optimization function according to the current value of the estimated fusion pose and the current values of R_f, P_f, S and R_s.
  • When adjusting, the previous value of the estimated fusion pose and the previous values of R_f, P_f, S and R_s can be adjusted according to the trend of the previous and current function values.
  • Step 6c Determine the current value of the estimated fusion pose as the fused vehicle pose of the vehicle at the first moment.
  • This embodiment thus provides a specific form of the constructed similarity constraint optimization function and a specific implementation for solving it iteratively, so that the two poses can be fused more accurately; a minimal sketch of the iteration follows.
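  • A minimal sketch of the iteration in steps 3c to 6c, treating the cost function E as a black box; the adjustment rule passed in as `adjust` is left abstract because, as noted above, the parameters are only required to be adjusted according to the trend of the previous and current function values.

```python
def minimize_similarity_cost(E, params, adjust, diff_thresh=1e-6,
                             max_iters=1000):
    """Iterate steps 3c-6c: re-evaluate E until two successive function
    values differ by no more than diff_thresh.

    E      : callable returning the cost for the current parameters
             (estimated fusion poses and R_f, P_f, S, R_s)
    params : current parameter values, in whatever structure E accepts
    adjust : callable producing adjusted parameters from the current
             ones and the previous function value (step 5c)
    """
    prev_value = E(params)                   # step 3c, first evaluation
    for _ in range(max_iters):
        params = adjust(params, prev_value)  # step 5c: adjust parameters
        value = E(params)                    # step 3c: current function value
        if abs(prev_value - value) <= diff_thresh:
            break                            # step 4c -> step 6c: converged
        prev_value = value
    return params                            # contains the fused pose (6c)
```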
  • Step 1d Determine the mapping error between the first road feature and the second road feature according to the first visual pose.
  • the first road feature is a road feature in the parking lot image
  • the second road feature is a road feature successfully matched with the first road feature in the preset map.
  • For the specific implementation of this step, refer to the description in step 2a; one of the two mapping manners there is used to determine the mapping error.
  • Step 2d Determine the target map area where the first visual pose is located from among multiple different map areas included in the preset map.
  • The preset map may be divided in advance into a plurality of different map areas according to the road features it contains, where the road features in each map area have relevance or location similarity.
  • the map area can be a circular area, a rectangular area, or other area shapes.
  • the corresponding relationship between the mapping error and the positioning error can be represented by a mapping error function with the positioning error as a variable.
  • the mapping error can be substituted into the mapping error function to obtain the corresponding positioning error.
  • the positioning error can be understood as the difference between the current positioning pose and the real positioning pose.
  • the positioning error can be 5cm, 10cm, etc.
  • The reciprocal of the positioning error can be determined as the positioning accuracy of the first visual pose, or other preset processing can be performed on the reciprocal to obtain the positioning accuracy of the first visual pose.
  • In this way, the positioning accuracy of the first visual pose is determined from the mapping error between the road features and the pre-established correspondence between mapping error and positioning error, realizing an evaluation of the visual positioning effect; a small sketch follows.
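  • As a sketch of this evaluation, suppose a fitted mapping error function g is available for the target map area; the positioning error can then be recovered by inverting g and the accuracy taken as its reciprocal. Treating the positioning error as a scalar magnitude and inverting g by a dense search are simplifying assumptions made here for illustration.

```python
import numpy as np

def positioning_error_from_mapping_error(g, observed_mapping_error,
                                         search_range=(0.0, 1.0), n=10001):
    """Invert g (positioning error -> mapping error) by a dense 1-D
    search over candidate positioning errors (e.g. in metres)."""
    candidates = np.linspace(*search_range, n)
    residuals = np.abs(g(candidates) - observed_mapping_error)
    return float(candidates[np.argmin(residuals)])

def positioning_accuracy(positioning_error, eps=1e-9):
    """Per the description above, accuracy is the reciprocal of the
    positioning error (other preset processing could be applied)."""
    return 1.0 / max(positioning_error, eps)
```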
  • Step 1e Obtain the sample road images collected in the target map area and the corresponding sample road features, as well as the standard positioning pose of the vehicle corresponding to each sample road image, and obtain the third road feature in the preset map that successfully matches the sample road features.
  • Step 3e Determine the disturbance mapping errors corresponding to multiple disturbance positioning poses according to the sample road features and the third road feature.
  • The preset mapping error function related to the positioning error in the target map area can be understood as a mapping error function of preset form whose coefficients are initially unknown.
  • For example, the mapping error function can be set to a quadric form such as g(Δx, Δy) = a₀Δx² + a₁Δy² + a₂ΔxΔy + a₃Δx + a₄Δy + a₅, where (Δx, Δy) is the positioning error and a₀, …, a₅ are the coefficients to be determined; the exact quadric used in the patent is not reproduced here, so this form is an assumption.
  • The disturbance mapping error corresponding to each disturbance positioning pose can then be expressed as a function of the disturbance.
  • MapMatching(p_gt + Δp, I_seg, I_map) is the disturbance mapping error corresponding to the disturbance positioning pose p_gt + Δp, i.e., the mapping error obtained by matching the sample road features I_seg against the map I_map at the disturbed pose.
  • g(Δx, Δy) − MapMatching(p_gt + Δp, I_seg, I_map) represents the residual between the mapping error function and the disturbance mapping errors corresponding to the multiple disturbance positioning poses; the coefficients of g are obtained by minimizing this residual.
  • For each map area in the preset map, the corresponding mapping error function g can be obtained by the above method; a sketch of the least-squares fit follows.
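  • The following sketch fits the quadric coefficients by linear least squares over the sampled disturbances, assuming the quadric form suggested above; `dx`, `dy` are the disturbance components and `disturbance_errors[i]` plays the role of MapMatching(p_gt + Δp_i, I_seg, I_map).

```python
import numpy as np

def fit_mapping_error_function(dx, dy, disturbance_errors):
    """Fit g(dx, dy) = a0*dx^2 + a1*dy^2 + a2*dx*dy + a3*dx + a4*dy + a5
    to the sampled disturbance mapping errors by least squares."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    A = np.column_stack([dx**2, dy**2, dx*dy, dx, dy, np.ones_like(dx)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(disturbance_errors), rcond=None)

    def g(delta_x, delta_y):
        a0, a1, a2, a3, a4, a5 = coeffs
        return (a0*delta_x**2 + a1*delta_y**2 + a2*delta_x*delta_y
                + a3*delta_x + a4*delta_y + a5)

    return g, coeffs
```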
  • the processor 610 further includes:
  • the third IMU pose of the vehicle is calculated based on the vehicle pose before the visual positioning failed and on the IMU data;
  • the road features of the second parking lot image are matched with the road features in the preset map, and the fourth vehicle pose of the vehicle, determined according to the matching result, is used as the second starting pose for restarting the visual positioning;
  • and the vehicle pose of the vehicle is then determined.
  • The modules in the device of the embodiment may be distributed in the device according to the description of the embodiment, or may be located, with corresponding changes, in one or more devices different from this embodiment.
  • The modules of the above embodiments may be combined into one module, or further divided into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Navigation (AREA)

Abstract

Disclosed are a multi-scene high-precision vehicle positioning method and a vehicle-mounted terminal. The method comprises: when a vehicle is driving outdoors, determining the vehicle pose of the vehicle according to IMU data and satellite data (S110); when the vehicle enters, from outdoors, a first initialization area at an entrance of a parking lot, determining, according to the road features of a first parking lot image acquired in the first initialization area, a first starting pose for starting visual positioning (S120); determining the vehicle pose of the vehicle according to the IMU data, the first starting pose, and the matching result between the road features in the first parking lot image and the road features in a preset map (S130); if it is determined that the visual positioning is in an activated state and, according to the vehicle pose, that the vehicle is driving in the parking lot, obtaining a parking lot image acquired by a camera device and obtaining the previous vehicle pose, so as to determine the vehicle pose of the vehicle according to the IMU data, the previous vehicle pose, and the matching result between the road features in the parking lot image and the road features in the preset map (S140); and if it is determined that the vehicle is driving out of the parking lot to the outdoors, determining the vehicle pose of the vehicle according to the IMU data and the satellite data (S150). The provided solution can realize accurate positioning of the vehicle in different scenes and when switching between different scenes.
PCT/CN2019/113490 2019-07-26 2019-10-26 Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal WO2021017212A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE112019007451.2T DE112019007451T5 (de) 2019-07-26 2019-10-26 Hochpräzises Verfahren sowie Gerät zur Fahrzeugpositionierung in mehreren Szenen, und fahrzeugmontiertes Terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910681752.1 2019-07-26
CN201910681752.1A CN112304302B (zh) 2019-07-26 2019-07-26 Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal

Publications (1)

Publication Number Publication Date
WO2021017212A1 (fr)

Family

ID=74230147

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113490 WO2021017212A1 (fr) Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal

Country Status (3)

Country Link
CN (1) CN112304302B (fr)
DE (1) DE112019007451T5 (fr)
WO (1) WO2021017212A1 (fr)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113835435B (zh) * 2021-09-30 2023-10-31 中国联合网络通信集团有限公司 车辆控制方法、服务器和存储介质
CN114001742A (zh) * 2021-10-21 2022-02-01 广州小鹏自动驾驶科技有限公司 车辆定位方法、装置、车辆和可读存储介质
CN114111774B (zh) * 2021-12-06 2024-04-16 纵目科技(上海)股份有限公司 车辆的定位方法、系统、设备及计算机可读存储介质
CN114323033B (zh) * 2021-12-29 2023-08-29 北京百度网讯科技有限公司 基于车道线和特征点的定位方法、设备及自动驾驶车辆
CN114370872B (zh) * 2022-01-14 2024-04-09 苏州挚途科技有限公司 车辆姿态确定方法和车辆
CN114383626B (zh) * 2022-01-19 2023-05-16 广州小鹏自动驾驶科技有限公司 全场景智能辅助驾驶的导航方法及装置
CN117789444A (zh) * 2022-09-19 2024-03-29 北京初速度科技有限公司 一种停车场数据的匹配方法、装置、设备、介质及车辆


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108802785B (zh) * 2018-08-24 2021-02-02 清华大学 基于高精度矢量地图和单目视觉传感器的车辆自定位方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412826A (zh) * 2016-09-07 2017-02-15 清华大学 基于多源信息融合的室内定位方法及定位装置
CN109387192A (zh) * 2017-08-02 2019-02-26 湖南格纳微信息科技有限公司 一种室内外连续定位方法及装置
CN108873038A (zh) * 2018-09-10 2018-11-23 芜湖盟博科技有限公司 自主泊车定位方法及定位系统
CN109582038A (zh) * 2018-12-28 2019-04-05 中国兵器工业计算机应用技术研究所 一种无人机路径规划方法
CN109682373A (zh) * 2018-12-28 2019-04-26 中国兵器工业计算机应用技术研究所 一种无人平台的感知系统
CN109827574A (zh) * 2018-12-28 2019-05-31 中国兵器工业计算机应用技术研究所 一种无人机室内外切换导航系统
CN109631887A (zh) * 2018-12-29 2019-04-16 重庆邮电大学 基于双目、加速度与陀螺仪的惯性导航高精度定位方法
CN109900265A (zh) * 2019-03-15 2019-06-18 武汉大学 一种camera/mems辅助北斗的机器人定位算法

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4040111A1 (fr) * 2021-02-04 2022-08-10 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Procédé et appareil de traitement de carte
CN114964216A (zh) * 2021-02-23 2022-08-30 广州汽车集团股份有限公司 一种车辆定位方法及系统
CN113313967A (zh) * 2021-04-25 2021-08-27 湖南海龙国际智能科技股份有限公司 一种基于室内低精度定位的车位级导航系统
CN113223050A (zh) * 2021-05-12 2021-08-06 之江实验室 一种基于ArUco码的机器人运动轨迹实时采集方法
CN113343830A (zh) * 2021-06-01 2021-09-03 上海追势科技有限公司 一种地下停车场车辆快速重定位的方法
CN113343830B (zh) * 2021-06-01 2024-05-24 上海追势科技有限公司 一种地下停车场车辆快速重定位的方法
CN113535875A (zh) * 2021-07-14 2021-10-22 北京百度网讯科技有限公司 地图数据扩充方法、装置、电子设备、介质和程序产品
CN113781645A (zh) * 2021-08-31 2021-12-10 同济大学 一种面向室内泊车环境的定位和建图方法
CN113781645B (zh) * 2021-08-31 2024-03-26 同济大学 一种面向室内泊车环境的定位和建图方法
CN113963285B (zh) * 2021-09-09 2022-06-10 山东金宇信息科技集团有限公司 一种基于5g的道路养护方法及设备
CN113963285A (zh) * 2021-09-09 2022-01-21 济南金宇公路产业发展有限公司 一种基于5g的道路养护方法及设备
CN113763738A (zh) * 2021-09-14 2021-12-07 上海智能网联汽车技术中心有限公司 车路协同系统路侧感知与车端感知实时匹配的方法及系统
CN114427863A (zh) * 2022-04-01 2022-05-03 天津天瞳威势电子科技有限公司 车辆定位方法及系统、自动泊车方法及系统、存储介质
CN115285143A (zh) * 2022-08-03 2022-11-04 东北大学 一种基于场景分类的自动驾驶车辆导航方法
CN115930953A (zh) * 2023-03-02 2023-04-07 成都宜泊信息科技有限公司 一种室内外地图切换方法及系统
CN115930953B (zh) * 2023-03-02 2023-05-09 成都宜泊信息科技有限公司 一种室内外地图切换方法及系统
CN116466382A (zh) * 2023-04-24 2023-07-21 贵州一招信息技术有限公司 一种基于gps的高精度实时定位系统
CN117119588A (zh) * 2023-10-18 2023-11-24 湖南承希科技有限公司 一种基于Wi-Fi6技术实现车辆在轨道停车场内的定位方法
CN117119588B (zh) * 2023-10-18 2024-01-12 湖南承希科技有限公司 一种基于Wi-Fi6技术实现车辆在轨道停车场内的定位方法
CN117706583A (zh) * 2023-12-29 2024-03-15 无锡物联网创新中心有限公司 一种高精度定位方法及系统

Also Published As

Publication number Publication date
DE112019007451T5 (de) 2022-03-10
CN112304302A (zh) 2021-02-02
CN112304302B (zh) 2023-05-12

Similar Documents

Publication Publication Date Title
WO2021017212A1 (fr) Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal
US11354820B2 (en) Image based localization system
US10788830B2 (en) Systems and methods for determining a vehicle position
US8213706B2 (en) Method and system for real-time visual odometry
KR20220033477A (ko) Position estimation apparatus and method for an automated valet parking system
CN111263960B (zh) 用于更新高清晰度地图的设备和方法
JP2020064056A (ja) 位置推定装置及び方法
US10872246B2 (en) Vehicle lane detection system
WO2021017213A1 (fr) Visual positioning effect self-detection method and vehicle-mounted terminal
KR102383499B1 (ko) Method and system for generating a visual feature map
CA3083430C (fr) Urban environment labelling
US11474193B2 (en) Camera calibration for localization
CN110794828A (zh) 一种融合语义信息的路标定位方法
GB2578721A (en) Method and system for processing image data utilizing deep neural network
US20190293444A1 (en) Lane level accuracy using vision of roadway lights and particle filter
Nourani-Vatani et al. Scene change detection for vision-based topological mapping and localization
Stübler et al. Feature-based mapping and self-localization for road vehicles using a single grayscale camera
CN112304322B (zh) Restart method after visual positioning failure, and vehicle-mounted terminal
Hoang et al. Motion estimation based on two corresponding points and angular deviation optimization
Hoang et al. A simplified solution to motion estimation using an omnidirectional camera and a 2-D LRF sensor
Wong et al. Vision-based vehicle localization using a visual street map with embedded SURF scale
Wong et al. Single camera vehicle localization using SURF scale and dynamic time warping
KR102552712B1 (ko) Vehicle position estimation system and vehicle position estimation method using the same
CN112304321B (zh) Vision- and IMU-based vehicle fusion positioning method and vehicle-mounted terminal
CN113390422B (zh) Vehicle positioning method and apparatus, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939635

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19939635

Country of ref document: EP

Kind code of ref document: A1