CN115311349A - Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof

Info

Publication number
CN115311349A
CN115311349A (application CN202210939943.5A)
Authority
CN
China
Prior art keywords
result
vehicle
factor
ndt
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210939943.5A
Other languages
Chinese (zh)
Inventor
周乐韬
胡广地
曹展
李雪
任鹏羽
陈武旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210939943.5A priority Critical patent/CN115311349A/en
Publication of CN115311349A publication Critical patent/CN115311349A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a vehicle automatic driving auxiliary positioning fusion method and a domain control system thereof, wherein the method comprises the following steps: acquiring various positioning-related data of a vehicle; preprocessing the various positioning-related data to obtain a GNSS factor, an IMU pre-integration factor and a preprocessing result; sequentially carrying out first NDT point cloud registration and pose calculation on the preprocessing result to obtain a pose calculation result; carrying out NDT map registration by using the pose calculation result to obtain an NDT map registration result and a laser odometry factor; constructing a sliding window map according to the NDT map registration result; performing second NDT point cloud registration on the sliding window map to obtain a second NDT point cloud registration result; performing loop closure detection on the second NDT point cloud registration result to generate a loop closure factor; performing constraint factor fusion and factor graph optimization on the four factors to obtain an optimization result; and generating a motion track of the vehicle according to the optimization result and the pose calculation result.

Description

Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof
Technical Field
The invention relates to the technical field of automatic driving, in particular to a vehicle automatic driving auxiliary positioning fusion method and a domain control system thereof.
Background
With the popularization of automatic driving technology and the growing complexity of vehicle environment-perception requirements, the automatic driving domain controller has become an important carrier for realizing automatic driving functions, bearing the computing-power and performance requirements of modules such as environment perception and fusion, decision planning, and chassis control. In terms of function, current automatic driving domain controllers mainly support various environment-perception sensors but do not integrate high-precision inertial navigation, satellite navigation and 4G RTK (real-time kinematic, i.e., carrier-phase differential technology); the degree of integration is not high enough, the sensors require additional power lines, and the system wiring harness remains complex. In terms of algorithms, simultaneous localization and mapping (SLAM) algorithms are studied more and more; some algorithms need to establish explicit matching relationships among feature points, and this explicit feature matching is the step most prone to errors.
Disclosure of Invention
The invention aims to provide a vehicle automatic driving auxiliary positioning fusion method and a domain control system thereof, so as to improve the real-time performance and accuracy of positioning data communication.
The technical scheme for solving the technical problems is as follows:
the invention provides a vehicle automatic driving auxiliary positioning fusion method, which comprises the following steps:
s1: acquiring various positioning related data of a vehicle;
s2: preprocessing the various positioning related data to obtain a GNSS factor, an IMU pre-integration factor and a preprocessing result;
s3: performing first NDT point cloud registration on the preprocessing result to obtain a first point cloud registration result;
s4: performing pose calculation on the first point cloud registration result to obtain a pose calculation result;
s5: carrying out NDT map registration by using the pose calculation result to obtain an NDT map registration result and a laser odometry factor;
s6: constructing a sliding window map according to the NDT map registration result;
s7: performing second NDT point cloud registration on the sliding window map to obtain a second NDT point cloud registration result;
s8: performing loop closure detection on the second NDT point cloud registration result to generate a loop closure factor;
s9: performing constraint factor fusion on the GNSS factor, the IMU pre-integration factor, the laser odometry factor and the loop closure factor to obtain a fusion result;
s10: performing factor graph optimization on the fusion result to obtain an optimization result;
s11: and generating a motion track of the vehicle according to the optimization result and the pose calculation result.
Optionally, in step S1, the plurality of positioning-related data of the vehicle includes: absolute pose, angular velocity, acceleration, and laser point cloud.
Optionally, in the step S2, the preprocessing operation includes coordinate transformation, pre-integration and distortion removal, and the step S2 includes:
s201: carrying out coordinate transformation on the absolute pose to obtain an initial pose and a GNSS factor;
s202: pre-integrating the angular velocity and the acceleration by utilizing an IMU pre-integration model to obtain a pre-integration result and an IMU pre-integration factor;
s203: performing motion estimation on the pre-integration result to obtain a motion estimation result;
s204: carrying out distortion removal on the laser point cloud and the pre-integration result to obtain a distortion removal result;
s205: performing feature calculation on the distortion removal result to obtain a feature calculation result;
s206: and outputting the initial pose, the motion estimation result and the feature calculation result as the preprocessing result.
Optionally, in step S202, the IMU pre-integration model includes:
$$v_{t+\Delta t} = v_t + g^{w}\,\Delta t + R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t$$

$$P_{t+\Delta t} = P_t + v_t\,\Delta t + \tfrac{1}{2}\,g^{w}\,\Delta t^{2} + \tfrac{1}{2}\,R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t^{2}$$

$$R_{t+\Delta t} = R_t^{w}\,\exp\!\left(\left(\hat{\omega}_t - b_t^{\omega} - n_t^{\omega}\right)\Delta t\right)$$

wherein $v_{t+\Delta t}$ represents the speed of the vehicle at time $t+\Delta t$; $P_{t+\Delta t}$ represents the position of the vehicle at time $t+\Delta t$; $R_{t+\Delta t}$ represents the rotation of the vehicle at time $t+\Delta t$; $v_t$ represents the speed of the vehicle at time $t$; $g^{w}$ represents the gravitational acceleration in the world coordinate system; $\Delta t$ represents a period of time; $R_t^{w}$ represents the rotation matrix from the inertial (body) frame to the world frame; $\hat{a}_t$ represents the raw acceleration measured by the IMU at that time, with $\hat{a}_t = \left(R_t^{w}\right)^{\!\top}\left(a_t - g^{w}\right) + b_t^{a} + n_t^{a}$; $b_t^{a}$ represents the slowly time-varying bias of the acceleration; $n_t^{a}$ represents the Gaussian white noise of the acceleration; $\hat{\omega}_t$ represents the raw angular velocity measured by the IMU at that time, with $\hat{\omega}_t = \omega_t + b_t^{\omega} + n_t^{\omega}$; $b_t^{\omega}$ represents the slowly time-varying bias of the angular velocity; and $n_t^{\omega}$ represents the Gaussian white noise of the angular velocity.
Optionally, in step S6, the sliding window is a fixed-size window that is set on the time axis and slides over time; only the variables in the window are optimized each time, and the remaining variables are marginalized.
Optionally, the step S8 includes:
s81: classifying the appearance of each grid cell by using the eigenvalue attributes of each grid cell in the second NDT point cloud registration result to obtain a classification result;
s82: constructing a similarity function between two frames according to the classification result;
s83: carrying out coarse loop closure detection by using the similarity function to obtain a coarse detection result;
s84: if the coarse detection result meets a preset threshold, proceeding to step S85;
s85: carrying out precise loop closure detection by using the sum of the distances from the mean value of each grid cell to the origin of coordinates to obtain a precise detection result, wherein the precise detection result comprises the loop closure factor.
Optionally, in the step S9, in the process of adding the laser odometry factor, only the current frame associated with the current state of the vehicle is added as a constraint factor in the graph, and the laser scan frames between two keyframes are not optimized.
The invention also provides a vehicle automatic driving domain control system using the vehicle automatic driving auxiliary positioning fusion method, the system comprising:
a positioning-related data acquisition module for acquiring a plurality of positioning-related data of a vehicle;
an autonomous driving processor for performing a series of processes on the plurality of positioning-related data of the vehicle to generate a motion track of the vehicle.
Optionally, the positioning-related data acquisition module comprises a GNSS + RTK unit for acquiring an absolute pose of the vehicle, an IMU unit and a sensor unit; the IMU unit is used for acquiring the angular speed and the acceleration of the vehicle, and the sensor unit is used for acquiring the laser point cloud of the vehicle.
The invention has the following beneficial effects:
the invention integrates the IMU, the GNSS and the 4G RTK real-time differential system in the automatic driving area controller, reduces the external line connection, improves the real-time performance of positioning data communication and achieves the aim of reducing the technical cost. The data fusion algorithm based on factor graph optimization represents the relationship between different nodes more intuitively, when the state quantity needs to be added, the factor graph can directly add factors on the basis of the original graph, and similarly, if the reliability of the measured value is low or the signal is lost, the factors only need to be simply reduced on the basis of the original graph, special programming or model modification is not needed, and the calculation quantity when the SLAM problem is processed is greatly reduced.
Drawings
FIG. 1 is a flow chart of a vehicle automatic driving assistance positioning fusion method according to the present invention;
FIG. 2 is a block diagram of the vehicle autopilot assistance positioning fusion of the present invention;
FIG. 3 is a diagram illustrating the process of factor graph optimization according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
The invention provides a vehicle automatic driving auxiliary positioning fusion method, which comprises the following steps:
s1: acquiring various positioning related data of a vehicle;
in the invention, the various positioning related data of the vehicle at least comprise absolute pose, angular velocity, acceleration and laser point cloud.
The absolute pose is acquired through a GNSS + RTK unit, and the angular velocity and the acceleration are acquired through an IMU unit; the laser point cloud is acquired by a sensor unit.
The GNSS + RTK unit is compatible with multiple GNSS constellations and frequency bands, so more satellites can be tracked and stability is better; it supports RTK high-precision positioning, and the accuracy can reach centimeter level.
The IMU unit can output 3-axis acceleration and 3-axis angular velocity, accurately describing the motion state of the vehicle; the acceleration and angular velocity information is fed into the automatic driving processor for fused SLAM positioning. Knowing the rotation and acceleration experienced during driving, the vehicle can dead-reckon its own pose; the IMU provides the most accurate measurements of angular velocity and acceleration and outputs them at a higher frequency than the other sensors. In the factor graph, the IMU supplies a good pose estimate and forms complementary constraints with the laser odometry factor.
Specifically, the GNSS + RTK unit adopts an INS-YI100C differential GPS unit.
Specifically, the IMU unit adopts a BW-127 inertial measurement unit from Beiwei Sensing (BWSENSING).
S2: preprocessing the various positioning related data to obtain a GNSS factor, an IMU pre-integration factor and a preprocessing result;
referring to fig. 2, the preprocessing operation includes coordinate transformation, pre-integration, and distortion removal, and thus, the step S2 includes:
s201: carrying out coordinate transformation on the absolute pose to obtain an initial pose and a GNSS factor;
Since the drift of the lidar odometry increases very slowly, GNSS factors do not have to be added continuously once they are generated. The invention adds GNSS factors at the initial position and at loop closure detection; in other operating conditions a GNSS factor is added only when the estimated position covariance is greater than the received GNSS position covariance.
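As a minimal sketch of this gating policy (the comparison by covariance trace and all names here are assumptions of this illustration, not the patent's implementation):

```python
import numpy as np

def should_add_gnss_factor(est_pos_cov: np.ndarray,
                           gnss_pos_cov: np.ndarray,
                           at_init: bool = False,
                           at_loop_closure: bool = False) -> bool:
    """Add a GNSS factor at initialization and at loop closure; otherwise
    only when the estimated position covariance exceeds the covariance
    reported with the GNSS fix (compared here by trace)."""
    if at_init or at_loop_closure:
        return True
    return float(np.trace(est_pos_cov)) > float(np.trace(gnss_pos_cov))

# A confident odometry estimate suppresses the GNSS factor:
print(should_add_gnss_factor(0.01 * np.eye(3), 0.04 * np.eye(3)))  # False
```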
S202: pre-integrating the angular velocity and the acceleration by utilizing an IMU pre-integration model to obtain a pre-integration result and an IMU factor;
the IMU pre-integration model comprises:
$$v_{t+\Delta t} = v_t + g^{w}\,\Delta t + R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t$$

$$P_{t+\Delta t} = P_t + v_t\,\Delta t + \tfrac{1}{2}\,g^{w}\,\Delta t^{2} + \tfrac{1}{2}\,R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t^{2}$$

$$R_{t+\Delta t} = R_t^{w}\,\exp\!\left(\left(\hat{\omega}_t - b_t^{\omega} - n_t^{\omega}\right)\Delta t\right)$$

wherein $v_{t+\Delta t}$ represents the speed of the vehicle at time $t+\Delta t$; $P_{t+\Delta t}$ represents the position of the vehicle at time $t+\Delta t$; $R_{t+\Delta t}$ represents the rotation of the vehicle at time $t+\Delta t$; $v_t$ represents the speed of the vehicle at time $t$; $g^{w}$ represents the gravitational acceleration in the world coordinate system; $\Delta t$ represents a period of time; $R_t^{w}$ represents the rotation matrix from the inertial (body) frame to the world frame; $\hat{a}_t$ represents the raw acceleration measured by the IMU at that time, with $\hat{a}_t = \left(R_t^{w}\right)^{\!\top}\left(a_t - g^{w}\right) + b_t^{a} + n_t^{a}$; $b_t^{a}$ represents the slowly time-varying bias of the acceleration; $n_t^{a}$ represents the Gaussian white noise of the acceleration; $\hat{\omega}_t$ represents the raw angular velocity measured by the IMU at that time, with $\hat{\omega}_t = \omega_t + b_t^{\omega} + n_t^{\omega}$; $b_t^{\omega}$ represents the slowly time-varying bias of the angular velocity; and $n_t^{\omega}$ represents the Gaussian white noise of the angular velocity.
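A direct numpy transcription of the propagation above (noise terms set to zero; the Rodrigues exponential map and the sample values are illustrative, not part of the patent):

```python
import numpy as np

def so3_exp(phi: np.ndarray) -> np.ndarray:
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def imu_propagate(P, v, R, a_hat, w_hat, b_a, b_w, g_w, dt):
    """One propagation step of the pre-integration model above."""
    a = a_hat - b_a                          # bias-corrected acceleration
    w = w_hat - b_w                          # bias-corrected angular rate
    v_new = v + g_w * dt + R @ a * dt
    P_new = P + v * dt + 0.5 * g_w * dt**2 + 0.5 * R @ a * dt**2
    R_new = R @ so3_exp(w * dt)
    return P_new, v_new, R_new

# Stationary vehicle with a gravity-compensating accelerometer reading:
g_w = np.array([0.0, 0.0, -9.81])
P, v, R = imu_propagate(np.zeros(3), np.zeros(3), np.eye(3),
                        np.array([0.0, 0.0, 9.81]), np.zeros(3),
                        np.zeros(3), np.zeros(3), g_w, 0.005)
```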
S203: performing motion estimation on the pre-integration result to obtain a motion estimation result;
s204: carrying out distortion removal on the laser point cloud and the pre-integration result to obtain a distortion removal result (a minimal deskewing sketch follows this list);
s205: performing feature calculation on the distortion removal result to obtain a feature calculation result;
s206: and outputting the initial pose, the motion estimation result and the feature calculation result as the preprocessing result.
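To make step S204 concrete, here is a minimal motion-distortion (deskew) sketch. It assumes per-point timestamps and constant angular and linear velocity over one sweep — assumptions of this example, not statements of the patent's method — and reuses the so3_exp helper from the previous sketch:

```python
import numpy as np

def deskew(points: np.ndarray, t_rel: np.ndarray,
           w: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Re-project each point into the frame at the start of the sweep.

    points: (N, 3) raw lidar points; t_rel: (N,) seconds since sweep start;
    w, v: angular and linear velocity taken from the IMU pre-integration result.
    """
    out = np.empty_like(points)
    for i in range(len(points)):
        R = so3_exp(w * t_rel[i])    # sensor attitude when the point was taken
        t = v * t_rel[i]             # sensor translation at that instant
        out[i] = R @ points[i] + t   # move the point into the sweep-start frame
    return out
```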
S3: performing first NDT point cloud registration on the preprocessing result to obtain a first point cloud registration result;
s4: carrying out pose calculation on the first point cloud registration result to obtain a pose calculation result;
s5: carrying out NDT map registration by using the pose calculation result to obtain an NDT map registration result and a laser odometry factor;
the laser odometry factor, like the IMU pre-integration factor, plays a crucial role in motion estimation. Compared with a GNSS factor, the laser odometer factor has obvious advantages in the estimation of the pose accuracy, and is not influenced by the obstruction of the obstacles in the environment. For the addition of the laser odometry factors, in order to ensure the real-time performance of the algorithm, in the process of adding the laser odometry factors, only the current frame associated with the current state of the vehicle is added as a constraint factor in the image, and the laser scanning frame between the two frames is not subjected to optimization calculation, so that the calculation efficiency is greatly improved. Meanwhile, the method helps to maintain a relatively sparse factor graph and is suitable for real-time nonlinear optimization.
S6: constructing a sliding window map according to the NDT map registration result;
The sliding window is a fixed-size window arranged on the time axis that slides along with time; only the variables inside the window are optimized each time, and the remaining variables are marginalized out. Because all the variables are re-linearized at each optimization iteration, the accumulated linearization error is small and accuracy is ensured; meanwhile, since the window size is fixed, the number of optimized variables is essentially unchanged, so the real-time requirement can be met.
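A minimal sketch of this window bookkeeping (the window length and the prior placeholder are assumptions of this example; real marginalization is a Schur complement over the departing variables):

```python
from collections import deque

class SlidingWindow:
    """Keep the most recent `size` keyframes; older ones leave the window."""

    def __init__(self, size: int = 10):
        self.size = size
        self.frames: deque = deque()
        self.prior = None                 # placeholder for marginalized info

    def push(self, keyframe) -> None:
        self.frames.append(keyframe)
        if len(self.frames) > self.size:
            # In a real system the departing state is marginalized via a
            # Schur complement and kept as a prior factor on the window.
            self.prior = self.frames.popleft()

    def active_variables(self) -> list:
        """Only these variables are re-linearized and optimized each step."""
        return list(self.frames)
```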
S7: performing second NDT point cloud registration on the sliding window map to obtain a second NDT point cloud registration result;
s8: performing loop closure detection on the second NDT point cloud registration result to generate a loop closure factor;
After a loop closure candidate matures, a loop closure factor is added; in practice this benefits the optimization of rotation and pitch angle. In the actual mapping process, a point cloud map built with loop closure factors performs well in scenes with large rotations and height changes.
S9: performing constraint factor fusion on the GNSS factor, the IMU pre-integration factor, the laser odometer factor and the closed-loop detection factor to obtain a fusion result;
s10: optimizing the factor graph of the fusion result to obtain an optimized result;
The combination of factor graph optimization and a sliding window is widely applied in various fusion positioning and mapping systems because of its good real-time performance and robustness. The invention therefore takes a sliding-window-matching Normal Distributions Transform (NDT) point cloud matching algorithm as its core, uses factor graph optimization as the multi-sensor fusion mechanism, and establishes a factor-graph-optimized SLAM framework based on multi-sensor fusion.
FIG. 3 is a schematic diagram of the factor graph optimization system. In the factor graph optimization, precise GNSS positioning information, IMU pre-integration information and loop closure information are fused with the laser odometry factor as correction factors, so that accumulated error can be largely eliminated when building a map of a complex large scene, realizing high-precision SLAM mapping. The GNSS provides absolute pose information, including the initial pose of the SLAM positioning system, which improves the relocalization capability of the unmanned vehicle.
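To illustrate steps S9 and S10, here is a toy 2D pose-graph optimization in Python using scipy — not the patent's solver (a deployed system would typically use an incremental factor graph back end), and the measurements, unit weights and planar state are assumptions of this example. Odometry between-factors, GNSS-like absolute priors and one loop closure constraint are fused in a single least-squares problem:

```python
import numpy as np
from scipy.optimize import least_squares

# Four planar poses (x, y, yaw). Odometry reports "1 m forward" per step,
# GNSS-like priors anchor poses 0 and 3, and one loop closure relates
# pose 3 back to pose 0. All numbers are illustrative.
ODOM = [(i, i + 1, np.array([1.0, 0.0, 0.0])) for i in range(3)]
LOOP = [(3, 0, np.array([-3.0, 0.0, 0.0]))]
PRIORS = [(0, np.array([0.0, 0.0])), (3, np.array([3.1, 0.0]))]

def residuals(x: np.ndarray) -> np.ndarray:
    P = x.reshape(-1, 3)
    res = []
    for i, j, z in ODOM + LOOP:                 # relative (between) factors
        dx, dy = P[j, 0] - P[i, 0], P[j, 1] - P[i, 1]
        c, s = np.cos(P[i, 2]), np.sin(P[i, 2])
        pred = np.array([c * dx + s * dy,       # pose j seen from pose i
                         -s * dx + c * dy,
                         P[j, 2] - P[i, 2]])
        res.extend(pred - z)
    for i, z in PRIORS:                         # absolute (GNSS-like) factors
        res.extend(P[i, :2] - z)
    return np.asarray(res)

sol = least_squares(residuals, np.zeros(12))    # all poses start at the origin
print(sol.x.reshape(-1, 3).round(3))            # poses spread out along x
```

In a deployed system the same structure would be maintained incrementally, so adding or dropping a factor is a local graph edit rather than a re-derivation of the whole model, which is exactly the flexibility claimed for the factor graph approach above.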
S11: and generating a motion track of the vehicle according to the optimization result and the pose calculation result.
Optionally, the step S8 includes the following sub-steps; a minimal sketch follows this list:
s81: classifying the appearance of each grid cell by using the eigenvalue attributes of each grid cell in the second NDT point cloud registration result to obtain a classification result;
s82: constructing a similarity function between two frames according to the classification result;
s83: carrying out coarse loop closure detection by using the similarity function to obtain a coarse detection result;
s84: if the coarse detection result meets a preset threshold, proceeding to step S85;
s85: carrying out precise loop closure detection by using the sum of the distances from the mean value of each grid cell to the origin of coordinates to obtain a precise detection result, wherein the precise detection result comprises the loop closure factor.
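A compact sketch of this two-stage check (the eigenvalue thresholds, histogram descriptor and both decision thresholds are assumptions of this example, not values from the patent):

```python
import numpy as np

def cell_class(cov: np.ndarray) -> int:
    """Classify a grid cell's appearance from its covariance eigenvalues:
    0 = linear, 1 = planar, 2 = spherical."""
    e = np.sort(np.linalg.eigvalsh(cov))        # e[0] <= e[1] <= e[2]
    if e[2] > 4.0 * e[1]:
        return 0                                 # one dominant direction
    if e[1] > 4.0 * e[0]:
        return 1                                 # two dominant directions
    return 2

def frame_descriptor(cells: list) -> np.ndarray:
    """Normalized histogram of cell classes; cells is a list of (mean, cov)."""
    h = np.zeros(3)
    for _, cov in cells:
        h[cell_class(cov)] += 1.0
    return h / max(h.sum(), 1.0)

def coarse_similarity(cells_a: list, cells_b: list) -> float:
    """Histogram intersection in [0, 1] as the between-frame similarity."""
    return float(np.minimum(frame_descriptor(cells_a),
                            frame_descriptor(cells_b)).sum())

def is_loop(cells_a: list, cells_b: list,
            sim_thresh: float = 0.9, dist_tol: float = 0.5) -> bool:
    """Coarse check on the similarity function, then the precise check on
    the summed distances from the cell means to the coordinate origin."""
    if coarse_similarity(cells_a, cells_b) < sim_thresh:
        return False
    da = sum(np.linalg.norm(mu) for mu, _ in cells_a)
    db = sum(np.linalg.norm(mu) for mu, _ in cells_b)
    return abs(da - db) < dist_tol
```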
Optionally, in the step S9, in the process of adding the laser odometry factor, only the current frame associated with the current state of the vehicle is added as a constraint factor in the graph, and the laser scan frames between two keyframes are not included in the optimization calculation.
The invention also provides a vehicle automatic driving domain control system using the above vehicle automatic driving auxiliary positioning fusion method, the system comprising:
a positioning-related data acquisition module for acquiring a plurality of positioning-related data of a vehicle;
an autonomous driving processor for performing a series of processes on the plurality of positioning-related data of the vehicle to generate a motion track of the vehicle.
Optionally, the positioning-related data acquisition module comprises a GNSS + RTK unit for acquiring an absolute pose of the vehicle, an IMU unit and a sensor unit; the IMU unit is used for acquiring the angular speed and the acceleration of the vehicle, and the sensor unit is used for acquiring the laser point cloud of the vehicle.
In particular, in practical applications, the automatic driving domain controller may include an automatic driving processor, a microcontroller, a GNSS + RTK positioning module, an IMU module, as well as automatic driving sensors, a drive-by-wire chassis and other external devices.
As a specific embodiment, the automatic driving processor of the invention adopts the Xavier system-on-chip for embedded intelligent autonomous systems developed by NVIDIA. The chip's capabilities include: an eight-core CPU based on the ARMv8 ISA; a Deep Learning Accelerator (DLA) delivering 5 TOPS (FP16) / 10 TOPS (INT8); a Volta GPU with 512 CUDA cores delivering 1.3 TFLOPS (FP32) with INT8 support; a vision processor at 1.6 TOPS; a Stereo and Optical Flow Engine (SOFE) at 6 TOPS; an Image Signal Processor (ISP) at 1.5 GPix/s; a video encoder at 1.2 GPix/s; and a video decoder at 1.8 GPix/s.
The microcontroller adopts an Infineon TC297-series chip, featuring a three-core TriCore architecture with a 300 MHz operating frequency and 728 KB of RAM plus 8 MB of flash memory with ECC (error-correcting code) protection. It is designed according to the ISO 26262 standard and supports the highest safety level, ASIL-D, implementing a hardware safety architecture in combination with a companion base chip.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A vehicle automatic driving auxiliary positioning fusion method is characterized by comprising the following steps:
s1: acquiring various positioning related data of a vehicle;
s2: preprocessing the various positioning related data to obtain a GNSS factor, an IMU pre-integration factor and a preprocessing result;
s3: performing first NDT point cloud registration on the preprocessing result to obtain a first point cloud registration result;
s4: performing pose calculation on the first point cloud registration result to obtain a pose calculation result;
s5: carrying out NDT map registration by using the pose calculation result to obtain an NDT map registration result and a laser odometry factor;
s6: constructing a sliding window map according to the NDT map registration result;
s7: performing second NDT point cloud registration on the sliding window map to obtain a second NDT point cloud registration result;
s8: performing loop closure detection on the second NDT point cloud registration result to generate a loop closure factor;
s9: performing constraint factor fusion on the GNSS factor, the IMU pre-integration factor, the laser odometry factor and the loop closure factor to obtain a fusion result;
s10: performing factor graph optimization on the fusion result to obtain an optimization result;
s11: and generating a motion track of the vehicle according to the optimization result and the pose calculation result.
2. The vehicle automatic driving auxiliary positioning fusion method according to claim 1, wherein in the step S1, the plurality of positioning related data of the vehicle comprises: absolute pose, angular velocity, acceleration, and laser point cloud.
3. The vehicle automatic driving auxiliary positioning fusion method according to claim 2, wherein in the step S2, the preprocessing operation includes coordinate transformation, pre-integration and distortion removal, and the step S2 includes:
s201: carrying out coordinate transformation on the absolute pose to obtain an initial pose and a GNSS factor;
s202: pre-integrating the angular velocity and the acceleration by utilizing an IMU pre-integration model to obtain a pre-integration result and an IMU pre-integration factor;
s203: performing motion estimation on the pre-integration result to obtain a motion estimation result;
s204: carrying out distortion removal on the laser point cloud and the pre-integration result to obtain a distortion removal result;
s205: performing feature calculation on the distortion removal result to obtain a feature calculation result;
s206: and outputting the initial pose, the motion estimation result and the feature calculation result as the preprocessing result.
4. The vehicle automatic driving auxiliary positioning fusion method according to claim 3, wherein in step S202, the IMU pre-integration model comprises:
$$v_{t+\Delta t} = v_t + g^{w}\,\Delta t + R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t$$

$$P_{t+\Delta t} = P_t + v_t\,\Delta t + \tfrac{1}{2}\,g^{w}\,\Delta t^{2} + \tfrac{1}{2}\,R_t^{w}\left(\hat{a}_t - b_t^{a} - n_t^{a}\right)\Delta t^{2}$$

$$R_{t+\Delta t} = R_t^{w}\,\exp\!\left(\left(\hat{\omega}_t - b_t^{\omega} - n_t^{\omega}\right)\Delta t\right)$$

wherein $v_{t+\Delta t}$ represents the speed of the vehicle at time $t+\Delta t$; $P_{t+\Delta t}$ represents the position of the vehicle at time $t+\Delta t$; $R_{t+\Delta t}$ represents the rotation of the vehicle at time $t+\Delta t$; $v_t$ represents the speed of the vehicle at time $t$; $g^{w}$ represents the gravitational acceleration in the world coordinate system; $\Delta t$ represents a period of time; $R_t^{w}$ represents the rotation matrix from the inertial (body) frame to the world frame; $\hat{a}_t$ represents the raw acceleration measured by the IMU at that time, with $\hat{a}_t = \left(R_t^{w}\right)^{\!\top}\left(a_t - g^{w}\right) + b_t^{a} + n_t^{a}$; $b_t^{a}$ represents the slowly time-varying bias of the acceleration; $n_t^{a}$ represents the Gaussian white noise of the acceleration; $\hat{\omega}_t$ represents the raw angular velocity measured by the IMU at that time, with $\hat{\omega}_t = \omega_t + b_t^{\omega} + n_t^{\omega}$; $b_t^{\omega}$ represents the slowly time-varying bias of the angular velocity; and $n_t^{\omega}$ represents the Gaussian white noise of the angular velocity.
5. The vehicle automatic driving auxiliary positioning fusion method according to claim 1, wherein in step S6, the sliding window is a fixed-size window that is set on the time axis and slides along with time; only the variables in the window are optimized each time, and the remaining variables are marginalized.
6. The vehicle automatic driving auxiliary positioning fusion method according to claim 1, wherein the step S8 comprises:
s81: classifying the appearance of each grid cell by using the eigenvalue attributes of each grid cell in the second NDT point cloud registration result to obtain a classification result;
s82: constructing a similarity function between two frames according to the classification result;
s83: carrying out coarse loop closure detection by using the similarity function to obtain a coarse detection result;
s84: if the coarse detection result meets a preset threshold, proceeding to step S85;
s85: carrying out precise loop closure detection by using the sum of the distances from the mean value of each grid cell to the origin of coordinates to obtain a precise detection result, wherein the precise detection result comprises the loop closure factor.
7. The vehicle automatic driving auxiliary positioning fusion method according to any one of claims 1 to 6, wherein in step S9, in the process of adding the laser odometry factor, only the current frame associated with the current state of the vehicle is added as a constraint factor in the graph, and the laser scan frames between two keyframes are not optimized.
8. A vehicle automatic driving domain control system using the vehicle automatic driving auxiliary positioning fusion method according to any one of claims 1 to 7, characterized in that the vehicle automatic driving domain control system comprises:
a positioning-related data acquisition module for acquiring a plurality of positioning-related data of a vehicle;
an autonomous driving processor for performing a series of processes on the plurality of positioning-related data of the vehicle to generate a motion track of the vehicle.
9. The vehicle automatic driving domain control system according to claim 8, wherein the positioning-related data acquisition module comprises a GNSS + RTK unit for acquiring the absolute pose of the vehicle, an IMU unit and a sensor unit; the IMU unit is used for acquiring the angular velocity and the acceleration of the vehicle, and the sensor unit is used for acquiring the laser point cloud of the vehicle.
CN202210939943.5A 2022-08-05 2022-08-05 Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof Pending CN115311349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210939943.5A CN115311349A (en) 2022-08-05 2022-08-05 Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210939943.5A CN115311349A (en) 2022-08-05 2022-08-05 Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof

Publications (1)

Publication Number Publication Date
CN115311349A 2022-11-08

Family

ID=83861158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210939943.5A Pending CN115311349A (en) 2022-08-05 2022-08-05 Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof

Country Status (1)

Country Link
CN (1) CN115311349A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115601433A (en) * 2022-12-12 2023-01-13 安徽蔚来智驾科技有限公司(Cn) Loop detection method, computer device, computer-readable storage medium and vehicle
CN117671013A (en) * 2024-02-01 2024-03-08 安徽蔚来智驾科技有限公司 Point cloud positioning method, intelligent device and computer readable storage medium
CN117671013B (en) * 2024-02-01 2024-04-26 安徽蔚来智驾科技有限公司 Point cloud positioning method, intelligent device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN113945206B (en) Positioning method and device based on multi-sensor fusion
CN109341706B (en) Method for manufacturing multi-feature fusion map for unmanned vehicle
CN109061703B (en) Method, apparatus, device and computer-readable storage medium for positioning
CN109696663B (en) Vehicle-mounted three-dimensional laser radar calibration method and system
CN109709801B (en) Indoor unmanned aerial vehicle positioning system and method based on laser radar
CN115311349A (en) Vehicle automatic driving auxiliary positioning fusion method and domain control system thereof
KR20220053513A (en) Image data automatic labeling method and device
CN112639502A (en) Robot pose estimation
CN111272165A (en) Intelligent vehicle positioning method based on characteristic point calibration
CN112462372B (en) Vehicle positioning method and device
CN113865580A (en) Map construction method and device, electronic equipment and computer readable storage medium
CN111461048B (en) Vision-based parking lot drivable area detection and local map construction method
CN108827339B (en) High-efficient vision odometer based on inertia is supplementary
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN112378397B (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
CN112596071A (en) Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
Li et al. Robust localization for intelligent vehicles based on compressed road scene map in urban environments
CN114323033A (en) Positioning method and device based on lane lines and feature points and automatic driving vehicle
CN116359905A (en) Pose map SLAM (selective level mapping) calculation method and system based on 4D millimeter wave radar
CN114915913A (en) UWB-IMU combined indoor positioning method based on sliding window factor graph
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
CN116608873A (en) Multi-sensor fusion positioning mapping method for automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination