CN113819905A - Multi-sensor fusion-based odometer method and device - Google Patents


Info

Publication number
CN113819905A
Authority
CN
China
Prior art keywords
movable object
pose
coordinate system
constraint
imu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010568308.1A
Other languages
Chinese (zh)
Other versions
CN113819905B (en)
Inventor
刘光伟
赵季
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tusen Weilai Technology Co Ltd
Original Assignee
Beijing Tusen Weilai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tusen Weilai Technology Co Ltd filed Critical Beijing Tusen Weilai Technology Co Ltd
Priority to CN202010568308.1A priority Critical patent/CN113819905B/en
Publication of CN113819905A publication Critical patent/CN113819905A/en
Application granted granted Critical
Publication of CN113819905B publication Critical patent/CN113819905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a multi-sensor fusion-based odometer method and device, and relates to the technical field of high-precision maps. The method is applied to a movable object carrying a plurality of sensors, and comprises the following steps: acquiring sensor data acquired by various sensors carried on a movable object in real time; modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object; and carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object. The method and the device can realize real-time pose estimation of the movable object in scenes with sparse features and poor GPS signals, and have accurate results and good robustness.

Description

Multi-sensor fusion-based odometer method and device
Technical Field
The application relates to the technical field of high-precision maps, in particular to a multi-sensor fusion-based odometer method and device.
Background
At present, with the development of automatic driving and intelligent robot technology, how to ensure that autonomous vehicles and intelligent robots drive accurately has become a hot issue. Automatic driving generally relies on high-precision maps. Unlike a traditional navigation map, a high-precision map contains a large amount of driving-assistance information, the most important of which is an accurate three-dimensional representation of the road network, such as intersection layouts and road sign positions. A high-precision map also contains rich semantic information: it can report the meaning of the different colors of traffic lights, indicate road speed limits, mark where a left-turn lane begins, and so on. One of the most important features of a high-precision map is its precision, which allows an autonomous vehicle to localize with centimeter-level accuracy and is essential for its safety.
In the fields of automatic driving and robotics, constructing a high-precision map generally requires odometry technology. Conventional odometry technologies include visual odometry, visual-inertial odometry, laser odometry and laser-inertial odometry. In road scenes, visual features are sparse and vehicle speeds are high, so visual odometry and visual-inertial odometry can hardly guarantee accurate and robust pose estimation; laser odometry and laser-inertial odometry are therefore mainly used in the field of automatic driving. When applying laser odometry and laser-inertial odometry to automatic driving, the inventors found that ordinary road scenes usually contain markers such as lamp posts, guardrails, flower beds and trees, from which the lidar can establish relatively accurate geometric constraints. However, in scenes with sparse features and poor GPS signals, such as tunnels, sea-crossing bridges, deserts and the Gobi, no similar markers exist, stable features are difficult to extract from the lidar observations, and accurate geometric constraints cannot be constructed. In these scenes, conventional laser odometry and laser-inertial odometry degrade, cannot perform accurate pose estimation, and cannot meet the requirements for high-precision map construction in automatic driving.
Disclosure of Invention
The embodiment of the application provides a multi-sensor fusion-based odometer method and device, and the problem of inaccurate pose estimation in scenes with sparse features and poor GPS signals can be solved.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect of the embodiments of the present application, there is provided a multi-sensor fusion-based odometer method applied to a movable object carrying multiple sensors, the method including:
acquiring sensor data acquired by various sensors carried on a movable object in real time;
modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object;
and carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.
In addition, according to a second aspect of the embodiments of the present application, there is provided a multi-sensor fusion-based odometer device applied to a movable object having a plurality of types of sensors mounted thereon, the device including:
the sensor data acquisition unit is used for acquiring sensor data acquired by various sensors carried on the movable object in real time;
the constraint relation establishing unit is used for respectively modeling sensor data acquired by various sensors and establishing a constraint relation of the pose of the movable object;
and the joint optimization unit is used for performing joint optimization solution on the constraint relation of the pose of the movable object and determining the pose result of the movable object.
In addition, according to a third aspect of embodiments of the present application, there is provided a computer-readable storage medium including a program or instructions for implementing the multi-sensor fusion-based odometry method according to the first aspect when the program or instructions are run on a computer.
In addition, in a fourth aspect of embodiments of the present application, there is provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the multi-sensor fusion-based odometry method according to the first aspect.
Additionally, a fifth aspect of embodiments herein provides a computer server comprising a memory, and one or more processors communicatively coupled to the memory; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement a multi-sensor fusion based odometry method as described in the first aspect above.
According to the odometer method and device based on multi-sensor fusion, sensor data collected by various sensors carried on a movable object are obtained in real time, then the sensor data collected by the various sensors can be modeled respectively, a constraint relation of the position and the attitude of the movable object is established, and therefore the constraint relation of the position and the attitude of the movable object can be subjected to combined optimization solution, and the position and attitude result of the movable object is determined. By the method and the device, the real-time pose estimation of the movable object in scenes with sparse features and poor GPS signals can be realized, the result is accurate, and the robustness is good.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a first flowchart of a multi-sensor fusion-based odometry method provided in an embodiment of the present application;
fig. 2 is a second flowchart of a multi-sensor fusion-based odometry method provided in the embodiment of the present application;
fig. 3 is a schematic diagram of a tunnel scenario in an embodiment of the present application;
FIG. 4 is a graph illustrating a comparison of results obtained from a prior art method used in a tunnel scenario in an embodiment of the present application with a multi-sensor fusion-based odometry method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an odometer device based on multi-sensor fusion according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the present application better understood by those skilled in the art, some technical terms appearing in the embodiments of the present application are explained below:
a movable object: the mobile robot is an object capable of carrying out map acquisition, such as a vehicle, a mobile robot, an aircraft and the like, and various sensors, such as a laser radar, a camera and the like, can be carried on the movable object.
ICP: Iterative Closest Point, an algorithm mainly used in computer vision for accurate registration of depth images; accurate alignment is achieved by iteratively minimizing the distance between corresponding points of the source data and the target data. Many variants exist, and how to obtain a good registration efficiently and robustly remains a main research topic.
GNSS: global Navigation Satellite System, Global Navigation Satellite System.
GPS: global Positioning System, Global Positioning System.
IMU: Inertial Measurement Unit, a device for measuring the three-axis attitude angles (or angular velocities) and accelerations of an object.
High-precision map: different from a traditional navigation map, a high-precision map contains a large amount of driving-assistance information, the most important of which is an accurate three-dimensional representation of the road network, such as intersection layouts and road sign positions. A high-precision map also contains rich semantic information: it can report the meaning of the different colors of traffic lights, indicate road speed limits, mark where a left-turn lane begins, and so on. One of its most important features is precision, which allows a vehicle to localize with centimeter-level accuracy and is essential for the safety of an autonomous vehicle.
Mapping (Mapping): and constructing a high-precision map describing the current scene according to the estimated real-time pose of the vehicle or the mobile robot and the acquired data of the vision sensors such as the laser radar and the like.
Pose (Pose): the general term for position and orientation includes 6 degrees of freedom, including 3 degrees of freedom for position and 3 degrees of freedom for orientation. The 3 degrees of freedom of orientation are typically expressed in Pitch (Pitch), Roll (Roll), Yaw (Yaw).
Frame (Frame): the measurement data received when a sensor completes one observation; for example, one frame of camera data is one image, and one frame of lidar data is one set of laser points (a point cloud).
Sub-map (Submap): the global map is composed of a plurality of sub-maps, and each sub-map comprises observation results of continuous multiple frames.
Registration (Registration): and matching the observation results of the same area at different moments and different positions to obtain the relative pose relationship between the two observation moments.
NDT: the Normal distribution Transform, a Normal distribution transformation algorithm, is a registration algorithm that is applied to a statistical model of three-dimensional points, using standard optimization techniques to determine the optimal match between two point clouds.
NovAtel: a leading supplier of products and technology in the field of precision Global Navigation Satellite Systems (GNSS) and their subsystems. The embodiments of the present application use a NovAtel integrated navigation system.
LOAM: LiDAR Odometry and Mapping, laser odometry and mapping.
KD-tree: a K-Dimensional tree is a data structure for partitioning a K-Dimensional data space. The method is mainly applied to searching of multidimensional space key data (such as range searching and nearest neighbor searching).
SVD: singular Value Decomposition, is an important matrix Decomposition in linear algebra.
LIO-Mapping: Lidar-Inertial Odometry and Mapping.
Odometer (odometer): a method of estimating the pose of a movable object using data obtained from sensors of the object.
In some embodiments of the present application, the term "vehicle" is to be broadly interpreted to include any moving object, including, for example, an aircraft, a watercraft, a spacecraft, an automobile, a truck, a van, a semi-trailer, a motorcycle, a golf cart, an off-road vehicle, a warehouse transport vehicle or a farm vehicle, and a vehicle traveling on a track, such as a tram or train, and other rail vehicles. The "vehicle" in the present application may generally include: power systems, sensor systems, control systems, peripheral devices, and computer systems. In other embodiments, the vehicle may include more, fewer, or different systems.
The power system is the system that provides powered motion for the vehicle, and includes an engine/motor, a transmission, wheels/tires and a power unit.
The control system may comprise a combination of devices controlling the vehicle and its components, such as a steering unit, a throttle, a brake unit.
The peripheral devices may be devices that allow the vehicle to interact with external sensors, other vehicles, external computing devices, and/or users, such as wireless communication systems, touch screens, microphones, and/or speakers.
In addition to the systems described above, an autonomous vehicle is also provided with a sensor system and an automatic driving control device.
The sensor system may include a plurality of sensors for sensing information about the environment in which the vehicle is located, and one or more actuators for changing the position and/or orientation of the sensors. The sensor system may include any combination of sensors such as global positioning system sensors, inertial measurement units, radio detection and ranging (RADAR) units, cameras, laser rangefinders, light detection and ranging (LIDAR) units, and/or acoustic sensors; the sensor system may also include sensors that monitor the vehicle's internal systems (e.g., O2 monitors, fuel gauges, engine thermometers, etc.).
The automatic driving control device may include a processor and a memory, the memory storing at least one machine-executable instruction, and the processor executing the at least one machine-executable instruction to provide functions including a map engine, a positioning module, a perception module, a navigation or path module, and an automatic driving control module. The map engine and the positioning module provide map information and positioning information. The perception module perceives things in the environment where the vehicle is located according to the information acquired by the sensor system and the map information provided by the map engine. The navigation or path module plans a driving path for the vehicle according to the processing results of the map engine, the positioning module and the perception module. The automatic driving control module receives and analyzes the decision information of modules such as the navigation or path module, converts it into control commands for the vehicle control system, and sends the control commands to the corresponding components of the vehicle control system through the in-vehicle network (for example, an electronic network system in the vehicle implemented by a CAN (controller area network) bus, a local interconnect network, media-oriented systems transport, and the like), so as to realize automatic control of the vehicle; the automatic driving control module can also acquire information about each component in the vehicle through the in-vehicle network.
At present, in scenes with sparse features and poor GPS signals (such as tunnels, sea-crossing bridges, deserts and the Gobi), the laser odometry and laser-inertial odometry methods commonly used in the field of automatic driving generally degrade, cannot perform accurate pose estimation, and may even lose the pose estimate over time, so they cannot meet the requirements of high-precision map construction in automatic driving.
The embodiment of the application aims to provide a multi-sensor fusion-based odometry method and a multi-sensor fusion-based odometry device, so as to solve the problem that in the prior art, a laser odometry method and a laser inertia odometry method which are commonly used for automatic driving cannot accurately estimate the pose in scenes with sparse features and poor GPS signals.
As shown in fig. 1, an embodiment of the present application provides a multi-sensor fusion-based odometer method, which is applied to a movable object carrying multiple sensors, and includes:
step 101, acquiring sensor data acquired by various sensors carried on a movable object in real time.
And 102, modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object.
And 103, carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.
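For illustration, the following minimal Python sketch shows how the three steps above might be organized into one fusion loop. The function and variable names, the dummy constraints and the placeholder solver are illustrative assumptions, not part of the patented implementation.

```python
import numpy as np

def fuse_odometry_step(pose, residual_builders, solve):
    """One iteration of the three-step loop of Fig. 1: the caller acquires the sensor
    data (step 101), each builder turns one sensor stream into a residual on the pose
    (step 102), and all residuals are solved jointly to update the pose (step 103)."""
    constraints = [build() for build in residual_builders]   # step 102: per-sensor constraints
    return solve(pose, constraints)                          # step 103: joint optimization

if __name__ == "__main__":
    pose0 = np.zeros(6)                                      # [x, y, z, roll, pitch, yaw]
    builders = [lambda: ("roll_pitch", np.zeros(2)),
                lambda: ("ackermann", np.zeros(3)),
                lambda: ("lidar", np.zeros(6)),
                lambda: ("barometer", np.zeros(1))]
    solve = lambda pose, constraints: pose                   # stand-in for the joint optimizer
    print(fuse_odometry_step(pose0, builders, solve))
```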
In order to help those skilled in the art better understand the present application, the embodiments are described below with reference to specific examples. As shown in fig. 2, an embodiment of the present application provides a multi-sensor fusion-based odometry method applied to a movable object carrying multiple sensors, which may include an inertial measurement unit (IMU), a wheel speed meter, a lidar and a barometer; the IMU includes an accelerometer and a gyroscope.
The method comprises the following steps:
step 201, obtaining triaxial acceleration data measured by an accelerometer, triaxial angular velocity data measured by a gyroscope, wheel speed data of a movable object measured by a wheel speed meter, point cloud data measured by a laser radar and height observation data measured by a barometer in real time.
After step 201, steps 202 to 205 are continued.
Step 202, modeling is carried out according to triaxial acceleration data measured by the accelerometer, and roll angle constraint and pitch angle constraint of the movable object are established.
The accelerometer in the IMU can measure three-axis acceleration data under an IMU coordinate system in real time, the measured three-axis acceleration data generally consists of two parts, namely gravity acceleration and self acceleration of the movable object, but the self acceleration of the movable object is usually far less than the gravity acceleration, so the influence of the self acceleration of the movable object can be ignored.
Specifically, step 202 here can be implemented as follows:
Modeling is performed according to the triaxial acceleration data measured by the accelerometer. The established mathematical model has the following relationship:

[a_x, a_y, a_z]^T = (R_I^W)^(-1) · g + a_r

where a_x, a_y, a_z represent the triaxial acceleration data measured by the accelerometer; R_I^W is the rotation matrix from the IMU coordinate system to the world coordinate system; g represents the normalized gravitational acceleration; and a_r represents the vehicle body acceleration.
By simplifying the mathematical model, the roll angle estimate θ_roll and the pitch angle estimate θ_pitch of the IMU in the world coordinate system can be determined directly from the triaxial accelerations a_x, a_y, a_z measured by the accelerometer.
In order to reduce the degrees of freedom of the joint optimization in subsequent steps and avoid rapid degradation of the odometry method due to feature sparsity in scenes such as tunnels and sea-crossing bridges, the present application proposes adding the roll angle estimate θ_roll and the pitch angle estimate θ_pitch as fixed constraints to the subsequent joint optimization. In addition, since the attitude state variable is represented by a quaternion in the joint optimization, the quaternion needs to be converted into a rotation matrix and then into Euler-angle form. The roll angle constraint r_Roll(X) and the pitch angle constraint r_Pitch(X) of the movable object can then be established from θ_roll and θ_pitch as:

r_Roll(X) = θ_roll − arcsin(−R13); r_Pitch(X) = θ_pitch − arctan2(R23, R33)

where X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized, comprising a position p and an attitude q; R is the rotation-matrix form of the attitude q in the state variable X to be optimized; and R13, R23, R33 are the elements of the corresponding rows and columns of the rotation matrix R.
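As a concrete illustration of this constraint, the Python sketch below computes roll and pitch estimates from one accelerometer sample and evaluates the residuals r_Roll and r_Pitch stated above. The gravity-alignment formula used for θ_roll and θ_pitch is a common convention assumed here, since the exact expression is not reproduced in this text; the residuals follow the formulas given above, with R indexed zero-based.

```python
import numpy as np

def roll_pitch_from_accel(a):
    """Estimate roll/pitch from one accelerometer sample, assuming the measurement is
    dominated by gravity (vehicle acceleration neglected, as argued in the text).
    The specific arctan/arcsin convention below is an assumption, not the patent's formula."""
    ax, ay, az = a
    theta_roll = np.arctan2(ay, az)
    theta_pitch = np.arctan2(-ax, np.hypot(ay, az))
    return theta_roll, theta_pitch

def roll_pitch_residuals(theta_roll, theta_pitch, R):
    """Residuals as written in the description:
    r_Roll = θ_roll − arcsin(−R13), r_Pitch = θ_pitch − arctan2(R23, R33),
    where R is the rotation-matrix form of the attitude q (0-based indexing here)."""
    r_roll = theta_roll - np.arcsin(-R[0, 2])
    r_pitch = theta_pitch - np.arctan2(R[1, 2], R[2, 2])
    return np.array([r_roll, r_pitch])
```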
And step 203, performing kinematic modeling by using an Ackerman model according to the triaxial angular velocity data measured by the gyroscope and the wheel speed data of the movable object measured by the wheel speed meter, and establishing Ackerman model constraint of the horizontal position and the yaw angle of the movable object.
Specifically, step 203 here can be implemented as follows:
the application can be used for carrying out the kinematic modeling of the movable object based on the Ackerman model. For the convenience of calculation, in the ackermann kinematic model, a vehicle body coordinate system is generally established with the center of the rear axis of the movable object (for example, the rear axis of the vehicle) as the origin.
In general, the default inputs of the Ackerman kinematics model are the speed of the movable object and the steering wheel angle. In practice, however, the inventors found that the accuracy of the steering wheel angle is usually difficult to guarantee, so to improve the accuracy and robustness of the whole odometry method, the present application replaces the steering wheel angle with the integrated angle between the advancing direction of the movable object and the y-axis of the world coordinate system. This angle integral θ_i is determined from the triaxial angular velocity data measured by the gyroscope by accumulating, over the times t up to the i-th time, the yaw rate ω_t^yaw obtained after rotating the gyroscope measurement from the IMU coordinate system into the vehicle body coordinate system with the previously known rotation R_B^I (the rotation transformation from the vehicle body coordinate system to the IMU coordinate system). Here θ_i denotes the integrated angle between the advancing direction of the movable object at the i-th time and the y-axis, t denotes the t-th time, and ω_t^yaw denotes the yaw component of the triaxial angular velocity measured by the gyroscope at the t-th time.
Then, in the Ackerman kinematics model, the speed v_i of the rear axle center of the movable object in the vehicle body coordinate system can be determined from the speed v_i^rl of the left rear wheel and the speed v_i^rr of the right rear wheel at the i-th time, measured by the wheel speed meter in the vehicle body coordinate system:

v_i = (v_i^rl + v_i^rr) / 2 + n_v

where n_v is previously known velocity noise.
Then, by kinematic modeling with the Ackerman model, the pose transfer equation of the movable object in the world coordinate system can be determined:

x_{i+1} = x_i + v_i · Δt · sin θ_i
y_{i+1} = y_i + v_i · Δt · cos θ_i

where Δt is the time difference between two adjacent measurement times of the wheel speed meter, and x_i, y_i represent the horizontal position of the movable object in the world coordinate system.
Since the measurement frequency of the IMU and the wheel speed meter is usually higher than that of the lidar, x_i, y_i and θ_i can be integrated between the k-th and (k+1)-th times of two adjacent lidar frames to determine their respective changes δx_k(k+1), δy_k(k+1) and δθ_k(k+1) in the world coordinate system.
Then, the pose transformation from the IMU coordinate system to the vehicle body coordinate system can be determined from the extrinsic parameters between the vehicle body coordinate system and the IMU coordinate system, and from it the pose transformation of the IMU between the k-th and (k+1)-th times in the world coordinate system can be determined. The Ackerman model constraint r_Akerman(X) of the movable object can thus be established, where X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized; for example, X_k and X_{k+1} in the constraint are the poses of the IMU in the world coordinate system at the k-th and (k+1)-th times, respectively.
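The Ackerman-model propagation described above can be sketched in a few lines of Python. The averaging of the two rear-wheel speeds to obtain the rear-axle-center speed is the usual Ackermann-model relation assumed here, and the function name and signature are illustrative.

```python
import numpy as np

def ackermann_dead_reckoning(x, y, theta, v_rl, v_rr, yaw_rate, dt, v_noise=0.0):
    """One propagation step of the Ackerman model sketched from the description.

    v_rl / v_rr: rear-left / rear-right wheel speeds from the wheel speed meter,
    yaw_rate: yaw rate already rotated into the vehicle body frame,
    dt: time between two adjacent wheel-speed measurements."""
    v = 0.5 * (v_rl + v_rr) + v_noise           # rear-axle-center speed (assumed average)
    x_next = x + v * dt * np.sin(theta)         # x_{i+1} = x_i + v_i * Δt * sin θ_i
    y_next = y + v * dt * np.cos(theta)         # y_{i+1} = y_i + v_i * Δt * cos θ_i
    theta_next = theta + yaw_rate * dt          # integrated heading w.r.t. the world y-axis
    return x_next, y_next, theta_next
```

Integrating this step between two lidar times k and k+1 yields the increments δx_k(k+1), δy_k(k+1), δθ_k(k+1) used by the constraint.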
And 204, modeling according to the point cloud data measured by the laser radar, and establishing laser radar pose constraint of the movable object.
Here, the step 204 may be implemented as follows, for example, including the following steps:
step 2041, performing motion compensation on each frame of point cloud data measured by the laser radar, and determining the position of each point in each frame of point cloud data after motion compensation.
The reason why motion compensation is required is that a lidar is generally a mechanical device and needs a certain time (usually 0.1 s or 0.05 s) to complete one frame of scanning. Because the movable object (for example, a vehicle) moves at high speed during this time, the acquired raw lidar data are affected by the motion and the measured values deviate from the true values. To reduce this influence, the pose transformation of the IMU in the world coordinate system estimated from the Ackerman model can be used to motion-compensate the raw lidar measurements. Because the time interval between two scans is very short, the motion between two frames can be assumed to be linear; the pose of each point acquired within one lidar frame relative to the start time of the frame can then be obtained by timestamp interpolation, so that all points acquired in one frame are transformed to the start time of that frame and the motion-compensated position of each point is determined.
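A minimal sketch of this timestamp-interpolation motion compensation is given below, assuming per-point timestamps normalized to [0, 1] within the frame and start/end poses predicted from the Ackerman/IMU model. The interpolation scheme (slerp for rotation, linear interpolation for translation) is a standard choice assumed here, not necessarily the exact one used.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, point_times, R0, t0, R1, t1):
    """Motion-compensate one lidar frame under the linear-motion assumption in the text.

    points: (N, 3) raw points in the lidar frame; point_times: (N,) per-point timestamps
    normalized to [0, 1]; (R0, t0) and (R1, t1): sensor poses at frame start and end.
    Returns all points expressed at the frame-start time."""
    rots = Rotation.from_matrix(np.stack([R0, R1]))
    slerp = Slerp([0.0, 1.0], rots)                                       # rotation over the frame
    R_t = slerp(point_times).as_matrix()                                  # (N, 3, 3) per-point rotations
    t_t = (1.0 - point_times)[:, None] * t0 + point_times[:, None] * t1  # linear translation
    world = np.einsum("nij,nj->ni", R_t, points) + t_t                   # each point at its capture time
    return (world - t0) @ R0                                             # re-express at the frame-start pose
```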
Step 2042, extracting the features of each frame of point cloud data after motion compensation, and dividing the points in each frame of point cloud data into line feature points and plane feature points according to the curvature information of the points in each frame of point cloud data.
This step 2042 may be specifically implemented as follows:
and obtaining any point on a wire harness and a plurality of points in a preset range of any point on the wire harness from the frame of point cloud data after motion compensation. Here, since the laser points measured by the laser radar are arranged according to the beam, a plurality of points within the preset range can be found for each laser point according to the beam, such as a plurality of laser points on the left and right sides of the beam (for example, 5 laser points are respectively located on the left and right sides, but not limited thereto).
The curvature at any point is determined from the coordinates of that point in the lidar coordinate system and the coordinates of the several points within its preset range on the same scan line in the lidar coordinate system. For example, the curvature at any point may be determined using the following curvature calculation formula:

c = (1 / (|S| · ||X_(k,i)||)) · || Σ_{j∈S, j≠i} (X_(k,i) − X_(k,j)) ||

where c represents the curvature at the i-th point; X_(k,i) and X_(k,j) represent the coordinates, in the lidar coordinate system, of the i-th and j-th points on the k-th scan line of the current frame; S represents the point set consisting of the several points on the left and right of the i-th point; and |S| represents the number of points contained in the point set.
According to a preset curvature threshold, a point whose curvature is larger than the threshold is taken as a line feature point, and a point whose curvature is smaller than the threshold is taken as a plane feature point.
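A small Python sketch of the per-scan-line curvature computation and line/plane split described in step 2042 follows. The 5-point neighbourhood matches the example in the text, while the numeric threshold 0.1 is an arbitrary illustrative value, not a value taken from the patent.

```python
import numpy as np

def classify_scan_line(points, half_window=5, curvature_threshold=0.1):
    """Split the points of one lidar scan line into line / plane feature points by curvature.

    points: (N, 3) points of one beam, in acquisition order along the beam."""
    n = len(points)
    line_pts, plane_pts = [], []
    for i in range(half_window, n - half_window):
        neighbours = np.r_[points[i - half_window:i], points[i + 1:i + 1 + half_window]]
        diff = np.sum(points[i] - neighbours, axis=0)                      # Σ_j (X_i − X_j)
        c = np.linalg.norm(diff) / (len(neighbours) * np.linalg.norm(points[i]) + 1e-9)
        (line_pts if c > curvature_threshold else plane_pts).append(i)     # sharp -> line, flat -> plane
    return line_pts, plane_pts
```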
Step 2043, superposing a preset number of frames of point cloud data before the current frame according to the estimated poses, and determining the local line feature map and the local surface feature map corresponding to the current frame point cloud data.
Specifically, the pose estimation is performed incrementally, so that the line feature points, the surface feature points and the corresponding poses of each frame of point cloud before the current frame are known, and therefore, preset frame point cloud data (such as 15 frames of point cloud data) before the current frame point cloud data can be overlaid according to the poses obtained by the pose estimation, and a corresponding local line feature map (composed of line feature points) and a local surface feature map (composed of plane feature points) can be obtained.
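The accumulation of step 2043 can be sketched as stacking the feature points of the previous frames into one common frame using their already-estimated poses; the 15-frame window below follows the example given above, and the per-frame (R, t) are assumptions standing in for whatever pose representation is used.

```python
import numpy as np

def build_local_map(frames, poses, window=15):
    """Accumulate the feature points of the last `window` frames into a local map,
    transforming each frame's points by its already-estimated pose (rotation R, translation t)."""
    stacked = []
    for pts, (R, t) in list(zip(frames, poses))[-window:]:
        stacked.append(pts @ R.T + t)          # move the frame's points into the common frame
    return np.vstack(stacked) if stacked else np.empty((0, 3))
```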
Step 2044, obtaining the initial pose of the lidar of the current frame in the world coordinate system according to the extrinsic parameters between the lidar and the IMU:

R_LiDAR = R_IMU · R_L^I
p_LiDAR = R_IMU · t_L^I + t_IMU

where p_LiDAR is the initial position of the lidar at the current time in the world coordinate system, R_LiDAR is the initial attitude of the lidar at the current time in the world coordinate system, R_IMU and t_IMU respectively represent the attitude and position of the IMU at the current time in the world coordinate system, and R_L^I and t_L^I are the attitude and position transformation relations obtained in advance through extrinsic calibration between the lidar and the IMU.
Step 2045, according to a data index established for each point by adopting a KD-Tree algorithm in advance, searching in a local line feature map to obtain a plurality of adjacent points corresponding to each line feature point in the current frame point cloud data, and searching in the local plane feature map to obtain a plurality of adjacent points corresponding to each plane feature point in the current frame point cloud data.
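Step 2045's neighbour lookup can be sketched with an off-the-shelf KD-tree. Scipy's cKDTree here merely stands in for whatever KD-Tree index the method actually builds, and k = 5 matches the 5-point example used for the subsequent line and plane fits.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbours(local_map_points, query_points, k=5):
    """Look up the k nearest local-map points for every feature point of the current frame."""
    tree = cKDTree(local_map_points)            # data index over the local line or surface map
    dists, idx = tree.query(query_points, k=k)  # k nearest neighbours per query point
    return local_map_points[idx]                # (M, k, 3) neighbour coordinates
```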
Step 2046, fitting a straight line to the several neighboring points (for example, 5 points) corresponding to the line feature point x_l in the current frame point cloud data, and taking the distance function from the line feature point x_l to the straight line as the line feature point error function.

The line feature point error function is:

f_line(x_l) = || (x_l − x_a) × (x_l − x_b) || / || x_a − x_b ||

where x_a and x_b are any two points on the straight line.
Step 2047, fitting a plane Ax + By + Cz + D = 0 (for example by SVD decomposition) to the several neighboring points (for example, 5 points) corresponding to the plane feature point x_p in the current frame point cloud data, and taking the distance function from the plane feature point x_p to the plane as the surface feature point error function, where A, B, C and D are the parameters of the fitted plane.

The surface feature point error function is:

f_plane(x_p) = | n · x_p + D | / || n ||

where n represents the normal vector n = (A, B, C).
Step 2048, establishing the lidar pose constraint r_LiDAR(X) of the movable object according to the line feature point error function and the surface feature point error function, by accumulating the line feature point error functions over the n_line line feature points and the surface feature point error functions over the n_plane plane feature points of the current frame point cloud data; where X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized, n_line represents the number of line feature points in the current frame point cloud data, and n_plane represents the number of plane feature points in the current frame point cloud data.
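The two error functions of steps 2046 and 2047 reduce to point-to-line and point-to-plane distances. The sketch below fits the line direction and the plane normal by SVD over the neighbour points (SVD is the fitting approach the text itself mentions for the plane); the exact parameterization is an assumption.

```python
import numpy as np

def line_residual(x_l, neighbours):
    """Point-to-line distance: fit a line through the neighbours (principal direction of the
    centred points) and measure the distance from the line feature point x_l to it."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    direction = vt[0]                                   # dominant direction of the fit, unit length
    return np.linalg.norm(np.cross(x_l - centroid, direction))

def plane_residual(x_p, neighbours):
    """Point-to-plane distance: fit a plane Ax + By + Cz + D = 0 to the neighbours via SVD
    and measure the distance from the plane feature point x_p to it."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    n = vt[-1]                                          # plane normal (A, B, C), unit length
    d = -n.dot(centroid)                                # plane offset D
    return abs(n.dot(x_p) + d)                          # |n·x + D| with ||n|| = 1
```

Summing these residuals over all n_line and n_plane feature points of the current frame yields the constraint r_LiDAR(X).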
Step 205, modeling is performed according to the altitude observation data measured by the barometer, and barometer constraint of the altitude position of the movable object is established.
Specifically, the barometer may obtain the current altitude by measuring the atmospheric pressure. Although factors such as sudden temperature changes, air flow shocks, etc. can affect the absolute accuracy of the barometer height measurement, the relative accuracy of the barometer observations is generally high. The low height estimation accuracy is always a prominent problem of the current mainstream mileage calculation method, so in order to improve the estimation accuracy of the odometer in the height direction and reduce the system accumulated error, the following method can be adopted in the embodiment of the application:
Modeling is performed from the height observation Z_{k+1} measured by the barometer at the current time, the height observation Z_0 measured by the barometer in advance at the initial time, the height estimate of the IMU in the world coordinate system at the current time, and the previously measured height estimate of the IMU in the world coordinate system at the initial time, to establish the barometer constraint r_Altimeter(X) of the height position of the movable object. The constraint compares the height change measured by the barometer since the initial time with the corresponding height change of the IMU in the world coordinate system, using the previously known rotation data and translation data from the barometer coordinate system at the current time to the world coordinate system. X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized.
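As a rough illustration only: the published constraint involves the barometer-to-world rotation and translation, which are not reproduced in this text, so the sketch below is a simplified relative-height residual under that stated assumption.

```python
def barometer_residual(z_now, z_init, h_now, h_init):
    """Simplified relative-height residual: the barometer's height change since the initial
    time should match the optimized IMU height change in the world coordinate system.
    z_now / z_init: barometer observations Z_{k+1} and Z_0; h_now / h_init: IMU height estimates."""
    return (z_now - z_init) - (h_now - h_init)
```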
And step 206, performing joint optimization solution on the roll angle constraint, the pitch angle constraint, the ackermann model constraint, the laser radar pose constraint and the barometer constraint by adopting a nonlinear optimization method, and determining a pose result of the movable object.
Specifically, the roll angle constraint r_Roll(X), the pitch angle constraint r_Pitch(X), the Ackerman model constraint r_Akerman(X), the lidar pose constraint r_LiDAR(X) and the barometer constraint r_Altimeter(X) are combined into a joint optimization cost function, and the resulting nonlinear least-squares problem is solved with an optimization algorithm to determine the pose of the IMU of the movable object in the world coordinate system (that is, the maximum a posteriori estimate of the current state variable X to be optimized). The optimization algorithm may be the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm (L-M algorithm), but is not limited thereto.
The joint optimization cost function is the sum, over all of the above constraint terms, of the squared residuals weighted by preset information matrices; that is, each constraint r_i(X) contributes a term of the form r_i(X)^T · Σ_i · r_i(X), where the Σ_i are the preset information matrices corresponding to the respective constraint terms.
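To make the joint optimization concrete, here is a minimal Python sketch of how the weighted least-squares problem could be assembled and solved. The function names and the use of scipy.optimize.least_squares are illustrative assumptions standing in for the Gauss-Newton or Levenberg-Marquardt implementation mentioned above.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_optimize(x0, residual_fns, information_matrices):
    """Whiten each constraint residual with the Cholesky factor of its preset information
    matrix, stack everything, and solve the nonlinear least-squares problem."""
    sqrt_infos = [np.linalg.cholesky(m).T for m in information_matrices]  # m must be positive definite

    def stacked(x):
        # ||L^T r||^2 = r^T (L L^T) r = r^T Σ r, so each block is weighted by its information matrix
        return np.concatenate([w @ np.atleast_1d(fn(x))
                               for fn, w in zip(residual_fns, sqrt_infos)])

    return least_squares(stacked, x0).x       # trust-region / L-M style solver from scipy
```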
Therefore, the odometer method based on the fusion of multiple sensors (including the laser radar, the IMU, the wheel speed meter and the barometer) realized in the steps 201 to 206 can obtain accurate relative poses between frames acquired by the laser radar, can meet the real-time pose estimation in the scenes with sparse features such as tunnels, sea-crossing bridges and the like and poor GPS signals, and has good pose result accuracy and robustness.
In an embodiment of the present application, the inventor performs experimental verification on the multi-sensor fusion-based odometry method implemented in the present application, and the process is as follows:
in order to verify the accuracy and robustness of the multi-sensor fusion-based odometer method, a data acquisition vehicle provided with sensors such as a laser radar, an IMU (inertial measurement unit), a wheel speed meter and a barometer is used in the embodiment of the application, data of a section of extra-long tunnel is acquired for experimental verification, the total length of the tunnel is about 9.2Km, as shown in FIG. 3, the scene features in the tunnel are sparse, and the walls on two sides are smooth planes.
In the scenario shown in fig. 3, the multi-sensor fusion-based odometry method of the embodiment of the present application was compared on the same data with the most representative laser odometry method LOAM and the laser-inertial odometry method LIO-Mapping in the prior art. The experimental result is shown in fig. 4, where the horizontal and vertical coordinates represent the position of the IMU pose in the world coordinate system, Ground-Truth represents the true pose, and Sensor-Fusion-Odometry represents the multi-sensor fusion-based odometry method of the embodiment of the present application. It can be seen that in this experimental scene both the LOAM algorithm and the LIO-Mapping algorithm degrade severely, cannot complete the whole course, and lose the pose of the IMU carried by the data acquisition vehicle in the world coordinate system, which completely fails to meet the requirement of tunnel mapping. Under the same conditions, the multi-sensor fusion-based odometry method completes the whole course; although the final pose estimate inevitably contains accumulated error, it obtains accurate relative poses between frames in the tunnel, laying a foundation for subsequent tunnel map construction.
As shown in fig. 5, an embodiment of the present invention provides a multi-sensor fusion-based odometer device applied to a movable object having a plurality of sensors mounted thereon, including:
the sensor data obtaining unit 31 is configured to obtain sensor data collected by various sensors mounted on the movable object in real time.
And the constraint relation establishing unit 32 is used for respectively modeling the sensor data acquired by various sensors and establishing the constraint relation of the pose of the movable object.
And the joint optimization unit 33 is configured to perform joint optimization solution on the constraint relationship of the pose of the movable object, and determine a pose result of the movable object.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which includes a program or instructions, and when the program or instructions are executed on a computer, the multi-sensor fusion-based odometry method described in fig. 1 and fig. 2 above is implemented. The specific implementation process is described in the above method embodiment, and is not described herein again.
In addition, the present application also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the multi-sensor fusion-based odometry method described in fig. 1 and 2 above. The specific implementation process is described in the above method embodiment, and is not described herein again.
In addition, the embodiment of the application also provides a computer server, which comprises a memory and one or more processors which are connected with the memory in a communication way; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement the multi-sensor fusion based odometry method of fig. 1 and 2 described above. The specific implementation process is described in the above method embodiment, and is not described herein again.
According to the odometer method and device based on multi-sensor fusion, sensor data collected by various sensors carried on a movable object are obtained in real time, then the sensor data collected by the various sensors can be modeled respectively, a constraint relation of the position and the attitude of the movable object is established, and therefore the constraint relation of the position and the attitude of the movable object can be subjected to combined optimization solution, and the position and attitude result of the movable object is determined. By the method and the device, the real-time pose estimation of the movable object in scenes with sparse features and poor GPS signals can be realized, the result is accurate, and the robustness is good.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the present application are explained by applying specific embodiments in the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (14)

1. A multi-sensor fusion-based odometer method applied to a movable object carrying multiple sensors, the method comprising:
acquiring sensor data acquired by various sensors carried on a movable object in real time;
modeling sensor data acquired by various sensors respectively, and establishing a constraint relation of the pose of the movable object;
and carrying out joint optimization solution on the constraint relation of the pose of the movable object, and determining the pose result of the movable object.
2. The method of claim 1, wherein the plurality of sensors includes an Inertial Measurement Unit (IMU), a wheel speed meter, a lidar, and a barometer; wherein the IMU includes an accelerometer and a gyroscope.
3. The method of claim 2, wherein the obtaining sensor data collected by various sensors mounted on the movable object in real time comprises:
and acquiring triaxial acceleration data measured by an accelerometer, triaxial angular velocity data measured by a gyroscope, wheel speed data of a movable object measured by a wheel speed meter, point cloud data measured by a laser radar and height observation data measured by a barometer in real time.
4. The method of claim 3, wherein modeling the sensor data collected by the various sensors separately to establish a constrained relationship of the pose of the movable object comprises:
modeling is carried out according to triaxial acceleration data measured by an accelerometer, and roll angle constraint and pitch angle constraint of the movable object are established;
performing kinematic modeling by using an Ackermann model according to triaxial angular velocity data measured by a gyroscope and wheel speed data of a movable object measured by a wheel speed meter, and establishing Ackermann model constraint of the horizontal position and the yaw angle of the movable object;
modeling is carried out according to point cloud data measured by the laser radar, and laser radar pose constraints of the movable object are established;
modeling is performed according to altitude observation data measured by the barometer, and barometer constraint of the altitude position of the movable object is established.
5. The method of claim 4, wherein the jointly optimizing the constrained relationship to the pose of the movable object to determine the pose result of the movable object comprises:
and performing joint optimization solution on the roll angle constraint, the pitch angle constraint, the ackerman model constraint, the laser radar pose constraint and the barometer constraint by adopting a nonlinear optimization method, and determining a pose result of the movable object.
6. The method of claim 5, wherein modeling from the tri-axial acceleration data measured by the accelerometer to establish roll and pitch constraints for the movable object comprises:
modeling is carried out according to the triaxial acceleration data measured by the accelerometer, and a roll angle estimate θ_roll and a pitch angle estimate θ_pitch of the IMU in the world coordinate system are determined from the measured triaxial accelerations a_x, a_y, a_z;

according to the roll angle estimate θ_roll and the pitch angle estimate θ_pitch, a roll angle constraint r_Roll(X) and a pitch angle constraint r_Pitch(X) of the movable object are established, wherein r_Roll(X) = θ_roll − arcsin(−R13); r_Pitch(X) = θ_pitch − arctan2(R23, R33); X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized, comprising a position p and an attitude q; R is the rotation-matrix form of the attitude q in the state variable X to be optimized; and R13, R23, R33 are the elements of the corresponding rows and columns of the rotation matrix R.
7. The method of claim 5, wherein performing kinematic modeling using an ackermann model based on the tri-axis angular velocity data measured by the gyroscope and the wheel speed data of the movable object measured by the wheel speed meter to establish ackermann model constraints for horizontal position and yaw angle of the movable object comprises:
determining, according to the triaxial angular velocity data measured by the gyroscope, the integrated angle between the advancing direction of the movable object and the y-axis in the world coordinate system, wherein θ_i is the integrated angle between the advancing direction of the movable object at the i-th time and the y-axis; t denotes the t-th time; R_B^I is the rotation transformation from the vehicle body coordinate system to the IMU coordinate system, obtained in advance; and ω_t^yaw is the yaw component of the triaxial angular velocity measured by the gyroscope at the t-th time;
determining, from the speed v_i^rl of the left rear wheel and the speed v_i^rr of the right rear wheel of the movable object at the i-th time, measured by the wheel speed meter in the vehicle body coordinate system, the speed v_i of the rear axle center of the movable object in the vehicle body coordinate system, wherein n_v is previously known velocity noise;
performing kinematic modeling with the Ackerman model and determining the pose transfer equation of the movable object in the world coordinate system:

x_{i+1} = x_i + v_i · Δt · sin θ_i
y_{i+1} = y_i + v_i · Δt · cos θ_i

wherein Δt is the time difference between two adjacent measurement times of the wheel speed meter, and x_i, y_i represent the horizontal position of the movable object in the world coordinate system;
integrating x_i, y_i and θ_i between the k-th and (k+1)-th times of two adjacent lidar frames according to the measurement frequency of the lidar, to determine their respective changes δx_k(k+1), δy_k(k+1) and δθ_k(k+1) in the world coordinate system;
determining the pose transformation relation from the IMU coordinate system to the vehicle body coordinate system according to the extrinsic parameters between the vehicle body coordinate system and the IMU coordinate system, and determining the pose transformation relation of the IMU between the k-th time and the (k+1)-th time in the world coordinate system;

establishing the Ackerman model constraint r_Akerman(X) of the movable object, wherein X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized.
8. The method of claim 5, wherein modeling from lidar measured point cloud data to establish lidar pose constraints for the movable object comprises:
performing motion compensation on each frame of point cloud data measured by the laser radar, and determining the position of each point in each frame of point cloud data after motion compensation;
extracting the characteristics of each frame of point cloud data after motion compensation, and dividing points in each frame of point cloud data into line characteristic points and plane characteristic points according to curvature information of the points in each frame of point cloud data;
superposing a preset number of point cloud frames preceding the current frame according to the already estimated poses, to determine a local line feature map and a local plane feature map corresponding to the current frame point cloud data;
obtaining the initial pose of the lidar for the current frame in the world coordinate system according to the extrinsic parameters between the lidar and the IMU [the two expressions for the initial position and attitude are given only as embedded images in the original]; wherein p_LiDAR is the initial position of the lidar at the current time in the world coordinate system, R_LiDAR is the initial attitude of the lidar at the current time in the world coordinate system, R_IMU and t_IMU are respectively the attitude and position of the IMU at the current time in the world coordinate system, and the attitude transformation relation and the position transformation relation between the lidar and the IMU are obtained in advance through extrinsic calibration;
searching the local line feature map, using the KD-Tree data index built in advance for each point, to obtain a plurality of neighboring points for each line feature point in the current frame point cloud data, and searching the local plane feature map to obtain a plurality of neighboring points for each plane feature point in the current frame point cloud data;
fitting a straight line to the plurality of neighboring points corresponding to each line feature point x_l in the current frame point cloud data, and taking the distance function from x_l to the straight line as the line feature point error function; the error function itself is given only as an embedded image in the original, in which the two reference points are any two points on the fitted straight line;
fitting a plane Ax + By + Cz + D = 0 to the plurality of neighboring points corresponding to each plane feature point x_p in the current frame point cloud data, and taking the distance function from x_p to the plane as the plane feature point error function, where A, B, C and D are the parameters of the fitted plane; the error function itself is given only as an embedded image in the original, in which n denotes the normal vector n = (A, B, C);
establishing the lidar pose constraint r_LiDAR(X) of the movable object from the line feature point error function and the plane feature point error function; the expression for r_LiDAR(X) is given only as an embedded image in the original; X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized, n_line denotes the number of line feature points in the current frame point cloud data, and n_plane denotes the number of plane feature points in the current frame point cloud data.
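For illustration only, a sketch of how the neighbor lookup and the line and plane error functions of claim 8 can be computed. The PCA-based line/plane fitting, the function names, and the use of SciPy's cKDTree are the editor's assumptions, not the patent's image-only formulas.

```python
import numpy as np
from scipy.spatial import cKDTree

def line_residual(x_l, neighbors):
    """Point-to-line distance for a line feature point x_l.

    The line is fit to the neighboring points by PCA; x_a and x_b are two
    points on it, standing in for the two reference points of the claimed
    (image-only) line error function.
    """
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)
    direction = vt[0]                      # principal direction of the neighbor set
    x_a, x_b = centroid, centroid + direction
    return np.linalg.norm(np.cross(x_l - x_a, x_l - x_b)) / np.linalg.norm(x_b - x_a)

def plane_residual(x_p, neighbors):
    """Point-to-plane distance |A x + B y + C z + D| / |n| for a plane feature point x_p."""
    centroid = neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbors - centroid)
    n = vt[-1]                             # plane normal n = (A, B, C)
    d = -n.dot(centroid)                   # plane offset D
    return abs(n.dot(x_p) + d) / np.linalg.norm(n)

def lidar_residuals(line_pts, plane_pts, line_map, plane_map, k=5):
    """KD-Tree neighbor lookup against the local feature maps, then per-point distances."""
    line_tree, plane_tree = cKDTree(line_map), cKDTree(plane_map)
    residuals = []
    for p in line_pts:
        _, idx = line_tree.query(p, k=k)
        residuals.append(line_residual(p, line_map[idx]))
    for p in plane_pts:
        _, idx = plane_tree.query(p, k=k)
        residuals.append(plane_residual(p, plane_map[idx]))
    return np.array(residuals)
```

Stacking these per-point distances over the n_line line features and n_plane plane features of the current frame yields one plausible form of the lidar residual block fed into the joint optimization.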
9. The method of claim 5, wherein modeling based on the altitude observations measured by the barometer to establish the barometer constraint on the altitude position of the movable object comprises:
modeling according to the altitude observation Z_{k+1} measured by the barometer at the current time, the altitude observation Z_0 measured in advance by the barometer at the initial time, the altitude estimate of the IMU in the world coordinate system at the current time, and the altitude estimate of the IMU in the world coordinate system at the initial time measured in advance, to establish the barometer constraint r_Altimeter(X) on the altitude position of the movable object; wherein the expression for r_Altimeter(X) is given only as an embedded image in the original; X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized, and the rotation data and translation data from the barometer coordinate system at the current time to the world coordinate system are known in advance.
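For illustration only, a one-line sketch of a barometer residual consistent with the quantities listed in claim 9: it compares the height change seen by the barometer with the height change of the optimized IMU position. The barometer-to-world rotation and translation mentioned in the claim are omitted for brevity, and the function name is the editor's own.

```python
def altimeter_residual(p_imu_z, h_imu_init, z_curr, z_init):
    """Barometer constraint on the height component of the IMU position.

    p_imu_z    : height (z) component of the position p inside the state X
    h_imu_init : IMU height estimate in the world frame at the initial time
    z_curr     : barometer altitude observation at the current time, Z_{k+1}
    z_init     : barometer altitude observation at the initial time, Z_0
    """
    return (p_imu_z - h_imu_init) - (z_curr - z_init)
```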
10. The method of claim 5, wherein jointly optimizing the roll angle constraint, the pitch angle constraint, the Ackermann model constraint, the lidar pose constraint, and the barometer constraint using a nonlinear optimization method to determine the pose result of the movable object comprises:
constructing a joint optimization cost function from the roll angle constraint r_Roll(X), the pitch angle constraint r_Pitch(X), the Ackermann model constraint r_Ackermann(X), the lidar pose constraint r_LiDAR(X), and the barometer constraint r_Altimeter(X), solving the resulting nonlinear least-squares problem with an optimization algorithm, and determining the pose result of the IMU of the movable object in the world coordinate system;
wherein the joint optimization cost function is given only as an embedded image in the original; each constraint term has a corresponding preset information matrix, and X represents the pose of the IMU in the world coordinate system and is the state variable to be optimized.
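For illustration only, a sketch of an information-matrix-weighted nonlinear least-squares solve over the five constraints of claim 10. SciPy's least_squares and a flat state vector are the editor's choices; a production implementation would optimize the position p and attitude q with a proper manifold parameterization, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_cost(X, constraints, info_matrices):
    """Stack the information-weighted residuals of every constraint.

    constraints   : callables r_i(X) such as r_Roll, r_Pitch, r_Ackermann,
                    r_LiDAR, r_Altimeter, each returning a residual vector
    info_matrices : preset information matrices, one per constraint; their
                    Cholesky factors whiten the residuals so that the summed
                    squared norm equals the information-weighted cost
    """
    blocks = []
    for r_i, omega_i in zip(constraints, info_matrices):
        res = np.atleast_1d(r_i(X))
        L = np.linalg.cholesky(np.atleast_2d(omega_i))
        blocks.append(L.T @ res)      # ||L^T r||^2 = r^T * Omega * r
    return np.concatenate(blocks)

def solve_pose(X0, constraints, info_matrices):
    """Solve the joint nonlinear least-squares problem for the IMU pose state."""
    result = least_squares(joint_cost, X0, args=(constraints, info_matrices))
    return result.x
```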
11. A multi-sensor fusion-based odometer device applied to a movable object on which a plurality of types of sensors are mounted, the device comprising:
a sensor data acquisition unit, configured to acquire in real time the sensor data collected by the plurality of types of sensors mounted on the movable object;
a constraint relation establishing unit, configured to model the sensor data collected by each type of sensor separately and to establish constraint relations on the pose of the movable object;
and a joint optimization unit, configured to jointly optimize and solve the constraint relations on the pose of the movable object and to determine the pose result of the movable object.
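For illustration only, a structural sketch of how the three claimed units could be composed; all class and method names are the editor's own and are not taken from the patent.

```python
class MultiSensorOdometer:
    """Structural sketch of claim 11: acquisition, constraint building, joint optimization."""

    def __init__(self, sensors, constraint_builders, optimizer):
        self.sensors = sensors                          # sensor data acquisition unit
        self.constraint_builders = constraint_builders  # constraint relation establishing unit
        self.optimizer = optimizer                      # joint optimization unit

    def step(self):
        # acquire sensor data in real time
        data = {name: sensor.read() for name, sensor in self.sensors.items()}
        # model each sensor's data into a pose constraint
        constraints = [build(data) for build in self.constraint_builders]
        # jointly optimize all constraints to obtain the pose result
        return self.optimizer.solve(constraints)
```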
12. A computer-readable storage medium comprising a program or instructions which, when run on a computer, implement the multi-sensor fusion-based odometry method according to any one of claims 1 to 10.
13. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the multi-sensor fusion based odometry method according to any one of claims 1 to 10.
14. A computer server comprising a memory and one or more processors communicatively coupled to the memory; the memory has stored therein instructions executable by the one or more processors to cause the one or more processors to implement a multi-sensor fusion based odometry method as claimed in any one of claims 1 to 10.
CN202010568308.1A 2020-06-19 2020-06-19 Mileage metering method and device based on multi-sensor fusion Active CN113819905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010568308.1A CN113819905B (en) 2020-06-19 2020-06-19 Mileage metering method and device based on multi-sensor fusion

Publications (2)

Publication Number Publication Date
CN113819905A true CN113819905A (en) 2021-12-21
CN113819905B CN113819905B (en) 2024-07-12

Family

ID=78924490

Country Status (1)

Country Link
CN (1) CN113819905B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9870624B1 (en) * 2017-01-13 2018-01-16 Otsaw Digital Pte. Ltd. Three-dimensional mapping of an environment
CN108958266A (en) * 2018-08-09 2018-12-07 北京智行者科技有限公司 A kind of map datum acquisition methods
CN110009739A (en) * 2019-01-29 2019-07-12 浙江省北大信息技术高等研究院 The extraction and coding method of the motion feature of the digital retina of mobile camera
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110262546A (en) * 2019-06-18 2019-09-20 武汉大学 A kind of tunnel intelligent unmanned plane cruising inspection system and method
CN110307836A (en) * 2019-07-10 2019-10-08 北京智行者科技有限公司 A kind of accurate positioning method cleaned for unmanned cleaning vehicle welt
CN110906923A (en) * 2019-11-28 2020-03-24 重庆长安汽车股份有限公司 Vehicle-mounted multi-sensor tight coupling fusion positioning method and system, storage medium and vehicle
CN111258313A (en) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Multi-sensor fusion SLAM system and robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wen Anbang; Wu Huaiyu; Zhao Ji: "Simultaneous localization and mapping based on scan-matching preprocessing", Computer Engineering and Applications, no. 33 *
Ji Jiawen; Yang Mingxin: "An indoor mapping and localization algorithm based on multi-sensor fusion", Journal of Chengdu University of Information Technology, no. 04 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113970330A (en) * 2021-12-22 2022-01-25 比亚迪股份有限公司 Vehicle-mounted multi-sensor fusion positioning method, computer equipment and storage medium
CN114155495A (en) * 2022-02-10 2022-03-08 西南交通大学 Safety monitoring method, device, equipment and medium for vehicle operation in sea-crossing bridge
CN114155495B (en) * 2022-02-10 2022-05-06 西南交通大学 Safety monitoring method, device, equipment and medium for vehicle operation in sea-crossing bridge
WO2023226375A1 (en) * 2022-05-22 2023-11-30 远也科技(苏州)有限公司 Method and apparatus for determining motion parameter, and system
CN114964212B (en) * 2022-06-02 2023-04-18 广东工业大学 Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration
CN114897942A (en) * 2022-07-15 2022-08-12 深圳元戎启行科技有限公司 Point cloud map generation method and device and related storage medium
CN114897942B (en) * 2022-07-15 2022-10-28 深圳元戎启行科技有限公司 Point cloud map generation method and device and related storage medium
CN115655302A (en) * 2022-12-08 2023-01-31 安徽蔚来智驾科技有限公司 Laser odometer implementation method, computer equipment, storage medium and vehicle
CN118334568A (en) * 2024-06-13 2024-07-12 广汽埃安新能源汽车股份有限公司 Pose construction method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113819905B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
CN113819914B (en) Map construction method and device
CN113819905B (en) Mileage metering method and device based on multi-sensor fusion
CN109341706B (en) Method for manufacturing multi-feature fusion map for unmanned vehicle
CN113945206A (en) Positioning method and device based on multi-sensor fusion
CN107246876B (en) Method and system for autonomous positioning and map construction of unmanned automobile
CN112083725B (en) Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
CN111142091B (en) Automatic driving system laser radar online calibration method fusing vehicle-mounted information
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
CN112639882B (en) Positioning method, device and system
CN109991636A (en) Map constructing method and system based on GPS, IMU and binocular vision
EP4124829B1 (en) Map construction method, apparatus, device and storage medium
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
US20240053475A1 (en) Method, apparatus, and system for vibration measurement for sensor bracket and movable device
CN113252051A (en) Map construction method and device
CN113252022A (en) Map data processing method and device
US20220035036A1 (en) Method and apparatus for positioning movable device, and movable device
CN111402328A (en) Pose calculation method and device based on laser odometer
CN111708010B (en) Mobile equipment positioning method, device and system and mobile equipment
CN117234203A (en) Multi-source mileage fusion SLAM downhole navigation method
CN111829514A (en) Road surface working condition pre-aiming method suitable for vehicle chassis integrated control
Parra-Tsunekawa et al. A kalman-filtering-based approach for improving terrain mapping in off-road autonomous vehicles
CN111257853A (en) Automatic driving system laser radar online calibration method based on IMU pre-integration
Liu et al. Vehicle sideslip angle estimation: a review
CN114370872B (en) Vehicle attitude determination method and vehicle
CN113777589A (en) LIDAR and GPS/IMU combined calibration method based on point characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant