CN116678423A - Multisource fusion positioning method, multisource fusion positioning device and vehicle

Info

Publication number: CN116678423A (granted version: CN116678423B)
Application number: CN202310612615.9A
Authority: CN (China)
Prior art keywords: information, time point, positioning, acquisition time, laser
Other languages: Chinese (zh)
Inventor: 张超 (Zhang Chao)
Assignee (original and current): Xiaomi Automobile Technology Co Ltd
Legal status: Granted; active
Application filed by Xiaomi Automobile Technology Co Ltd; published as CN116678423A and, upon grant, as CN116678423B.

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/28: Navigation specially adapted for navigation in a road network, with correlation of data from several navigational instruments
    • G01C 21/165: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C 21/3415: Route searching; route guidance; dynamic re-routing, e.g. recalculating the route when the user deviates from the calculated route or after detecting real-time traffic data or accidents


Abstract

The disclosure relates to a multi-source fusion positioning method, a multi-source fusion positioning device and a vehicle. The multi-source fusion positioning method includes: acquiring data collected by a plurality of positioning sensors on a vehicle, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, acquiring the laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map, and first positioning information determined according to the data collected by the inertial sensor at each time point before the acquisition time point; determining second positioning information of the laser sensor at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map; and determining target positioning information of the vehicle at the acquisition time point according to the positioning information. Because the laser sensor is largely insensitive to illumination and weather, its data can still provide a lateral constraint in environments with poor lighting or weather, improving the positioning accuracy of the vehicle.

Description

Multisource fusion positioning method, multisource fusion positioning device and vehicle
Technical Field
The disclosure relates to the technical field of autonomous driving, and in particular to a multi-source fusion positioning method, a multi-source fusion positioning device and a vehicle.
Background
At present, to achieve high-precision vehicle positioning, the prevailing technical scheme fuses data collected by an inertial sensor, a global navigation positioning system, a visual sensor and the like to determine the positioning information of the vehicle. In this scheme, the data collected by the visual sensor mainly provides a lateral constraint, which improves lateral positioning accuracy.
However, the data collected by the visual sensor is easily affected by factors such as illumination and weather; in environments with poor lighting or weather, it can hardly provide the lateral constraint, and the positioning accuracy of the vehicle is therefore reduced.
Disclosure of Invention
The disclosure provides a multi-source fusion positioning method, a multi-source fusion positioning device and a vehicle.
According to a first aspect of embodiments of the present disclosure, there is provided a multi-source fusion positioning method, the method including: acquiring data collected by a plurality of positioning sensors on a vehicle, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, acquiring laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point, wherein the first positioning information is determined according to the data collected by the inertial sensor at each time point before the acquisition time point; determining error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map; determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information; and determining target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle.
In an embodiment of the present disclosure, obtaining the laser point cloud data includes: acquiring at least one scanning time period of the laser sensor, where the laser sensor completes one round of peripheral laser scanning within each scanning time period; acquiring, from the at least one scanning time period, a target scanning time period that includes the acquisition time point; and determining the laser point cloud data at the acquisition time point according to each scanning time point in the target scanning time period and the scanning angle and scanned point information of the laser sensor at each scanning time point.
In one embodiment of the present disclosure, the determining the laser point cloud data at the acquisition time point according to each scanning time point in the target scanning time period, and the scanning angle and the scanned point information of the laser sensor at each scanning time point includes: determining, for each scanning time point within the target scanning time period, a time interval between the scanning time point and the acquisition time point; determining the driving distance of the vehicle in the time interval according to the driving speed information of the vehicle in the time interval; determining the rotation angle of the laser sensor in the time interval according to the scanning angle at the scanning time point and the scanning angle at the acquisition time point; correcting the point information scanned at the scanning time point according to the driving distance of the vehicle and the rotation angle of the laser sensor in the time interval to obtain the point information at the acquisition time point; and determining the laser point cloud data at the acquisition time point according to the plurality of point information at the acquisition time point.
In one embodiment of the disclosure, the laser point cloud data is laser point cloud data in a laser sensor coordinate system, and determining the error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map at the acquisition time point includes: determining laser point cloud data in a longitude and latitude coordinate system according to the laser point cloud data in the laser sensor coordinate system and the first positioning information; performing line-plane feature extraction processing on the laser point cloud data in the longitude and latitude coordinate system to obtain first line-plane feature information; and performing matching and error-calculation processing on the first line-plane feature information and second line-plane feature information in the point cloud local map to obtain the error state information.
In one embodiment of the present disclosure, performing the matching and error-calculation processing on the first line-plane feature information and the second line-plane feature information in the point cloud local map to obtain the error state information includes: performing matching processing on the first line-plane feature information and the second line-plane feature information, and determining the correspondence between the first line-plane feature information and the second line-plane feature information; and optimizing a first Kalman filter according to that correspondence and a point-plane observation equation, to obtain the error state information output by the optimized first Kalman filter.
In one embodiment of the disclosure, determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle includes: inputting the first positioning information, the second positioning information and the historical positioning information into a second Kalman filter, and obtaining the target positioning information output by the second Kalman filter.
In one embodiment of the present disclosure, the positioning sensors further comprise at least one of: a wheel speed sensor, a vision sensor and a global navigation positioning system. In that case, determining the target positioning information of the vehicle at the acquisition time point includes: for each positioning sensor other than the inertial sensor and the laser sensor, when data collected by that positioning sensor exists at the acquisition time point, determining other positioning information of that positioning sensor at the acquisition time point according to the data it collected at that time point; and determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, the at least one piece of other positioning information and the historical positioning information of the vehicle.
According to a second aspect of embodiments of the present disclosure, there is also provided a multi-source fusion positioning device, the device comprising: the first acquisition module is used for acquiring data acquired by a plurality of positioning sensors on the vehicle; the positioning sensor at least comprises an inertial sensor and a laser sensor; the second acquisition module is used for acquiring laser point cloud data acquired by the laser sensor at each acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point; the first positioning information is determined according to the data acquired by the inertial sensor at each time point before the acquisition time point; the first determining module is used for determining error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map at the acquisition time point; the second determining module is used for determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information; and the third determining module is used for determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle.
According to a third aspect of embodiments of the present disclosure, there is also provided a vehicle including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the multi-source fusion positioning method described above.
According to a fourth aspect of embodiments of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the multi-source fusion positioning method described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
acquiring data collected by a plurality of positioning sensors on a vehicle, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, acquiring the laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point, the first positioning information being determined according to the data collected by the inertial sensor at each time point before the acquisition time point; determining error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map; determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information; and determining target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle. In this way, the data collected by the laser sensor can provide a lateral constraint that is unaffected by factors such as illumination and weather, so the lateral constraint remains available in environments with poor lighting or weather, and the positioning accuracy of the vehicle is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flow chart of a multi-source fusion positioning method according to one embodiment of the present disclosure;
FIG. 2 is a flow chart of a multi-source fusion positioning method according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the principle of the point-plane observation equation;
FIG. 4 is a schematic structural view of a multi-source fusion positioning device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a vehicle according to an exemplary embodiment of the present disclosure.
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
At present, to achieve high-precision vehicle positioning, the prevailing technical scheme fuses data collected by an inertial sensor, a global navigation positioning system, a visual sensor and the like to determine the positioning information of the vehicle. In this scheme, the data collected by the visual sensor mainly provides a lateral constraint, which improves lateral positioning accuracy.
However, the data collected by the visual sensor is easily affected by factors such as illumination and weather; in environments with poor lighting or weather, it can hardly provide the lateral constraint, and the positioning accuracy of the vehicle is therefore reduced.
Fig. 1 is a flow chart of a multi-source fusion positioning method according to an embodiment of the present disclosure. It should be noted that, the multi-source fusion positioning method of the present embodiment may be applied to a multi-source fusion positioning device, and the device may be configured in an electronic apparatus, so that the electronic apparatus may perform a multi-source fusion positioning function.
The electronic device may be a vehicle-mounted device in the vehicle, a controller in the vehicle, or the like, which may be used for controlling the vehicle, or the electronic device may also be a device that communicates with the vehicle, where the device may acquire data collected by a positioning sensor on the vehicle, process the data, and obtain positioning information and transmit the positioning information to the vehicle. The following embodiments will be described with reference to an example in which the execution subject is a controller in a vehicle.
As shown in fig. 1, the method comprises the steps of:
step 101, acquiring data acquired by a plurality of positioning sensors on a vehicle; the positioning sensor comprises at least an inertial sensor and a laser sensor.
In the embodiment of the disclosure, the data collected by the inertial sensor may be acceleration information and rotation angle information. In the case that the inertial sensor is a six-axis inertial sensor, it may collect acceleration information in three directions and rotation angle information about three axes. The data collected by the inertial sensor are relative data, that is, acceleration and rotation relative to the initial state of the inertial sensor. Based on the data collected by the inertial sensor and the initial position of the inertial sensor, the positioning information of the vehicle can be determined.
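As an illustrative sketch of this dead-reckoning idea (not the patented implementation; the function name, the gravity constant, the fixed sampling interval dt and the constant-rate rotation update are all assumptions), the following Python integrates inertial samples into a relative pose:

```python
import numpy as np

def imu_dead_reckoning(accels, gyros, dt, p0, v0, R0):
    """Integrate IMU samples into a relative pose (illustrative sketch only).

    accels: (N, 3) specific-force measurements in the body frame (m/s^2)
    gyros:  (N, 3) angular rates about the body axes (rad/s)
    dt:     sampling interval in seconds (assumed constant)
    p0, v0: initial position and velocity, numpy arrays of shape (3,)
    R0:     initial body-to-world rotation matrix, shape (3, 3)
    """
    p, v, R = p0.copy(), v0.copy(), R0.copy()
    g = np.array([0.0, 0.0, -9.81])            # gravity in the world frame
    for a, w in zip(accels, gyros):
        theta = w * dt                          # angular increment over one sample
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle                   # unit rotation axis
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            # Rodrigues formula for the incremental rotation
            dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
            R = R @ dR
        # rotate the specific force into the world frame, remove gravity, integrate
        a_world = R @ a + g
        p = p + v * dt + 0.5 * a_world * dt * dt
        v = v + a_world * dt
    return p, v, R
```

Adding the integrated relative pose to the sensor's known initial position then yields the absolute positioning information, as described above.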
In the embodiments of the present disclosure, the laser sensor may be, for example, a laser odometer. The laser odometer rotates a laser scanner while performing a laser scanning process, that is, emitting laser light, detecting the returned laser light, and determining the scanned point information from the emitted and returned laser light. Each completed scan yields point information within a preset peripheral angle. The preset angle may be, for example, 120 degrees, 180 degrees or 360 degrees, and may be set according to actual needs. The data collected by the laser sensor are relative data, that is, point information relative to the position of the laser sensor. Based on the data collected by the laser sensor, the positioning information of the vehicle can be determined.
In an embodiment of the present disclosure, the positioning sensors may include, in addition to the inertial sensor and the laser sensor, at least one of the following: a wheel speed sensor, a vision sensor and a global navigation positioning system. The wheel speed sensor detects the wheel rotation speed of the vehicle; the speed information of the vehicle is determined from the wheel rotation speed and the wheel radius, and the relative position information of the vehicle, that is, its position relative to the vehicle's initial position, can then be determined.
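For example, the speed relation just described, v = omega * r, as a tiny sketch (the helper name and the sample numbers are hypothetical):

```python
import math

def vehicle_speed_from_wheel(wheel_rpm: float, wheel_radius_m: float) -> float:
    """Vehicle speed v = omega * r from the wheel rotation speed (sketch)."""
    omega = wheel_rpm * 2.0 * math.pi / 60.0   # wheel angular rate in rad/s
    return omega * wheel_radius_m

# Example: 480 rpm on a 0.33 m radius wheel -> roughly 16.6 m/s
print(vehicle_speed_from_wheel(480.0, 0.33))
```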
In the embodiment of the disclosure, the vision sensor may be, for example, a camera for capturing images of the vehicle's surroundings. Image recognition is performed on these images to identify objects around the vehicle; the identified objects are matched against a map to determine their positions; and the positioning information of the vehicle is then determined from those positions and the positional relationship between the objects and the vehicle.
In embodiments of the present disclosure, the global navigation positioning system may determine positioning information based on carrier phase differential techniques.
The frequencies at which different positioning sensors collect data may be the same or different. Typically the inertial sensor has the highest acquisition frequency, and the other positioning sensors acquire at lower frequencies. As a result, at any given acquisition time point among all those involved across the plurality of positioning sensors, some sensors have collected data while others have not. For example, at a first acquisition time point only the inertial sensor has data; at a second acquisition time point the inertial sensor and the laser sensor have data while the other positioning sensors do not.
Step 102, for each acquisition time point of the laser sensor, acquiring laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point; the first positioning information is determined according to the data collected by the inertial sensor at each time point before the acquisition time point.
In the embodiment of the disclosure, the acquisition time point of the laser sensor may be the latest scanning time point within one scan of the laser sensor, and the laser sensor may scan multiple times. The method for acquiring the laser point cloud data of the laser sensor at the acquisition time point within one scan may include: acquiring at least one scanning time period of the laser sensor, where the laser sensor completes one round of peripheral laser scanning within each scanning time period; acquiring, from the at least one scanning time period, a target scanning time period that includes the acquisition time point; and determining the laser point cloud data at the acquisition time point according to each scanning time point in the target scanning time period and the scanning angle and scanned point information of the laser sensor at each scanning time point.
A scanning time period is the time the laser sensor takes to complete one scan, that is, one round of peripheral scanning. The scanned point information is the relative position information of the surrounding points with respect to the laser sensor.
The process by which the controller in the vehicle determines the laser point cloud data at the acquisition time point from each scanning time point in the target scanning time period and the scanning angle and scanned point information at each scanning time point may be, for example: for each scanning time point within the target scanning time period, determining the time interval between the scanning time point and the acquisition time point; determining the driving distance of the vehicle within that time interval according to the vehicle's driving speed information within the interval; determining the rotation angle of the laser sensor within the time interval according to the scanning angles at the scanning time point and at the acquisition time point; correcting the point information scanned at the scanning time point according to the driving distance and the rotation angle to obtain point information at the acquisition time point; and determining the laser point cloud data at the acquisition time point from the plurality of corrected point information.
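As a rough illustration of this correction, often called scan deskewing, here is a constant-velocity, planar-rotation sketch (the function and parameter names are hypothetical, the sign conventions are assumed, and a real implementation must match the sensor's actual frame definitions):

```python
import numpy as np

def deskew_scan(points, point_times, t_acq, velocity, yaw_rate):
    """Motion-compensate scan points to the acquisition time point (sketch).

    points:      (N, 3) points in the sensor frame at their own scan times
    point_times: (N,) timestamp of each scanned point, in seconds
    t_acq:       acquisition time point the scan is corrected to (s)
    velocity:    (3,) vehicle velocity, assumed constant over the scan (m/s)
    yaw_rate:    sensor rotation rate about z, assumed constant (rad/s)
    """
    corrected = np.empty_like(points, dtype=float)
    for i, (pt, t) in enumerate(zip(points, point_times)):
        dt = t_acq - t                         # time interval to the acquisition point
        ang = yaw_rate * dt                    # sensor rotation over the interval
        c, s = np.cos(ang), np.sin(ang)
        Rz = np.array([[c, -s, 0.0],           # relative sensor rotation over dt
                       [s,  c, 0.0],
                       [0.0, 0.0, 1.0]])
        shift = velocity * dt                  # driving distance over the interval
        # undo the sensor motion: express the point in the sensor pose at t_acq
        corrected[i] = Rz.T @ (np.asarray(pt, dtype=float) - shift)
    return corrected
```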
In the embodiment of the disclosure, the point cloud local map may include the line-plane feature information of the vehicle's surroundings, and is generated from the laser point cloud data collected by the laser sensor at historical acquisition time points and the positioning information of the vehicle at those time points.
The line-plane feature information in the point cloud local map is stored and processed using an incremental voxel (Incremental Voxels, iVox) data structure.
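The patent names iVox but does not describe its internals, so the following is only a generic incremental voxel hash map in that spirit (the voxel size, the per-voxel point cap and the 27-voxel neighbor search are illustrative choices, not the patented structure):

```python
from collections import defaultdict
import numpy as np

class IncrementalVoxelMap:
    """Generic incremental voxel hash map (assumption-laden sketch)."""

    def __init__(self, voxel_size=0.5, max_points_per_voxel=32):
        self.voxel_size = voxel_size
        self.max_points = max_points_per_voxel
        self.voxels = defaultdict(list)        # voxel index -> list of points

    def _key(self, p):
        p = np.asarray(p, dtype=float)
        return tuple(np.floor(p / self.voxel_size).astype(int).tolist())

    def insert(self, points):
        """Insert new map points incrementally, capping voxel density."""
        for p in points:
            bucket = self.voxels[self._key(p)]
            if len(bucket) < self.max_points:
                bucket.append(np.asarray(p, dtype=float))

    def nearest_neighbors(self, query, k=5):
        """Approximate k-NN: search the query voxel and its 26 neighbors."""
        q = np.asarray(query, dtype=float)
        cx, cy, cz = self._key(q)
        cand = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cand.extend(self.voxels.get((cx + dx, cy + dy, cz + dz), []))
        if not cand:
            return np.empty((0, 3))
        cand = np.stack(cand)
        order = np.argsort(np.linalg.norm(cand - q, axis=1))
        return cand[order[:k]]
```

The appeal of such a structure is that map updates and nearest-neighbor queries stay cheap as the local map grows, which suits the per-scan matching described below.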
In the embodiment of the disclosure, the process by which the controller in the vehicle determines the first positioning information of the inertial sensor at the acquisition time point may be, for example: determining, according to the data collected by the inertial sensor at each time point before the acquisition time point, the relative position information of the inertial sensor at the acquisition time point, where the relative position information is relative to the initial position of the inertial sensor; and determining the absolute position information of the inertial sensor at the acquisition time point, that is, the first positioning information, by combining the relative position information with the initial position of the inertial sensor.
Step 103, determining error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map at the acquisition time point.
In the embodiment of the disclosure, the laser point cloud data is obtained by the laser sensor scanning the vehicle's surroundings with laser light, and the positions of the line and plane features around the vehicle are fixed; they do not change as the vehicle drives. Point information belonging to the same line or plane feature can therefore be extracted from two different frames of laser point cloud data. Based on this principle, the error state information of the inertial sensor at the acquisition time point can be determined from the laser point cloud data at the acquisition time point and the point cloud local map, and the second positioning information of the laser sensor at the acquisition time point can then be determined.
In the embodiment of the disclosure, the error state information may be a 15-dimensional error state vector comprising: the position error, the velocity error, the attitude error, the accelerometer zero-offset (bias) error and the gyroscope zero-offset (bias) error. These errors are expressed in the longitude and latitude coordinate system, and the position error is expressed in radians.
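A minimal sketch of how such a 15-dimensional error state might be laid out in code (the ordering and names are assumptions for illustration; the patent only specifies the five three-dimensional components):

```python
import numpy as np

# Hypothetical layout of the 15-dimensional error state vector; the grouping
# follows the description above, the ordering is an assumption.
ERR_POS = slice(0, 3)    # position error (radians, longitude/latitude frame)
ERR_VEL = slice(3, 6)    # velocity error
ERR_ATT = slice(6, 9)    # attitude error
ERR_BA  = slice(9, 12)   # accelerometer zero-offset (bias) error
ERR_BG  = slice(12, 15)  # gyroscope zero-offset (bias) error

dx = np.zeros(15)        # error state estimated by the first Kalman filter
print(dx[ERR_ATT])       # e.g. read out the attitude-error block
```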
Step 104, determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information.
Step 105, determining target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle.
In the embodiment of the present disclosure, the process by which the controller in the vehicle executes step 105 may be, for example, inputting the first positioning information, the second positioning information and the historical positioning information into a second Kalman filter, and obtaining the target positioning information output by the second Kalman filter.
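As a rough sketch of this kind of fusion step (a generic linear Kalman-style update, not the filter design disclosed in the patent; the state layout, the direct-observation model H and the noise covariances are assumptions):

```python
import numpy as np

def fuse_positions(x, P, measurements, noise_covs):
    """Fuse several position observations into one estimate (generic sketch).

    x, P:         prior position estimate and covariance, e.g. propagated
                  from the historical positioning information
    measurements: position observations at the acquisition time point,
                  e.g. the first and second positioning information
    noise_covs:   measurement noise covariance for each observation
    """
    I = np.eye(len(x))
    H = I                                      # each source observes position directly
    for z, R in zip(measurements, noise_covs):
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # corrected estimate
        P = (I - K @ H) @ P                    # updated covariance
    return x, P
```

Sources with larger assumed noise covariance naturally receive less weight, which is what lets an extra laser-based observation tighten the lateral estimate when the visual source degrades.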
In an embodiment of the present disclosure, the positioning sensors may further include at least one of: a wheel speed sensor, a vision sensor and a global navigation positioning system. Correspondingly, the process by which the controller executes step 105 may be, for example: for each positioning sensor other than the inertial sensor and the laser sensor, when data collected by that positioning sensor exists at the acquisition time point, determining other positioning information of that positioning sensor at the acquisition time point according to the data it collected at that time point; and determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, the at least one piece of other positioning information and the historical positioning information of the vehicle.
In one example, other sensors may include a wheel speed sensor and a vision sensor. In another example, other sensors may include wheel speed sensors and a global navigation positioning system. In another example, other sensors may include wheel speed sensors, vision sensors, and global navigation positioning systems.
With the multi-source fusion positioning method of the embodiment of the disclosure, data collected by a plurality of positioning sensors on a vehicle are acquired, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, the laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point are acquired, the first positioning information being determined according to the data collected by the inertial sensor at each time point before the acquisition time point; error state information at the acquisition time point is determined according to the laser point cloud data, the first positioning information and the point cloud local map; second positioning information of the laser sensor at the acquisition time point is determined according to the first positioning information and the error state information; and target positioning information of the vehicle at the acquisition time point is determined according to the first positioning information, the second positioning information and the historical positioning information of the vehicle. In this way, the data collected by the laser sensor provide a lateral constraint that is unaffected by factors such as illumination and weather, so the lateral constraint remains available in environments with poor lighting or weather, and the positioning accuracy of the vehicle is improved.
Fig. 2 is a flow chart of a multi-source fusion positioning method according to another embodiment of the present disclosure. It should be noted that, the multi-source fusion positioning method of the present embodiment may be applied to a multi-source fusion positioning device, and the device may be configured in an electronic apparatus, so that the electronic apparatus may perform a multi-source fusion positioning function.
The electronic device may be a vehicle-mounted device in the vehicle, a controller in the vehicle, or the like, which may be used for controlling the vehicle, or the electronic device may also be a device that communicates with the vehicle, where the device may acquire data collected by a positioning sensor on the vehicle, process the data, and obtain positioning information and transmit the positioning information to the vehicle. The following embodiments will be described with reference to an example in which the execution subject is a controller in a vehicle.
As shown in fig. 2, the method comprises the steps of:
step 201, acquiring data acquired by a plurality of positioning sensors on a vehicle; the positioning sensor comprises at least an inertial sensor and a laser sensor.
Step 202, for each acquisition time point of the laser sensor, acquiring the laser point cloud data collected by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point; the first positioning information is determined according to the data collected by the inertial sensor at each time point before the acquisition time point.
Step 203, determining laser point cloud data in a longitude and latitude coordinate system according to the laser point cloud data in the laser sensor coordinate system and the first positioning information.
In the embodiment of the disclosure, the laser point cloud data in the laser sensor coordinate system is the point information of the points around the vehicle relative to the laser sensor. The process by which the controller in the vehicle performs step 203 may be, for example: determining the relative position information between the inertial sensor and the laser sensor, and determining the absolute position information of the points around the vehicle based on that relative position information and the first positioning information.
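A minimal sketch of such a two-stage frame transform (the extrinsics names are hypothetical, and the final conversion from the local Cartesian result to longitude and latitude coordinates is omitted):

```python
import numpy as np

def points_to_world(points_sensor, R_imu_world, t_imu_world,
                    R_lidar_imu, t_lidar_imu):
    """Transform scan points from the laser sensor frame into a world frame.

    points_sensor:            (N, 3) points in the laser sensor frame
    R_imu_world, t_imu_world: inertial sensor pose taken from the first
                              positioning information at the acquisition time
    R_lidar_imu, t_lidar_imu: fixed laser-sensor-to-IMU extrinsics
    """
    # laser sensor frame -> IMU frame
    pts_imu = points_sensor @ R_lidar_imu.T + t_lidar_imu
    # IMU frame -> world frame
    return pts_imu @ R_imu_world.T + t_imu_world
```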
Step 204, performing line-plane feature extraction processing on the laser point cloud data in the longitude and latitude coordinate system to obtain first line-plane feature information.
The first line-plane feature information comprises at least one line feature and at least one plane feature; the line features and plane features are determined based on the curvature of points in the laser point cloud data.
Step 205, performing matching and error-calculation processing on the first line-plane feature information and the second line-plane feature information in the point cloud local map to obtain the error state information.
In the embodiment of the present disclosure, the controller in the vehicle may perform step 205 by, for example, performing matching processing on the first line-plane feature information and the second line-plane feature information and determining the correspondence between them, and then optimizing the first Kalman filter according to that correspondence and the point-plane observation equation to obtain the error state information output by the optimized first Kalman filter.
The core idea of the point-plane observation equation is that when a point lies in a plane, the distance from the point to the plane is 0 (the dot product of a vector lying in the plane and the plane's normal vector is 0). Fig. 3 is a schematic diagram of this principle. In Fig. 3, a laser point (a surface point) is first taken from the first line-plane feature information; in the point cloud local map, the several local map points (nearest neighbors) closest to the surface point are found and used to fit a plane. If the surface point is considered to lie on the fitted plane, then the vector v between the laser point and one of the local map points is perpendicular to the normal vector u of the fitted plane, that is, the dot product of u and v is 0.
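A small sketch of this residual computation (assuming numpy and at least three non-collinear neighbors; fitting the plane by taking the smallest-eigenvalue direction of the neighbor covariance as the normal u is a standard choice, not necessarily the patented one):

```python
import numpy as np

def point_to_plane_residual(p, neighbors):
    """Signed distance of a surface point to the plane fitted through its
    nearest local map points; zero when the point lies in the plane.

    p:         (3,) laser surface point, already in the map frame
    neighbors: (k, 3) nearest local map points, k >= 3
    """
    centroid = neighbors.mean(axis=0)
    cov = np.cov((neighbors - centroid).T)     # 3x3 scatter of the neighbors
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    u = eigvecs[:, 0]                          # normal vector of the fitted plane
    v = p - centroid                           # vector from the plane to the point
    return float(np.dot(u, v))                 # dot product u . v, 0 on the plane
```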
The process by which the controller in the vehicle optimizes the first Kalman filter according to the correspondence between the first line-plane feature information and the second line-plane feature information and the point-plane observation equation, to obtain the error state information output by the optimized filter, may be, for example: taking a surface point from the first line-plane feature information; taking, from the second line-plane feature information, the several local map points in the plane feature nearest to that surface point; substituting the surface point and the plane feature into the point-plane observation equation to obtain the distance between the surface point and the plane corresponding to the plane feature; inputting the distance into the first Kalman filter, obtaining the predicted position information of the surface point, and updating the first Kalman filter; re-determining the distance between the surface point and the plane from the predicted position information; and repeating this process until the distance determined from the surface-point position output by the first Kalman filter is smaller than a preset distance threshold. The parameters of the first Kalman filter at that moment are the error state information.
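The iterative update just described might look roughly like the loop below; `apply_filter_update` is a hypothetical callback standing in for one update of the first Kalman filter, and the threshold and iteration cap are invented values (a control-flow sketch only, reusing `point_to_plane_residual` from the previous sketch):

```python
import numpy as np

def iterate_until_converged(p0, neighbors, apply_filter_update,
                            dist_threshold=0.05, max_iters=10):
    """Repeat residual evaluation and filter update until the point-to-plane
    distance falls below the preset threshold (control-flow sketch).

    apply_filter_update(p, d) is assumed to feed the distance d into the
    first Kalman filter and return the predicted surface-point position
    together with the filter's current error state.
    """
    p = np.asarray(p0, dtype=float)
    error_state = None
    for _ in range(max_iters):
        d = point_to_plane_residual(p, neighbors)
        if abs(d) < dist_threshold:
            break                              # converged: residual small enough
        p, error_state = apply_filter_update(p, d)
    return error_state                         # parameters of the optimized filter
```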
Step 206, determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information.
Step 207, determining target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle.
It should be noted that, for details of steps 201 to 202 and steps 206 to 207, reference may be made to steps 101 to 102 and steps 104 to 105 in the embodiment shown in fig. 1, and detailed description thereof will not be provided here.
With the multi-source fusion positioning method of the embodiment of the disclosure, data collected by a plurality of positioning sensors on a vehicle are acquired, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, the laser point cloud data collected at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point are acquired, the first positioning information being determined according to the data collected by the inertial sensor at each time point before the acquisition time point; laser point cloud data in a longitude and latitude coordinate system are determined according to the laser point cloud data in the laser sensor coordinate system and the first positioning information; line-plane feature extraction processing is performed on the laser point cloud data in the longitude and latitude coordinate system to obtain first line-plane feature information; matching and error-calculation processing are performed on the first line-plane feature information and the second line-plane feature information in the point cloud local map to obtain the error state information; second positioning information of the laser sensor at the acquisition time point is determined according to the first positioning information and the error state information; and target positioning information of the vehicle at the acquisition time point is determined according to the first positioning information, the second positioning information and the historical positioning information of the vehicle. In this way, the data collected by the laser sensor provide a lateral constraint that is unaffected by factors such as illumination and weather, so the lateral constraint remains available in environments with poor lighting or weather, and the positioning accuracy of the vehicle is improved.
Fig. 4 is a schematic structural view of a multi-source fusion positioning device according to an embodiment of the present disclosure.
As shown in fig. 4, the multi-source fusion positioning device may include: a first acquisition module 401, a second acquisition module 402, a first determination module 403, a second determination module 404, and a third determination module 405.
The first acquiring module 401 is configured to acquire data acquired by a plurality of positioning sensors on a vehicle; the positioning sensor at least comprises an inertial sensor and a laser sensor;
a second acquisition module 402, configured to acquire, for each acquisition time point of the laser sensor, laser point cloud data acquired by the laser sensor at the acquisition time point, a point cloud local map, and first positioning information of the inertial sensor at the acquisition time point; the first positioning information is determined according to the data acquired by the inertial sensor at each time point before the acquisition time point;
a first determining module 403, configured to determine error state information at the acquisition time point according to the laser point cloud data, the first positioning information, and the point cloud local map at the acquisition time point;
a second determining module 404, configured to determine second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information;
and a third determining module 405, configured to determine target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information and the historical positioning information of the vehicle.
In one embodiment of the disclosure, the second obtaining module 402 is specifically configured to: acquire at least one scanning time period of the laser sensor, where the laser sensor completes one round of peripheral laser scanning within each scanning time period; acquire, from the at least one scanning time period, a target scanning time period that includes the acquisition time point; and determine the laser point cloud data at the acquisition time point according to each scanning time point in the target scanning time period and the scanning angle and scanned point information of the laser sensor at each scanning time point.
In one embodiment of the present disclosure, the second obtaining module 402 is specifically further configured to determine, for each scanning time point within the target scanning time period, a time interval between the scanning time point and the acquisition time point; determining the driving distance of the vehicle in the time interval according to the driving speed information of the vehicle in the time interval; determining the rotation angle of the laser sensor in the time interval according to the scanning angle at the scanning time point and the scanning angle at the acquisition time point; correcting the point information scanned at the scanning time point according to the driving distance of the vehicle and the rotation angle of the laser sensor in the time interval to obtain the point information at the acquisition time point; and determining the laser point cloud data at the acquisition time point according to the plurality of point information at the acquisition time point.
In one embodiment of the present disclosure, the laser point cloud data is laser point cloud data in a laser sensor coordinate system, and the first determining module 403 is specifically configured to: determine laser point cloud data in a longitude and latitude coordinate system according to the laser point cloud data in the laser sensor coordinate system and the first positioning information; perform line-plane feature extraction processing on the laser point cloud data in the longitude and latitude coordinate system to obtain first line-plane feature information; and perform matching and error-calculation processing on the first line-plane feature information and the second line-plane feature information in the point cloud local map to obtain the error state information.
In one embodiment of the present disclosure, the first determining module 403 is specifically configured to: perform matching processing on the first line-plane feature information and the second line-plane feature information, and determine the correspondence between them; and optimize the first Kalman filter according to that correspondence and the point-plane observation equation, to obtain the error state information output by the optimized first Kalman filter.
In one embodiment of the present disclosure, the third determining module 405 is specifically configured to input the first positioning information, the second positioning information, and the historical positioning information into a second kalman filter, and obtain the target positioning information output by the second kalman filter.
In one embodiment of the present disclosure, the positioning sensors further include at least one of: a wheel speed sensor, a vision sensor and a global navigation positioning system. The third determining module 405 is specifically configured to: for each positioning sensor other than the inertial sensor and the laser sensor, when data collected by that positioning sensor exists at the acquisition time point, determine other positioning information of that positioning sensor at the acquisition time point according to the data it collected at that time point; and determine the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, the at least one piece of other positioning information and the historical positioning information of the vehicle.
With the multi-source fusion positioning device of the embodiment of the disclosure, data collected by a plurality of positioning sensors on a vehicle are acquired, the positioning sensors comprising at least an inertial sensor and a laser sensor; for each acquisition time point of the laser sensor, the laser point cloud data collected at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point are acquired, the first positioning information being determined according to the data collected by the inertial sensor at each time point before the acquisition time point; error state information at the acquisition time point is determined according to the laser point cloud data, the first positioning information and the point cloud local map; second positioning information of the laser sensor at the acquisition time point is determined according to the first positioning information and the error state information; and target positioning information of the vehicle at the acquisition time point is determined according to the first positioning information, the second positioning information and the historical positioning information of the vehicle. In this way, the data collected by the laser sensor provide a lateral constraint that is unaffected by factors such as illumination and weather, so the lateral constraint remains available in environments with poor lighting or weather, and the positioning accuracy of the vehicle is improved.
According to a third aspect of embodiments of the present disclosure, there is also provided a vehicle including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the multi-source fusion positioning method described above.
In order to implement the above-described embodiments, the present disclosure also proposes a storage medium.
Wherein the instructions in the storage medium, when executed by the processor, enable the processor to perform the multi-source fusion positioning method as described above.
To achieve the above embodiments, the present disclosure also provides a computer program product.
Wherein the computer program product, when executed by a processor of a vehicle, enables the vehicle to perform the method as above.
Fig. 5 is a block diagram of a vehicle 500, according to an exemplary embodiment of the present disclosure. For example, the vehicle 500 may be a hybrid vehicle, or may be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 500 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 5, a vehicle 500 may include various subsystems, such as an infotainment system 510, a perception system 520, a decision control system 530, a drive system 540, and a computing platform 550. Vehicle 500 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 500 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 510 may include a communication system, an entertainment system, a navigation system, and the like.
The sensing system 520 may include several sensors for sensing information about the environment surrounding the vehicle 500. For example, the sensing system 520 may include a global positioning system (which may be a GPS system, a Beidou system or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar and a camera device.
Decision control system 530 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 540 may include components that provide powered movement of the vehicle 500. In one embodiment, the drive system 540 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 500 are controlled by the computing platform 550. The computing platform 550 may include at least one processor 551 and memory 552, and the processor 551 may execute instructions 553 stored in the memory 552.
The processor 551 may be any conventional processor, such as a commercially available CPU. The processor may also be, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 552 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition to instructions 553, memory 552 may also store data, such as a point cloud local map. The data stored by memory 552 may be used by computing platform 550.
In an embodiment of the present disclosure, the processor 551 may execute instructions 553 to perform all or part of the steps of the multi-source fusion positioning method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of multi-source fusion localization, the method comprising:
acquiring data acquired by a plurality of positioning sensors on a vehicle; the positioning sensor at least comprises an inertial sensor and a laser sensor;
for each acquisition time point of the laser sensor, acquiring laser point cloud data acquired by the laser sensor at the acquisition time point, a point cloud local map and first positioning information of the inertial sensor at the acquisition time point, wherein the first positioning information is determined according to the data acquired by the inertial sensor at each time point before the acquisition time point;
determining error state information at the acquisition time point according to the laser point cloud data, the first positioning information and the point cloud local map at the acquisition time point;
determining second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information;
and determining target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, and historical positioning information of the vehicle.
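In plainer terms, claim 1 defines a two-stage scheme: an error state is estimated by comparing the laser scan against the point cloud local map under the inertial prior; the error state corrects the inertial pose into a laser pose; and both poses are then fused with the vehicle's positioning history. A minimal sketch of that data flow follows; every name is hypothetical, the matching step is a stub, and a fixed weighted mean stands in for the second Kalman filter of claim 6.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Pose:
    """Toy pose; a production system would carry full SE(3) state."""
    position: np.ndarray  # (3,) metres
    yaw: float            # radians


def estimate_error_state(scan, local_map, imu_pose):
    """Stub for the scan-to-map comparison (detailed in claims 4 and 5).
    Returns a position/heading correction; zero here for illustration."""
    return np.zeros(3), 0.0


def fusion_step(scan, local_map, imu_pose, history):
    # 1. Error state from the laser scan, the inertial prior, and the local map.
    dp, dyaw = estimate_error_state(scan, local_map, imu_pose)
    # 2. Second positioning information: inertial pose corrected by the error state.
    laser_pose = Pose(imu_pose.position + dp, imu_pose.yaw + dyaw)
    # 3. Target positioning information: fuse IMU, laser, and history.
    #    Claim 6 uses a second Kalman filter; a fixed weighted mean stands in here.
    w_imu, w_laser, w_hist = 0.4, 0.4, 0.2
    pos = (w_imu * imu_pose.position + w_laser * laser_pose.position
           + w_hist * history.position)
    yaw = w_imu * imu_pose.yaw + w_laser * laser_pose.yaw + w_hist * history.yaw
    return Pose(pos, yaw)


# Usage: one fusion step at a single acquisition time point.
imu = Pose(np.array([10.0, 5.0, 0.0]), 0.10)
hist = Pose(np.array([9.8, 4.9, 0.0]), 0.09)
print(fusion_step(scan=None, local_map=None, imu_pose=imu, history=hist))
```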
2. The method of claim 1, wherein acquiring the laser point cloud data comprises:
acquiring at least one scanning time period of the laser sensor, the laser sensor completing one full scan of its surroundings within each scanning time period;
selecting, from the at least one scanning time period, at least one target scanning time period containing the acquisition time point;
and determining the laser point cloud data at the acquisition time point according to each scanning time point in the target scanning time period, the scanning angle of the laser sensor at each scanning time point and the scanned point information.
3. The method of claim 2, wherein determining the laser point cloud data at the acquisition time point based on each scanning time point within the target scanning time period, and the scanning angle and scanned point information of the laser sensor at each scanning time point, comprises:
determining, for each scanning time point within the target scanning time period, a time interval between the scanning time point and the acquisition time point;
determining the driving distance of the vehicle in the time interval according to the driving speed information of the vehicle in the time interval;
determining the rotation angle of the laser sensor in the time interval according to the scanning angle at the scanning time point and the scanning angle at the acquisition time point;
correcting the point information scanned at the scanning time point according to the driving distance of the vehicle and the rotation angle of the laser sensor in the time interval to obtain the point information at the acquisition time point;
and determining the laser point cloud data at the acquisition time point according to the plurality of point information at the acquisition time point.
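Claims 2 and 3 amount to motion compensation ("deskewing"): the returns of one sweep are captured at slightly different times, so each point is re-projected to where it would have been measured at the acquisition time point, using the travel distance and rotation accumulated over the intervening interval. A sketch under simplifying assumptions: planar motion, constant speed along +x, and the rotation angle approximated by a constant yaw rate rather than derived from the scan angles as claim 3 specifies; all names are illustrative.

```python
import numpy as np


def target_scan_periods(scan_periods, t_acq):
    """Claim 2: keep only the scan period(s) [t_start, t_end] containing t_acq."""
    return [(t0, t1) for (t0, t1) in scan_periods if t0 <= t_acq <= t1]


def deskew(points, point_times, t_acq, speed, yaw_rate):
    """Claim 3, simplified: re-project each return to the acquisition time point.

    points:      (N, 3) returns in the sensor frame at their own timestamps
    point_times: (N,) timestamp of each return, in seconds
    speed:       vehicle speed in m/s, assumed constant over the sweep
    yaw_rate:    vehicle yaw rate in rad/s (stand-in for the scan-angle difference)
    """
    out = np.empty_like(points)
    for i in range(len(points)):
        dt = t_acq - point_times[i]            # time interval (claim 3, step 1)
        dist = speed * dt                      # travel distance (step 2)
        dyaw = yaw_rate * dt                   # rotation angle (step 3)
        c, s = np.cos(dyaw), np.sin(dyaw)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        # Step 4: undo the motion between the return time and t_acq.
        # Signs depend on frame conventions; forward motion along +x assumed.
        out[i] = rot @ points[i] - np.array([dist, 0.0, 0.0])
    return out


# Usage: a sweep spanning [0.0, 0.1] s, deskewed to the acquisition time 0.1 s.
pts = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
print(target_scan_periods([(0.0, 0.1), (0.1, 0.2)], 0.1))
print(deskew(pts, np.array([0.00, 0.05]), t_acq=0.1, speed=10.0, yaw_rate=0.2))
```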
4. The method of claim 1, wherein the laser point cloud data is laser point cloud data in a laser sensor coordinate system, and wherein determining the error state information at the acquisition time point according to the laser point cloud data, the first positioning information, and the point cloud local map at the acquisition time point comprises:
determining laser point cloud data in a longitude and latitude coordinate system according to the laser point cloud data in the laser sensor coordinate system and the first positioning information;
performing line and plane feature extraction on the laser point cloud data in the longitude and latitude coordinate system to obtain first line-plane feature information;
and matching the first line-plane feature information against second line-plane feature information in the point cloud local map and performing error calculation on the matched features to obtain the error state information.
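Claim 4 first re-expresses the scan in the map frame using the first positioning information, then extracts line and plane features. Below is a sketch of the rigid transform plus one common plane-feature test, the LOAM-style eigenvalue heuristic on a local covariance; the heuristic is an assumption of this sketch, not something the claim specifies.

```python
import numpy as np


def to_world_frame(points_laser, rotation, translation):
    """Rigidly map (N, 3) laser-frame points into the map frame: p_w = R p_l + t."""
    return points_laser @ rotation.T + translation


def is_planar(neighbors, ratio=0.01):
    """Plane test: on a flat patch the smallest eigenvalue of the neighborhood
    covariance is far below the largest one."""
    eigvals = np.linalg.eigvalsh(np.cov(neighbors.T))  # ascending order
    return eigvals[0] < ratio * eigvals[2]


# Usage: transform a flat patch with a yaw-only pose and test its planarity.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.random(30), rng.random(30), np.zeros(30)])
yaw = 0.3
rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                [np.sin(yaw),  np.cos(yaw), 0.0],
                [0.0,          0.0,         1.0]])
print(to_world_frame(patch, rot, np.array([100.0, 200.0, 0.5]))[0], is_planar(patch))
```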
5. The method of claim 4, wherein matching the first line-plane feature information with the second line-plane feature information in the point cloud local map and performing the error calculation to obtain the error state information comprises:
matching the first line-plane feature information with the second line-plane feature information, and determining the correspondence between the first line-plane feature information and the second line-plane feature information;
and optimizing a first Kalman filter according to the correspondence between the first line-plane feature information and the second line-plane feature information and a point-to-plane observation equation, and acquiring the error state information output by the optimized first Kalman filter.
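The point-to-plane observation equation in claim 5 is conventionally written as: for a scan point p matched to a map plane with unit normal n and offset d, the residual is n·p - d, and its Jacobian with respect to a small position error is n itself. A single-update sketch over a 3-state position-error filter; the patent does not spell out its state vector, so the 3-state layout and noise values are assumptions.

```python
import numpy as np


def point_to_plane_update(x, P, p_world, n, d, sigma=0.05):
    """One Kalman update of a 3-state position-error estimate x (covariance P)
    from a scan point p_world matched to the map plane n . q = d (n unit-length)."""
    r = float(n @ p_world - d)        # point-to-plane residual
    H = n.reshape(1, 3)               # Jacobian of the residual w.r.t. position error
    S = H @ P @ H.T + sigma ** 2      # innovation covariance (1 x 1)
    K = (P @ H.T) / S                 # Kalman gain (3 x 1)
    x = x + (K * r).ravel()           # corrected error state
    P = (np.eye(3) - K @ H) @ P
    return x, P


# Usage: one matched point/plane pair pulls the error state toward the plane.
x, P = np.zeros(3), 0.1 * np.eye(3)
n, d = np.array([0.0, 0.0, 1.0]), 0.0              # ground plane z = 0
x, P = point_to_plane_update(x, P, np.array([1.0, 2.0, 0.08]), n, d)
print(x, np.diag(P))
```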
6. The method of claim 1, wherein determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, and the historical positioning information of the vehicle comprises:
inputting the first positioning information, the second positioning information, and the historical positioning information into a second Kalman filter, and obtaining the target positioning information output by the second Kalman filter.
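One conventional reading of claim 6, assumed here rather than mandated by it, predicts the state from the historical positioning information and then applies the first and second positioning information as two direct position measurements with different noise levels; the laser fix, being less noisy, pulls the estimate harder.

```python
import numpy as np


def fuse_step(x_hist, P_hist, z_imu, z_laser, q=0.05, r_imu=0.50, r_laser=0.10):
    """Second-filter sketch over a 3-D position state.

    x_hist, P_hist: historical estimate and its covariance
    z_imu, z_laser: first/second positioning information as position measurements
    q, r_imu, r_laser: illustrative process/measurement noise variances
    """
    x, P = x_hist, P_hist + q * np.eye(3)          # predict from history
    for z, r in ((z_imu, r_imu), (z_laser, r_laser)):
        S = P + r * np.eye(3)                      # innovation covariance (H = I)
        K = P @ np.linalg.inv(S)                   # Kalman gain
        x = x + K @ (z - x)
        P = (np.eye(3) - K) @ P
    return x, P


# Usage: fuse one IMU fix and one laser fix with the historical estimate.
x, P = fuse_step(np.zeros(3), 0.2 * np.eye(3),
                 z_imu=np.array([0.30, 0.00, 0.0]),
                 z_laser=np.array([0.10, 0.02, 0.0]))
print(x)
```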
7. The method of any one of claims 1 to 6, wherein the plurality of positioning sensors further comprises at least one of: a wheel speed sensor, a vision sensor, and a global navigation positioning system; and wherein determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, and the historical positioning information of the vehicle comprises:
for each other positioning sensor, among the plurality of positioning sensors, other than the inertial sensor and the laser sensor: when data acquired by the other positioning sensor exists at the acquisition time point, determining other positioning information of the other positioning sensor at the acquisition time point according to the data acquired by the other positioning sensor at the acquisition time point;
and determining the target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, the at least one piece of other positioning information, and the historical positioning information of the vehicle.
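Claim 7 makes the extra sensors opportunistic: a wheel speed, vision, or global navigation fix only joins the fusion when that sensor actually produced data at the acquisition time point. A sketch of this availability gating; the buffer layout and the time tolerance are assumptions.

```python
from bisect import bisect_left


def measurements_at(t_acq, buffers, tol=0.005):
    """Per optional sensor, return the sample closest to t_acq within tol seconds.

    buffers: {sensor_name: time-sorted list of (timestamp, data) tuples}
    Sensors with no sample near t_acq are simply left out of the fusion (claim 7).
    """
    available = {}
    for name, buf in buffers.items():
        if not buf:
            continue
        times = [ts for ts, _ in buf]
        i = bisect_left(times, t_acq)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(buf)]
        j = min(candidates, key=lambda k: abs(times[k] - t_acq))
        if abs(times[j] - t_acq) <= tol:
            available[name] = buf[j][1]
    return available


# Usage: GNSS has a sample at t = 10.001 s, the camera does not.
bufs = {"gnss": [(9.0, "fix@9"), (10.001, "fix@10")], "camera": [(9.95, "img")]}
print(measurements_at(10.0, bufs))  # -> {'gnss': 'fix@10'}
```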
8. A multi-source fusion positioning device, the device comprising:
a first acquisition module configured to acquire data collected by a plurality of positioning sensors on a vehicle, the positioning sensors comprising at least an inertial sensor and a laser sensor;
a second acquisition module configured to acquire laser point cloud data collected by the laser sensor at each acquisition time point, a point cloud local map, and first positioning information of the inertial sensor at the acquisition time point, the first positioning information being determined according to the data collected by the inertial sensor at each time point before the acquisition time point;
a first determining module configured to determine error state information at the acquisition time point according to the laser point cloud data, the first positioning information, and the point cloud local map at the acquisition time point;
a second determining module configured to determine second positioning information of the laser sensor at the acquisition time point according to the first positioning information and the error state information;
and a third determining module configured to determine target positioning information of the vehicle at the acquisition time point according to the first positioning information, the second positioning information, and historical positioning information of the vehicle.
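A hypothetical skeleton showing how the five modules of claim 8 might map onto one class; names and signatures are illustrative only, and the bodies are stubs.

```python
class MultiSourceFusionDevice:
    """Module layout mirroring claim 8; each method corresponds to one module."""

    def acquire_sensor_data(self, vehicle):                        # first acquisition module
        raise NotImplementedError

    def acquire_scan_map_and_imu_pose(self, t_acq):                # second acquisition module
        raise NotImplementedError

    def determine_error_state(self, scan, imu_pose, local_map):    # first determining module
        raise NotImplementedError

    def determine_laser_pose(self, imu_pose, error_state):         # second determining module
        raise NotImplementedError

    def determine_target_pose(self, imu_pose, laser_pose, history):  # third determining module
        raise NotImplementedError
```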
9. A vehicle, characterized by comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
implement the steps of the multi-source fusion positioning method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the multi-source fusion positioning method of any one of claims 1 to 7.
CN202310612615.9A 2023-05-26 2023-05-26 Multisource fusion positioning method, multisource fusion positioning device and vehicle Active CN116678423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310612615.9A CN116678423B (en) 2023-05-26 2023-05-26 Multisource fusion positioning method, multisource fusion positioning device and vehicle


Publications (2)

Publication Number Publication Date
CN116678423A true CN116678423A (en) 2023-09-01
CN116678423B CN116678423B (en) 2024-04-16

Family

ID=87782937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310612615.9A Active CN116678423B (en) 2023-05-26 2023-05-26 Multisource fusion positioning method, multisource fusion positioning device and vehicle

Country Status (1)

Country Link
CN (1) CN116678423B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958266A * 2018-08-09 2018-12-07 北京智行者科技有限公司 A map data acquisition method
JP2020187126A (en) * 2019-05-15 2020-11-19 宜陞有限公司 Navigation facility for driverless vehicle
CN115077541A (en) * 2022-07-01 2022-09-20 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115526914A (en) * 2022-09-02 2022-12-27 燕山大学 Robot real-time positioning and color map fusion mapping method based on multiple sensors
CN115494533A (en) * 2022-09-23 2022-12-20 潍柴动力股份有限公司 Vehicle positioning method, device, storage medium and positioning system
CN115797490A (en) * 2022-12-19 2023-03-14 广州宸境科技有限公司 Drawing construction method and system based on laser vision fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马振强 (Ma Zhenqiang): "Research on Mapping and Positioning of Unmanned Vehicles Based on Multi-Sensor Fusion", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 02, pages 36-51 *

Also Published As

Publication number Publication date
CN116678423B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109214248B (en) Method and device for identifying laser point cloud data of unmanned vehicle
US10739459B2 (en) LIDAR localization
KR20210111180A (en) Method, apparatus, computing device and computer-readable storage medium for positioning
CN112113574A (en) Method, apparatus, computing device and computer-readable storage medium for positioning
JP5105596B2 (en) Travel route determination map creation device and travel route determination map creation method for autonomous mobile body
CN112419776B (en) Autonomous parking method and device, automobile and computing equipment
CN112068152A (en) Method and system for simultaneous 2D localization and 2D map creation using a 3D scanner
CN115878494A (en) Test method and device for automatic driving software system, vehicle and storage medium
CN111402328A (en) Pose calculation method and device based on laser odometer
CN116380088B (en) Vehicle positioning method and device, vehicle and storage medium
CN112041210B (en) System and method for autopilot
CN116678423B (en) Multisource fusion positioning method, multisource fusion positioning device and vehicle
CN115718304A (en) Target object detection method, target object detection device, vehicle and storage medium
EP4134623A1 (en) Drive device, vehicle, and method for automated driving and/or assisted driving
EP4330726A1 (en) Systems and methods for producing amodal cuboids
CN117716312A (en) Methods, systems, and computer program products for resolving hierarchical ambiguity of radar systems of autonomous vehicles
CN116659529B (en) Data detection method, device, vehicle and storage medium
CN113826145A (en) System and method for distance measurement
CN116630923B (en) Marking method and device for vanishing points of roads and electronic equipment
WO2023017624A1 (en) Drive device, vehicle, and method for automated driving and/or assisted driving
CN115900771B (en) Information determination method, device, vehicle and storage medium
CN115471513B (en) Point cloud segmentation method and device
CN116767224B (en) Method, device, vehicle and storage medium for determining a travelable region
CN116503482B (en) Vehicle position acquisition method and device and electronic equipment
JP7483946B2 (en) How to determine the initial posture of the vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant