CN117647254B - Fusion positioning method, device, equipment and storage medium for automatic driving vehicle - Google Patents

Fusion positioning method, device, equipment and storage medium for automatic driving vehicle

Info

Publication number
CN117647254B
CN117647254B (application CN202410123304.0A)
Authority
CN
China
Prior art keywords
positioning
determining
value
current
count value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410123304.0A
Other languages
Chinese (zh)
Other versions
CN117647254A
Inventor
朱磊 (Zhu Lei)
费再慧 (Fei Zaihui)
李岩 (Li Yan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202410123304.0A
Publication of CN117647254A
Application granted
Publication of CN117647254B
Legal status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The application provides a fusion positioning method, device, equipment and storage medium for an automatic driving vehicle. The method comprises the following steps: when fusion positioning is entered, determining the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority, the positioning mode being one of real-time differential positioning RTK, laser SLAM, vision auxiliary positioning VAL, or vision SLAM; determining a predicted value of the switched positioning mode; determining a virtual observation point according to the current conversion count value and the predicted value; and inputting the virtual observation point into a filter to perform fusion positioning. When the method performs fusion positioning, the virtual observation point is determined according to the predicted value of the switched positioning mode and is input into the filter; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.

Description

Fusion positioning method, device, equipment and storage medium for automatic driving vehicle
Technical Field
The application relates to the technical field of automatic driving, in particular to a fusion positioning method, device and equipment for an automatic driving vehicle and a storage medium.
Background
When a vehicle drives autonomously, high-precision positioning is required as a support to ensure stable driving in various complex scenes. High-precision positioning in complex scenes is difficult to achieve with a single sensor, so fusion positioning based on multi-sensor information is required, for example a fusion positioning scheme based on an IMU (Inertial Measurement Unit) and RTK (real-time differential positioning).
RTK can be disturbed when the vehicle encounters urban canyons, tunnels, and similar scenes, and then cannot provide high-precision positioning information. A common solution is to add laser SLAM (Simultaneous Localization and Mapping) to compensate for the positioning result.
However, the output frequency of the laser SLAM is generally lower than that of the RTK/IMU; for example, the laser SLAM typically outputs at 5 Hz while the RTK/IMU outputs at 100 Hz. There can thus be frames in which the RTK/IMU outputs but the laser SLAM does not, and the laser SLAM may also fail to output a positioning result because of occlusion, interference, and similar conditions. Directly exiting laser SLAM positioning at such moments causes a jump in the positioning track, resulting in vehicle shake or weaving.
Disclosure of Invention
In order to solve one of the above technical defects, the application provides a fusion positioning method, device, equipment and storage medium for an automatic driving vehicle.
In a first aspect of the present application, there is provided a fusion positioning method for an automatic driving vehicle, the method comprising:
when fusion positioning is entered, determining the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority; the positioning mode is one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, or vision SLAM;
determining a predicted value of the switched positioning mode;
determining a virtual observation point according to the current conversion count value and the predicted value;
and inputting the virtual observation points into a filter, and performing fusion positioning.
Optionally, the priorities of the positioning modes are, from high to low: real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, vision SLAM.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is 0, the predicted value is determined as the virtual observation point.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is greater than 0, decrementing the current conversion count value by 1; determining the single frame change displacement as the quotient of the current position difference of the switched positioning mode and the state transition time; calculating the current observation value according to the single frame change displacement, the current conversion count value and the predicted value; and taking the current observation value as the virtual observation point.
Optionally, before the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time, the method further includes:
performing time synchronization and frequency expansion on the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM; the output frequencies of the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM after frequency expansion are the same as the output frequency of the real-time differential positioning RTK;
based on the frequency-expanded information, predicting the difference between the current position of each positioning mode in the Universal Transverse Mercator (UTM) coordinate system and the current position of the GPS in the UTM coordinate system.
Optionally, the initial value of the state transition time is 300;
the state transition time is updated to 600 when the single frame change displacement is greater than 0.2 meters and less than or equal to 0.4 meters, or the vehicle speed is greater than 40 km/h and less than or equal to 50 km/h;
the state transition time is updated to 900 when the single frame change displacement is greater than 0.4 meters or the vehicle speed is greater than 40 km/h.
Optionally, calculating the current observed value according to the single frame change displacement, the current conversion count value and the predicted value includes:
calculating the product of the current conversion count value and the single frame change displacement;
the difference between the predicted value and the product is determined as the current observed value.
In a second aspect of the present application, there is provided an automatic driving vehicle fusion positioning device, the device comprising:
the first determining module, used for determining, when fusion positioning is entered, the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority; the positioning mode is one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, or vision SLAM;
the second determining module is used for determining a predicted value of the switched positioning mode;
the third determining module is used for determining a virtual observation point according to the current conversion count value and the predicted value;
and the fusion positioning module is used for inputting the virtual observation points into the filter to perform fusion positioning.
In a third aspect of the present application, there is provided an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method as described in the first aspect above.
In a fourth aspect of the present application, there is provided a computer-readable storage medium having a computer program stored thereon; the computer program is executed by a processor to implement the method as described in the first aspect above.
The application provides a fusion positioning method, device, equipment and storage medium for an automatic driving vehicle, wherein the method comprises the following steps: when fusion positioning is entered, determining a switched positioning mode according to the positioning mode of the previous frame and a preset priority of the positioning mode; the positioning mode comprises one of real-time differential positioning RTK, laser SLAM, vision auxiliary positioning VAL or vision SLAM; determining a predicted value of the switched positioning mode; determining a virtual observation point according to the current conversion count value and the predicted value; and inputting the virtual observation points into a filter, and performing fusion positioning.
When the method performs fusion positioning, the virtual observation point is determined according to the predicted value of the switched positioning mode and is input into the filter for fusion positioning; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a schematic flow chart of an automatic driving vehicle fusion positioning method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an automatic driving vehicle fusion positioning device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments of the present application are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all embodiments. It should be noted that, in the absence of conflict, the embodiments and the features in the embodiments may be combined with each other.
In the process of realizing the application, the inventors found that in current fusion positioning the output frequency of the laser SLAM is generally lower than that of the RTK/IMU, so there are frames in which the RTK/IMU outputs but the laser SLAM does not; the laser SLAM may also fail to output a positioning result because of occlusion, interference, and similar conditions. Directly exiting laser SLAM positioning at such moments causes the positioning track to jump, making the vehicle shake or weave.
In view of the above problems, embodiments of the present application provide a fusion positioning method, device, equipment, and storage medium for an automatic driving vehicle. The method comprises: when fusion positioning is entered, determining the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority, the positioning mode being one of real-time differential positioning RTK, laser SLAM, vision auxiliary positioning VAL, or vision SLAM; determining a predicted value of the switched positioning mode; determining a virtual observation point according to the current conversion count value and the predicted value; and inputting the virtual observation point into a filter to perform fusion positioning. When the method performs fusion positioning, the virtual observation point is determined according to the predicted value of the switched positioning mode and is input into the filter; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
Referring to fig. 1, the implementation process of the fusion positioning method for the automatic driving vehicle provided in this embodiment is as follows:
101, when fusion positioning is entered, determine the switched positioning mode according to the positioning mode of the previous frame and the preset positioning-mode priority.
The positioning mode is one of RTK (real-time differential positioning), laser SLAM (Simultaneous Localization and Mapping), VAL (vision auxiliary positioning), or visual SLAM.
VAL achieves positioning by matching the vehicle's motion against a high-precision map, and visual SLAM achieves positioning by matching the vehicle's motion against a self-built visual semantic map.
In addition, each positioning mode has a corresponding priority; from high to low the priorities are: RTK, laser SLAM, VAL, visual SLAM. If numbers are used to denote both the positioning mode and its priority, a smaller number means a higher priority: for example, RTK may be denoted as 0, laser SLAM as 1, VAL as 2, and visual SLAM as 3.
When this step is executed, the positioning mode of the previous frame is taken as the positioning mode before switching, and a positioning mode whose priority is lower than that of the mode before switching is then selected as the switched positioning mode according to the priorities.
In a specific implementation, the value of the parameter slam_flag_last, i.e. the positioning mode of the previous frame, may be read. For example, slam_flag_last = 0 indicates that the positioning mode of the previous frame is RTK; the next value after 0 (i.e. 1) is determined according to the priority setting, and the laser SLAM corresponding to 1 is the switched positioning mode.
For convenience of description, this embodiment denotes the positioning mode before switching as before_switching and the switched positioning mode as after_switching.
That is, if before_switching = 0 (RTK), then after_switching = 1 (laser SLAM); if before_switching = 1 (laser SLAM), then after_switching = 2 (VAL); if before_switching = 2 (VAL), then after_switching = 3 (visual SLAM).
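To illustrate, the switching logic can be sketched in Python as follows (a minimal sketch: the function name, and the behavior at the lowest-priority mode, which this embodiment does not specify, are illustrative assumptions):

```python
# Positioning modes, ordered by priority (smaller value = higher priority),
# following the numbering used in this embodiment.
RTK, LASER_SLAM, VAL, VISUAL_SLAM = 0, 1, 2, 3

def switch_positioning_mode(slam_flag_last: int) -> int:
    """Pick the switched mode: the next mode below the previous
    frame's mode in the priority order."""
    if slam_flag_last >= VISUAL_SLAM:
        # Already at the lowest priority; no lower mode to fall back to
        # (assumed behavior, not stated in the patent).
        return VISUAL_SLAM
    return slam_flag_last + 1

# Example: the previous frame used RTK (before_switching = 0),
# so after_switching becomes laser SLAM (1).
assert switch_positioning_mode(RTK) == LASER_SLAM
```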
It should be noted that the execution condition of this step is entering fusion positioning. Whether to enter fusion positioning is determined by an existing scheme, for example by monitoring the availability of the RTK positioning signal: if an RTK signal with status 52 (floating solution) appears, the positioning error is less than 0.5 m, and it is considered necessary to enter fusion positioning; if an RTK signal with status 12 (single-point solution) appears, the positioning error is 1-5 m, and it is likewise considered necessary to enter fusion positioning; and if status 32 appears, the positioning error is on the order of hundreds of meters, and fusion positioning is again considered necessary. This embodiment does not limit the specific scheme for deciding whether to enter fusion positioning.
In a specific implementation, the first operation after switching the positioning mode is to initialize the conversion count value, whose initial value is 300, corresponding to 3 s (seconds). The conversion count value indicates the number of times virtual observation is still required; reaching 0 indicates the end of virtual observation. For convenience of description, this embodiment uses sensor_change_cnt to represent the conversion count value, i.e. the initial value of sensor_change_cnt is 300.
102, determine the predicted value of the switched positioning mode.
In this step, the predicted value of after_switching is determined by an existing method, and details are not repeated here. For convenience of description, this embodiment uses after_switching_new_pos to represent the predicted value of the switched positioning mode.
103, determine the virtual observation point according to the current conversion count value and the predicted value.
In the specific implementation of this step, if the current conversion count value is 0, the predicted value is determined as the virtual observation point. If the current conversion count value is greater than 0, it is decremented by 1; the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time; the current observation value is calculated from the single frame change displacement, the current conversion count value and the predicted value; and the current observation value is taken as the virtual observation point.
For convenience of description, this embodiment uses sensor_utm_prediction to represent the virtual observation point; that is, if the current sensor_change_cnt is 0, sensor_utm_prediction = after_switching_new_pos. If the current sensor_change_cnt is greater than 0 (i.e. positive), steps 201 to 203 are performed to obtain sensor_utm_prediction.
201, decrement the current conversion count value by 1.
I.e., sensor_change_cnt=sensor_change_cnt-1.
202, determine the single frame change displacement as the quotient of the current position difference of the switched positioning mode and the state transition time.
When step 202 is implemented, the current position difference of the switched positioning mode is used to calculate the single frame change displacement; therefore, before step 202 is executed, the current position difference of each positioning mode is determined first. The specific implementation process is shown in steps 301 to 302.
301, perform time synchronization and frequency expansion on the laser SLAM, VAL and visual SLAM.
The output frequencies of the laser SLAM, the VAL and the visual SLAM after frequency expansion are the same as the output frequency of the RTK.
The output frequency of the RTK is 100 Hz, the output frequency of the laser SLAM is 5 Hz, and the output frequencies of the VAL and the visual SLAM are both 10 Hz. In performing step 301, the laser SLAM, VAL and visual SLAM are time-synchronized and predicted forward using the IMU (Inertial Measurement Unit) and the odometer, expanding their output frequency to 100 Hz.
302, based on the frequency-expanded information, predict the current position difference of each positioning mode in the UTM (Universal Transverse Mercator) coordinate system.
In step 302, the information of each positioning mode is predicted forward to the current time, and the current position difference of each positioning mode in the UTM coordinate system is computed relative to the current GPS (Global Positioning System) position.
The current position difference is the difference between the current position of a positioning mode in the UTM coordinate system and the current position of the GPS in the UTM coordinate system.
For example, if the current position of the laser SLAM in the UTM coordinate system is lidar_pos and the current position of the GPS in the UTM coordinate system is gps_pos, the current position difference of the laser SLAM is lidar_pos - gps_pos. Similarly, if the current position of the VAL is val_pos, its current position difference is val_pos - gps_pos; and if the current position of the visual SLAM is vslam_pos, its current position difference is vslam_pos - gps_pos.
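Steps 301 and 302 can be illustrated with the following sketch (this embodiment only states that the IMU and odometer are used for prediction; the constant-velocity model, the function name, and all numeric values here are illustrative assumptions):

```python
import numpy as np

def expand_to_100hz(last_pose, last_time, now, velocity):
    """Dead-reckon a low-rate pose (e.g. a 5 Hz laser SLAM fix) forward
    to the current 100 Hz tick. A constant-velocity stand-in for the
    IMU/odometer prediction described in step 301."""
    return last_pose + velocity * (now - last_time)

# Step 302: current position difference of each mode in the UTM frame,
# e.g. lidar_pos - gps_pos for laser SLAM (all values illustrative).
lidar_pos = expand_to_100hz(np.array([500100.00, 4400200.00]),
                            last_time=10.00, now=10.01,
                            velocity=np.array([8.0, 0.5]))
gps_pos = np.array([500100.10, 4400200.00])
lidar_diff = lidar_pos - gps_pos   # analogous for val_pos and vslam_pos
```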
For convenience of description, this embodiment uses sensor_change_dp to represent the single frame change displacement, i.e. the displacement that needs to change each frame during the switching process; after_switching_pos - gps_pos represents the current position difference of the switched positioning mode; and sensor_change_time represents the state transition time. Then, in executing step 202, sensor_change_dp = (after_switching_pos - gps_pos) / sensor_change_time.
In addition, the initial value of the state transition time sensor_change_time is 300, i.e. sensor_change_time = 300, corresponding to 3 s. It then changes as the single frame change displacement sensor_change_dp or the vehicle speed changes (what changes is the current value of sensor_change_time: after sensor_change_time is initialized to 300, its current value is updated as sensor_change_dp or the vehicle speed changes).
When the single frame change displacement sensor_change_dp is greater than 0.2 m (meters) and at most 0.4 m, or the vehicle speed is greater than 40 km/h and at most 50 km/h, the state transition time is updated to 600, i.e. sensor_change_time = 600, corresponding to 6 s.
When the single frame change displacement sensor_change_dp is greater than 0.4 m or the vehicle speed is greater than 40 km/h, the state transition time is updated to 900, i.e. sensor_change_time = 900, corresponding to 9 s.
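The update rule for sensor_change_time can be sketched as follows (thresholds are taken from this embodiment; as written, the speed conditions of the 600 and 900 cases overlap above 40 km/h, and this sketch resolves the overlap by checking the 900 case first):

```python
def update_state_transition_time(sensor_change_dp: float,
                                 speed_kmh: float) -> int:
    """Return sensor_change_time in 100 Hz frames, given the single frame
    change displacement (meters) and the vehicle speed (km/h)."""
    if sensor_change_dp > 0.4 or speed_kmh > 40.0:
        return 900                      # corresponds to 9 s
    if 0.2 < sensor_change_dp <= 0.4 or 40.0 < speed_kmh <= 50.0:
        return 600                      # corresponds to 6 s
    return 300                          # initial value, 3 s
```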
203, calculate the current observation value according to the single frame change displacement, the current conversion count value and the predicted value, and take the current observation value as the virtual observation point.
This step is implemented as follows: calculate the product of the current conversion count value and the single frame change displacement, determine the difference between the predicted value and the product as the current observation value, and take the current observation value as the virtual observation point.
That is, sensor_utm_prediction = after_switching_new_pos - sensor_change_cnt * sensor_change_dp.
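Steps 201 to 203 combine into the following sketch of the virtual-observation computation (variable names follow this embodiment, while the function wrapper and the NumPy vector types are illustrative assumptions):

```python
import numpy as np

def virtual_observation(sensor_change_cnt: int,
                        after_switching_new_pos: np.ndarray,
                        position_diff: np.ndarray,
                        sensor_change_time: int):
    """Return (virtual observation point, updated count). position_diff
    is after_switching_pos - gps_pos in the UTM frame."""
    if sensor_change_cnt == 0:
        # Transition finished: the predicted value is used directly.
        return after_switching_new_pos, 0
    sensor_change_cnt -= 1                                     # step 201
    sensor_change_dp = position_diff / sensor_change_time      # step 202
    # Step 203: back off from the predicted value by the displacement
    # still to be absorbed over the remaining frames.
    obs = after_switching_new_pos - sensor_change_cnt * sensor_change_dp
    return obs, sensor_change_cnt
```

As the count decreases frame by frame, the observation slides from near the pre-switch track toward the predicted value of the new mode, which is what removes the jump.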
104, input the virtual observation point into the filter and perform fusion positioning.
Since fusion positioning is a continuous process, the fusion positioning method for an automatic driving vehicle of this embodiment is executed repeatedly. Step 101 and the initialization of sensor_change_cnt are not executed on every pass; they are executed only when the positioning mode is switched. After switching to a positioning mode, while fusion positioning continues in that mode, only steps 102 to 104 are executed repeatedly.
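As an overview, a per-frame driver might look like the following sketch (illustrative glue code reusing the sketches above; predict_position, position_diff and kalman_update are placeholders for the existing prediction and filtering methods this embodiment refers to):

```python
import numpy as np

# Placeholder hooks for the existing methods this embodiment relies on.
def predict_position(mode):   # predicted value of the switched mode
    return np.zeros(2)
def position_diff(mode):      # after_switching_pos - gps_pos in UTM
    return np.zeros(2)
def kalman_update(flt, obs):  # measurement update of the fusion filter
    pass

class FusionState:
    def __init__(self):
        self.mode = RTK
        self.sensor_change_cnt = 0
        self.sensor_change_time = 300
        self.filter = None

def fusion_step(state: FusionState, mode_switched: bool) -> None:
    """One 100 Hz fusion-positioning tick."""
    if mode_switched:
        state.mode = switch_positioning_mode(state.mode)   # step 101
        state.sensor_change_cnt = 300                      # counter init
    pred = predict_position(state.mode)                    # step 102
    obs, state.sensor_change_cnt = virtual_observation(    # step 103
        state.sensor_change_cnt, pred,
        position_diff(state.mode), state.sensor_change_time)
    kalman_update(state.filter, obs)                       # step 104
```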
According to the fusion positioning method for an automatic driving vehicle provided by this embodiment, the outputs of the positioning modes before and after switching are both taken into account during fusion positioning, and virtual observation points are added, which avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
This embodiment provides a fusion positioning method for an automatic driving vehicle: when fusion positioning is entered, the switched positioning mode is determined according to the positioning mode of the previous frame and a preset positioning-mode priority, the positioning mode being one of real-time differential positioning RTK, laser SLAM, vision auxiliary positioning VAL, or vision SLAM; a predicted value of the switched positioning mode is determined; a virtual observation point is determined according to the current conversion count value and the predicted value; and the virtual observation point is input into a filter for fusion positioning. In the method provided by this embodiment, the virtual observation point is determined according to the predicted value of the switched positioning mode and input into the filter, which avoids the jump in the positioning track caused by the different output frequencies when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
Based on the same inventive concept as the above fusion positioning method for an automatic driving vehicle, this embodiment provides an automatic driving vehicle fusion positioning device, as shown in fig. 2, including:
the first determining module 201 is configured to determine, when the fusion positioning is entered, a positioning mode after switching according to a positioning mode of a previous frame and a preset priority of the positioning mode.
The positioning mode comprises one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL or vision SLAM.
The second determining module 202 is configured to determine the predicted value of the positioning mode after switching determined by the first determining module 201.
The third determining module 203 is configured to determine a virtual observation point according to the current conversion count value and the predicted value determined by the second determining module 202.
And the fusion positioning module 204 is configured to input the virtual observation point determined by the third determining module 203 into a filter to perform fusion positioning.
Optionally, the priorities of the positioning modes are, from high to low: real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, vision SLAM.
Optionally, the third determining module 203 is further configured to determine the predicted value as the virtual observation point if the current conversion count value is 0.
Optionally, the third determining module 203 is further configured to decrement the current conversion count value by 1 if the current conversion count value is greater than 0; determine the single frame change displacement as the quotient of the current position difference of the switched positioning mode and the state transition time; calculate the current observation value according to the single frame change displacement, the current conversion count value and the predicted value; and take the current observation value as the virtual observation point.
Optionally, the apparatus further comprises: a processing module, configured to perform time synchronization and frequency expansion on the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM, the output frequencies of the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM after frequency expansion being the same as the output frequency of the real-time differential positioning RTK; and to predict, based on the frequency-expanded information, the difference between the current position of each positioning mode in the Universal Transverse Mercator (UTM) coordinate system and the current position of the GPS in the UTM coordinate system.
Optionally, the initial value of the state transition time is 300. The state transition time is updated to 600 when the single frame change displacement is greater than 0.2 meters and less than or equal to 0.4 meters, or the vehicle speed is greater than 40 km/h and less than or equal to 50 km/h. The state transition time is updated to 900 when the single frame change displacement is greater than 0.4 meters or the vehicle speed is greater than 40 km/h.
Optionally, the third determining module 203 is further configured to calculate a product of the current conversion count value and the single frame change displacement. The difference between the predicted value and the product is determined as the current observed value.
When the device provided by this embodiment performs fusion positioning, it determines the virtual observation point according to the predicted value of the switched positioning mode and inputs the virtual observation point into the filter for fusion positioning; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
Based on the same inventive concept as the above fusion positioning method for an automatic driving vehicle, this embodiment provides an electronic device, as shown in fig. 3, including: a memory 301, a processor 302, and a computer program.
Wherein a computer program is stored in the memory 301 and configured to be executed by the processor 302 to implement the above-described autonomous vehicle fusion localization method.
Specifically, the computer program implements the following steps:
when the fusion positioning is entered, the positioning mode after switching is determined according to the positioning mode of the previous frame and the preset priority of the positioning mode. The positioning mode comprises one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL or vision SLAM.
A predicted value of the switched positioning mode is determined.
A virtual observation point is determined according to the current conversion count value and the predicted value.
The virtual observation point is input into a filter, and fusion positioning is performed.
Optionally, the priorities of the positioning modes are, from high to low: real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, vision SLAM.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is 0, the predicted value is determined as the virtual observation point.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is greater than 0, the current conversion count value is decremented by 1; the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time; the current observation value is calculated according to the single frame change displacement, the current conversion count value and the predicted value; and the current observation value is taken as the virtual observation point.
Optionally, before the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time, the method further includes:
performing time synchronization and frequency expansion on the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM; the output frequencies of the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM after frequency expansion are the same as the output frequency of the real-time differential positioning RTK;
based on the frequency-expanded information, predicting the difference between the current position of each positioning mode in the Universal Transverse Mercator (UTM) coordinate system and the current position of the GPS in the UTM coordinate system.
Optionally, the initial value of the state transition time is 300.
The state transition time is updated to 600 when the single frame change displacement is greater than 0.2 meters and less than or equal to 0.4 meters, or the vehicle speed is greater than 40 km/h and less than or equal to 50 km/h.
The state transition time is updated to 900 when the single frame change displacement is greater than 0.4 meters or the vehicle speed is greater than 40 km/h.
Optionally, calculating the current observed value according to the single frame change displacement, the current conversion count value and the predicted value includes:
and calculating the product of the current conversion count value and the single frame change displacement.
The difference between the predicted value and the product is determined as the current observed value.
In the electronic device provided by this embodiment, when the computer program is executed by the processor to perform fusion positioning, the virtual observation point is determined according to the predicted value of the switched positioning mode and input into the filter for fusion positioning; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
Based on the same inventive concept as the above fusion positioning method for an automatic driving vehicle, this embodiment provides a computer-readable storage medium on which a computer program is stored. The computer program is executed by a processor to implement the above fusion positioning method for an automatic driving vehicle.
Specifically, the computer program implements the following steps:
when the fusion positioning is entered, the positioning mode after switching is determined according to the positioning mode of the previous frame and the preset priority of the positioning mode. The positioning mode comprises one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL or vision SLAM.
A predicted value of the switched positioning mode is determined.
A virtual observation point is determined according to the current conversion count value and the predicted value.
The virtual observation point is input into a filter, and fusion positioning is performed.
Optionally, the priorities of the positioning modes are, from high to low: real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, vision SLAM.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is 0, the predicted value is determined as the virtual observation point.
Optionally, determining the virtual observation point according to the current conversion count value and the predicted value includes:
if the current conversion count value is greater than 0, the current conversion count value is decremented by 1; the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time; the current observation value is calculated according to the single frame change displacement, the current conversion count value and the predicted value; and the current observation value is taken as the virtual observation point.
Optionally, before the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time, the method further comprises:
performing time synchronization and frequency expansion on the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM; the output frequencies of the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM after frequency expansion are the same as the output frequency of the real-time differential positioning RTK;
based on the frequency-expanded information, predicting the difference between the current position of each positioning mode in the Universal Transverse Mercator (UTM) coordinate system and the current position of the GPS in the UTM coordinate system.
Optionally, the initial value of the state transition time is 300.
The state transition time is updated to 600 when the single frame change displacement is greater than 0.2 meters and less than or equal to 0.4 meters, or the vehicle speed is greater than 40 km/h and less than or equal to 50 km/h.
The state transition time is updated to 900 when the single frame change displacement is greater than 0.4 meters or the vehicle speed is greater than 40 km/h.
Optionally, calculating the current observed value according to the single frame change displacement, the current conversion count value and the predicted value includes:
and calculating the product of the current conversion count value and the single frame change displacement.
The difference between the predicted value and the product is determined as the current observed value.
For the computer-readable storage medium provided by this embodiment, when the stored computer program is executed by a processor to perform fusion positioning, the virtual observation point is determined according to the predicted value of the switched positioning mode and input into the filter for fusion positioning; the virtual observation point avoids the jump in the positioning track that the different output frequencies would otherwise cause when a positioning mode exits directly, and thereby prevents vehicle shake or weaving.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The solutions in the embodiments of the present application may be implemented in various computer languages, for example the object-oriented programming language Java and the scripting language JavaScript.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (8)

1. A method for fusion positioning of an autonomous vehicle, the method comprising:
when fusion positioning is entered, determining the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority; the positioning mode is one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, or vision SLAM;
determining a predicted value of the switched positioning mode;
determining a virtual observation point according to the current conversion count value and the predicted value;
inputting the virtual observation point into a filter, and performing fusion positioning;
the determining a virtual observation point according to the current conversion count value and the predicted value comprises the following steps:
if the current conversion count value is greater than 0, decrementing the current conversion count value by 1; determining the single frame change displacement as the quotient of the current position difference of the switched positioning mode and the state transition time; calculating the current observation value according to the single frame change displacement, the current conversion count value and the predicted value; and taking the current observation value as the virtual observation point;
the calculating the current observed value according to the single frame change displacement, the current conversion count value and the predicted value comprises the following steps:
calculating the product of the current conversion count value and the single frame change displacement;
and determining the difference between the predicted value and the product as a current observed value.
2. The method of claim 1, wherein the priorities of the positioning modes are, from high to low: real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, vision SLAM.
3. The method of claim 1, wherein the determining a virtual observation point from the current transition count value and the predicted value comprises:
and if the current conversion count value is 0, determining the predicted value as a virtual observation point.
4. The method of claim 1, wherein before the single frame change displacement is determined as the quotient of the current position difference of the switched positioning mode and the state transition time, the method further comprises:
performing time synchronization and frequency expansion on the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM; the output frequencies of the laser synchronous positioning and mapping SLAM, the vision auxiliary positioning VAL and the vision SLAM after frequency expansion are the same as the output frequency of the real-time differential positioning RTK;
based on the frequency-expanded information, predicting the difference between the current position of each positioning mode in the Universal Transverse Mercator (UTM) coordinate system and the current position of the GPS in the UTM coordinate system.
5. The method of claim 1, wherein the initial value of the state transition time is 300;
the state transition time is updated to 600 when the single frame change displacement is greater than 0.2 meters and less than or equal to 0.4 meters, or the vehicle speed is greater than 40 km/h and less than or equal to 50 km/h;
the state transition time is updated to 900 when the single frame change displacement is greater than 0.4 meters or the vehicle speed is greater than 40 km/h.
6. An automatic driving vehicle fusion positioning device, the device comprising:
the first determining module, used for determining, when fusion positioning is entered, the switched positioning mode according to the positioning mode of the previous frame and a preset positioning-mode priority; the positioning mode is one of real-time differential positioning RTK, laser synchronous positioning and mapping SLAM, vision auxiliary positioning VAL, or vision SLAM;
the second determining module is used for determining a predicted value of the switched positioning mode;
the third determining module is used for determining a virtual observation point according to the current conversion count value and the predicted value;
the fusion positioning module is used for inputting the virtual observation points into the filter to perform fusion positioning;
the determining a virtual observation point according to the current conversion count value and the predicted value comprises the following steps:
if the current conversion count value is greater than 0, decrementing the current conversion count value by 1; determining the single frame change displacement as the quotient of the current position difference of the switched positioning mode and the state transition time; calculating the current observation value according to the single frame change displacement, the current conversion count value and the predicted value; and taking the current observation value as the virtual observation point;
the calculating the current observed value according to the single frame change displacement, the current conversion count value and the predicted value comprises the following steps:
calculating the product of the current conversion count value and the single frame change displacement;
and determining the difference between the predicted value and the product as a current observed value.
7. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-5.
8. A computer-readable storage medium, characterized in that a computer program is stored thereon; the computer program being executed by a processor to implement the method of any of claims 1-5.
CN202410123304.0A 2024-01-30 2024-01-30 Fusion positioning method, device, equipment and storage medium for automatic driving vehicle Active CN117647254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410123304.0A CN117647254B (en) 2024-01-30 2024-01-30 Fusion positioning method, device, equipment and storage medium for automatic driving vehicle


Publications (2)

Publication Number Publication Date
CN117647254A CN117647254A (en) 2024-03-05
CN117647254B true CN117647254B (en) 2024-04-09

Family

ID=90043771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410123304.0A Active CN117647254B (en) 2024-01-30 2024-01-30 Fusion positioning method, device, equipment and storage medium for automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN117647254B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106840179A (en) * 2017-03-07 2017-06-13 中国科学院合肥物质科学研究院 A kind of intelligent vehicle localization method based on multi-sensor information fusion
CN108535749A (en) * 2018-03-19 2018-09-14 千寻位置网络有限公司 Positioning Enhancement Method based on CORS and system, positioning system
CN109900265A (en) * 2019-03-15 2019-06-18 武汉大学 A kind of robot localization algorithm of camera/mems auxiliary Beidou
CN111998849A (en) * 2020-08-27 2020-11-27 湘潭大学 Differential dynamic positioning method based on inertial navigation system
CN115143952A (en) * 2022-07-12 2022-10-04 智道网联科技(北京)有限公司 Automatic driving vehicle positioning method and device based on visual assistance
CN116429090A (en) * 2023-04-27 2023-07-14 北京石头创新科技有限公司 Synchronous positioning and mapping method and device based on line laser and mobile robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019203202A1 (en) * 2019-03-08 2020-09-10 Zf Friedrichshafen Ag Localization system for a driverless vehicle




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant