CN114894214A - Fusion positioning method and device for automatic driving vehicle and electronic equipment - Google Patents

Fusion positioning method and device for automatic driving vehicle and electronic equipment

Info

Publication number
CN114894214A
Authority
CN
China
Prior art keywords
next moment
positioning
moment
error
compensated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210664257.1A
Other languages
Chinese (zh)
Inventor
李岩
费再慧
张海强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202210664257.1A priority Critical patent/CN114894214A/en
Publication of CN114894214A publication Critical patent/CN114894214A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a fusion positioning method and device for an automatic driving vehicle, and an electronic device. The method comprises the following steps: when the high-precision positioning signal is in an unavailable state, acquiring the fusion positioning position at the current moment, the track deduction position at the next moment and the corresponding transverse position error; correcting the track deduction position at the next moment by using the transverse position error; compensating the corrected position at the next moment according to the fusion positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment; acquiring preset vehicle speed error information, and determining the confidence of the compensated position at the next moment in combination with the time interval between the current moment and the next moment; and inputting the compensated position at the next moment and the corresponding confidence into a filter as observed values for fusion positioning. The method and device correct longitudinal positioning based on the visual transverse positioning error, provide additional observation input for the filter while the high-precision positioning signal is interfered with, ensure the stability of the filter, and avoid positioning jumps.

Description

Fusion positioning method and device for automatic driving vehicle and electronic equipment
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a fusion positioning method and device for an automatic driving vehicle, and an electronic device.
Background
The development of computer vision technology has enabled autonomous vehicles to make increasing use of element information in images for positioning. For example, the visual SLAM (Simultaneous Localization And Mapping) method, which has become widely applicable in recent years, extracts feature points from images and tracks them to build a point cloud map against which the vehicle is relocalized. As another example, positioning can be performed by matching road surface semantic elements in the image, such as lane lines, road surface arrows and stop lines, with a high-precision map. Both positioning methods can, to a certain extent, make up for the large positioning errors caused by interference with high-precision positioning signals such as GNSS (Global Navigation Satellite System), and prevent the vehicle's own positioning from accumulating large errors by providing effective observation information to a Kalman filter.
However, the above solutions have at least the following problems:
1) due to the influence of illumination, dynamic objects and the like, the feature-point-based visual SLAM method easily loses feature point tracking during actual mapping and positioning, so that it cannot output effective positioning information;
2) storing and loading the map occupies a large amount of resources; if the driving area of the automatic driving vehicle is large, a large-area point cloud map often carries a large error, and feature point matching involves unpredictable computational delay. Limited by these two problems, the visual SLAM method is only suitable for low-speed, closed scenes;
3) because lane lines and stop lines in the road are few, and lane lines can only provide correction information for transverse positioning errors, once the vehicle speed exceeds a certain threshold a large jump can occur between two adjacent longitudinal positioning results, which in turn affects the stability of the positioning result output by the Kalman filter.
Disclosure of Invention
The embodiment of the application provides a fusion positioning method and device for an automatic driving vehicle and electronic equipment, so that the positioning stability of the automatic driving vehicle is improved, and the positioning jump is avoided.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a fusion positioning method for an autonomous vehicle, where the method includes:
under the condition that the high-precision positioning signal is in an unavailable state, acquiring a fusion positioning position of the automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error, wherein the visual transverse positioning error comprises a transverse position error;
correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment;
compensating the corrected position at the next moment according to the fused positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment to obtain the compensated position at the next moment;
acquiring preset vehicle speed error information, and determining confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment;
and inputting the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into a Kalman filter as observation information for fusion positioning to obtain a fusion positioning result of the automatic driving vehicle at the next moment.
Optionally, before acquiring the fusion positioning position of the autonomous vehicle at the current moment, the track derived position at the next moment and the corresponding visual lateral positioning error in the case that the high-precision positioning signal is in the unavailable state, the method further includes:
determining whether the fusion positioning position at the current moment enters a preset signal interference area;
if so, determining that the high-precision positioning signal is in an unavailable state;
otherwise, determining that the high-precision positioning signal is in an available state.
Optionally, the obtaining a fusion positioning position of the autonomous vehicle at the current time, and a track derived position and a corresponding visual transverse positioning error at the next time when the high-precision positioning signal is in an unavailable state includes:
acquiring a current road image corresponding to the automatic driving vehicle;
carrying out lane line recognition on the current road image by using a preset lane line recognition algorithm to obtain a lane line recognition result;
and acquiring local high-precision map data corresponding to the automatic driving vehicle, and determining the visual transverse positioning error by using lane line data in the local high-precision map data and a lane line identification result.
Optionally, the compensating the corrected position at the next moment according to the fused positioning position at the current moment, the track derived position at the next moment, and the corrected position at the next moment, and obtaining the compensated position at the next moment includes:
determining the distance between the fusion positioning position at the current moment and the track deduction position at the next moment as a first distance, and determining the distance between the fusion positioning position at the current moment and the corrected position at the next moment as a second distance;
determining a difference between the first distance and the second distance;
and compensating the corrected position at the next moment according to the difference value of the first distance and the second distance to obtain the compensated position at the next moment.
Optionally, the determining the distance between the fused positioning position at the current time and the track deduction position at the next time includes:
acquiring the actual vehicle speed and preset vehicle speed error information at the current moment;
compensating the actual vehicle speed at the current moment by using the preset vehicle speed error information to obtain the compensated vehicle speed at the current moment;
and determining the displacement between the current moment and the next moment according to the compensated vehicle speed at the current moment and the time interval between the current moment and the next moment, wherein the displacement is used as the distance between the fusion positioning position of the current moment and the track deduction position of the next moment.
Optionally, the obtaining of the preset vehicle speed error information includes:
under the condition that a high-precision positioning signal is in an available state, acquiring the actual speed of the automatic driving vehicle and the fused speed output by the Kalman filter;
and determining the preset vehicle speed error information according to the actual vehicle speed of the automatic driving vehicle and the fused speed output by the Kalman filter.
Optionally, the visual lateral positioning error further includes a heading angle error, and the inputting the compensated position at the next time and the confidence corresponding to the compensated position at the next time as observation information into a kalman filter for fusion positioning to obtain a fusion positioning result at the next time of the autonomous vehicle includes:
acquiring the compensated vehicle speed at the next moment;
and inputting the compensated vehicle speed and the course angle error at the next moment and the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into the Kalman filter together as observation information for fusion positioning to obtain a fusion positioning result at the next moment of the automatic driving vehicle.
In a second aspect, an embodiment of the present application further provides a fusion positioning apparatus for an autonomous vehicle, where the apparatus includes:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a fusion positioning position of an automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error under the condition that a high-precision positioning signal is in an unavailable state, and the visual transverse positioning error comprises a transverse position error;
the correcting unit is used for correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment;
the compensation unit is used for compensating the corrected position at the next moment according to the fusion positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment to obtain the compensated position at the next moment;
the first determining unit is used for acquiring preset vehicle speed error information and determining the confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment;
and the fusion positioning unit is used for inputting the compensated position at the next moment and the confidence corresponding to the compensated position at the next moment into a Kalman filter as observation information to perform fusion positioning so as to obtain a fusion positioning result of the automatic driving vehicle at the next moment.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform any of the methods described above.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform any of the methods described above.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects: according to the fusion positioning method of the automatic driving vehicle, under the condition that the high-precision positioning signal is in an unavailable state, a fusion positioning position of the automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error are obtained, wherein the visual transverse positioning error comprises a transverse position error; then, correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment; then according to the fusion positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment, compensating the corrected position at the next moment to obtain the compensated position at the next moment; acquiring preset vehicle speed error information, and determining confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment; and finally, inputting the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into a Kalman filter as observation information for fusion positioning to obtain a fusion positioning result at the next moment of the automatic driving vehicle. The fusion positioning method of the automatic driving vehicle, disclosed by the embodiment of the application, corrects longitudinal positioning based on the visual transverse positioning error, can provide extra observation input for the filter under the condition that a high-precision positioning signal is interfered, ensures the stability of the filter, reduces the accumulated positioning error and avoids the occurrence of positioning jump.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart illustrating a fusion positioning method for an autonomous vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating compensation of a corrected position according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a fusion positioning device of an autonomous vehicle according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiment of the present application provides a fusion positioning method for an autonomous vehicle, as shown in fig. 1, which provides a flow schematic diagram of the fusion positioning method for an autonomous vehicle in the embodiment of the present application, where the method at least includes the following steps S110 to S150:
and step S110, under the condition that the high-precision positioning signal is in an unavailable state, acquiring a fusion positioning position of the automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error, wherein the visual transverse positioning error comprises a transverse position error.
The fusion positioning method of the automatic driving vehicle in the embodiment of the application is mainly applied to scenes where high-precision positioning signals such as GNSS are of poor quality. When the high-precision positioning signal is interfered with and is in an unavailable state, the fusion positioning of the automatic driving vehicle needs to rely on vision technology to ensure that the vehicle can keep running stably for a certain period of time.
Specifically, the fusion positioning position of the automatic driving vehicle at the current moment needs to be obtained first. This fusion positioning position can be regarded as a relatively accurate position for which fusion processing has already been completed at the current moment; and because lane keeping is usually required in an automatic driving scene, that is, the vehicle is kept running on the lane center line, the fusion positioning position at the current moment can be regarded as a position located on the lane center line.
In addition, in order to realize vehicle positioning at the next time, a track derived position of the vehicle at the next time and a corresponding visual transverse positioning error are acquired, wherein the track derived position is the position of the vehicle at the next time predicted according to the position of the vehicle at the current time, the vehicle speed information of the vehicle and the like, and the visual transverse positioning error refers to a position deviation of the vehicle in the transverse direction and specifically may include a position deviation and an angle deviation of the vehicle relative to a lane center line.
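As a concrete illustration of track deduction, which the text above describes only at a high level, the following minimal Python sketch predicts the next-moment position from the current position, heading and vehicle speed under an assumed constant-speed, constant-heading motion model; the patent does not prescribe a particular model, and all names here are illustrative.

```python
import math

def track_deduction_step(x_m, y_m, heading_rad, speed_mps, dt_s):
    """Predict the next-moment position from the current pose and vehicle speed
    (a simple dead-reckoning step under a constant speed/heading assumption)."""
    return (x_m + speed_mps * math.cos(heading_rad) * dt_s,
            y_m + speed_mps * math.sin(heading_rad) * dt_s,
            heading_rad)

# Example: 15 m/s heading due east for 0.1 s moves the predicted position ~1.5 m east.
print(track_deduction_step(0.0, 0.0, 0.0, 15.0, 0.1))  # (1.5, 0.0, 0.0)
```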
And step S120, correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment.
Because the track derived position at the next moment is only the result of preliminary prediction and has a certain positioning error, the positioning error generated in the transverse direction of the track derived position needs to be corrected by combining the transverse position error in the visual transverse positioning error, so that the corrected position at the next moment is obtained.
And step S130, compensating the corrected position at the next moment according to the fusion positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment to obtain the compensated position at the next moment.
The above steps only complete the correction of the transverse position error of the track derived position at the next moment, but in an actual positioning scene, there may also be a longitudinal position error in the corrected position at the next moment, that is, a position error generated in the vehicle traveling direction, so that the embodiment of the present application further performs the compensation and correction of the longitudinal positioning error on the corrected position at the next moment based on the fusion positioning position at the current moment, the track derived position at the next moment, and the corrected position at the next moment corrected by using the visual transverse positioning error, thereby obtaining the compensated position at the next moment.
Step S140, obtaining preset vehicle speed error information, and determining a confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment.
The compensated position at the next time obtained in the above step is not necessarily reliable, and the confidence level is mainly influenced by the vehicle speed error, for example, if the vehicle speed error is larger and the time interval is longer, the resulting track derived position error will be larger, resulting in lower confidence of the compensated position obtained based on the result. Therefore, the vehicle speed error information of the vehicle is determined in advance, and then the confidence coefficient of the compensated position at the next moment obtained in the previous step is determined by using the preset vehicle speed error information and combining the time interval between the current moment and the next moment.
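The text above only states how the confidence should behave (a larger speed error and a longer interval mean lower confidence) without giving a formula, so the sketch below is one assumed way to express that dependence as a 1-sigma position uncertainty that a filter can consume directly; the function name and the base uncertainty value are illustrative assumptions.

```python
def compensated_position_confidence(speed_error_mps, dt_s, base_sigma_m=0.1):
    """Return a 1-sigma position uncertainty (meters) for the compensated
    next-moment position: the larger the vehicle speed error and the longer
    the time interval, the larger the uncertainty, i.e. the lower the confidence."""
    return base_sigma_m + abs(speed_error_mps) * dt_s

# A 0.3 m/s speed error over a 0.1 s step adds 0.03 m of uncertainty.
print(compensated_position_confidence(0.3, 0.1))  # 0.13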
And S150, inputting the compensated position at the next moment and the confidence corresponding to the compensated position at the next moment into a Kalman filter as observation information for fusion positioning to obtain a fusion positioning result at the next moment of the automatic driving vehicle.
Based on the compensated position and the corresponding confidence coefficient at the next moment obtained in the previous step, the compensated position and the corresponding confidence coefficient can be used as extra observation information to be input into a Kalman filter or an extended Kalman filter for fusion positioning processing, and the extra observation information is introduced, so that the Kalman filter can output a more stable fusion positioning result, stable positioning of the automatic driving vehicle under the conditions that positioning signals such as GNSS (global navigation satellite system) are interfered and the like is realized, and the condition of positioning jump is avoided.
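To make the role of the extra observation concrete, the following sketch shows a minimal, generic Kalman measurement update in which the compensated next-moment position is fused into a simple position/velocity state, with the confidence mapped to the measurement noise R; this is an assumed textbook update, not the patent's actual filter design or state definition.

```python
import numpy as np

def kf_position_update(x, P, z_xy, sigma_m):
    """Fuse the compensated next-moment position z_xy into the state
    x = [px, py, vx, vy]; sigma_m (the confidence, as a 1-sigma value in
    meters) sets the measurement noise, so a low-confidence observation
    pulls the state less strongly."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])       # observe the position components only
    R = np.eye(2) * sigma_m ** 2               # lower confidence -> larger R
    y = np.asarray(z_xy, dtype=float) - H @ x  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Example: predicted state at the next moment, then fuse the compensated position.
x0 = np.array([100.0, 20.0, 15.0, 0.0])
P0 = np.diag([1.0, 1.0, 0.5, 0.5])
x1, P1 = kf_position_update(x0, P0, (100.8, 20.1), sigma_m=0.13)
print(x1)
```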
The fusion positioning method of the automatic driving vehicle, disclosed by the embodiment of the application, corrects longitudinal positioning based on the visual transverse positioning error, can provide extra observation input for the filter under the condition that a high-precision positioning signal is interfered, ensures the stability of the filter, reduces the accumulated positioning error and avoids the occurrence of positioning jump.
In one embodiment of the present application, before acquiring the fusion positioning position of the autonomous vehicle at the current time, and the track derived position and the corresponding visual lateral positioning error at the next time, in case the high-precision positioning signal is in the unavailable state, the method further comprises: determining whether the fusion positioning position at the current moment enters a preset signal interference area; if so, determining that the high-precision positioning signal is in an unavailable state; otherwise, determining that the high-precision positioning signal is in an available state.
When determining whether the high-precision positioning signal is in an unavailable state, the embodiment of the application can determine in advance the position areas where the high-precision positioning signal is easily interfered with or of poor quality. For example, when an automatic driving vehicle drives into areas such as spiral overpass ramps and tunnels, the GNSS positioning signal is easily interfered with; at that moment the GNSS positioning signal is in an unavailable state, and positioning needs to be assisted by other positioning technologies such as vision.
Based on this, the embodiment of the application may use the areas where signals are actually interfered with as the basis for judging whether the GNSS positioning signal is in an unavailable state. For example, the entry and exit position points of a signal interference area may be set on a map in advance, and when the current position of the automatic driving vehicle lies between the entry and exit position points of the signal interference area, the GNSS positioning signal at that time can be considered to be in an unavailable state.
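A minimal sketch of this availability check is given below, assuming the interference areas are stored as entry and exit points measured as distances along the route; the data layout and helper names are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class InterferenceArea:
    entry_s: float  # distance along the route where the area starts (meters)
    exit_s: float   # distance along the route where the area ends (meters)

def gnss_available(route_position_s: float, areas: list) -> bool:
    """Return False (unavailable) when the vehicle lies between the entry and
    exit points of any predefined signal interference area, e.g. a tunnel."""
    for area in areas:
        if area.entry_s <= route_position_s <= area.exit_s:
            return False
    return True

# Example: a tunnel marked from 1200 m to 1800 m along the route.
areas = [InterferenceArea(entry_s=1200.0, exit_s=1800.0)]
print(gnss_available(1500.0, areas))  # False -> switch to vision-aided positioning
```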
In an embodiment of the present application, the obtaining a fusion positioning position of the autonomous vehicle at the current time, and a track derived position and a corresponding visual lateral positioning error at the next time when the high-precision positioning signal is in an unavailable state includes: acquiring a current road image corresponding to the automatic driving vehicle; carrying out lane line recognition on the current road image by using a preset lane line recognition algorithm to obtain a lane line recognition result; and acquiring local high-precision map data corresponding to the automatic driving vehicle, and determining the visual transverse positioning error by using lane line data in the local high-precision map data and a lane line identification result.
When the visual transverse positioning error corresponding to the track deduction position at the next moment is obtained, the current road image corresponding to the automatic driving vehicle can be obtained first, and then the lane lines in the current road image are identified by using a preset lane line identification algorithm to obtain a lane line identification result. The preset lane line identification algorithm can be obtained by training an existing convolutional neural network model such as LaneNet; of course, a person skilled in the art may flexibly select other models according to actual requirements, which is not specifically limited here.
Then, local high-precision map data corresponding to the automatic driving vehicle can be obtained in combination with the current positioning of the automatic driving vehicle. The local high-precision map data provides high-precision structural lane information, so the lane lines identified by the lane line identification model can be compared with the corresponding lane lines in the local high-precision map data, including calculation of the position deviation and the angle deviation, thereby obtaining the positioning error of the visual positioning algorithm in the transverse direction.
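One possible way to turn the comparison described above into numbers is sketched below, assuming both the detected lane line and the corresponding map lane line have already been expressed in the vehicle frame as a lateral offset plus a heading; this parameterization and the sign conventions are assumptions for illustration only.

```python
import math

def visual_lateral_error(detected_line, map_line):
    """Each line is (lateral_offset_m, heading_rad) in the vehicle frame.
    Returns (transverse position error, heading angle error) of the current
    pose estimate relative to the lane line seen by the camera."""
    d_detected, yaw_detected = detected_line
    d_map, yaw_map = map_line
    position_error = d_map - d_detected
    heading_error = math.atan2(math.sin(yaw_map - yaw_detected),
                               math.cos(yaw_map - yaw_detected))
    return position_error, heading_error

# Example: the camera sees the line 1.60 m to the left, while the map (at the
# current pose estimate) says 1.75 m, so the pose is off laterally by ~0.15 m.
print(visual_lateral_error((1.60, 0.010), (1.75, 0.025)))
```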
In an embodiment of the application, the compensating the corrected position at the next time according to the fused positioning position at the current time, the track derived position at the next time, and the corrected position at the next time, and obtaining the compensated position at the next time includes: determining the distance between the fusion positioning position at the current moment and the track deduction position at the next moment as a first distance, and determining the distance between the fusion positioning position at the current moment and the corrected position at the next moment as a second distance; determining a difference between the first distance and the second distance; and compensating the corrected position at the next moment according to the difference value of the first distance and the second distance to obtain the compensated position at the next moment.
To facilitate understanding of the embodiments of the present application, fig. 2 provides a schematic diagram of compensating the corrected position. Three positions are mainly involved: the fusion positioning position P_k at the current moment (marked as position point A), the track derived position P_{k+1}^{predict} at the next moment (marked as position point B), and the corrected position P_{k+1}^{vision} at the next moment (marked as position point C). The fusion positioning position at the current moment is the positioning result after Kalman filtering fusion processing, so position point A lies on the lane center line; and since the corrected position at the next moment has been corrected with the transverse position error, position point C also lies on the lane center line. When connected, the three position points A, B and C form a right triangle in which the line from position point A to position point C and the line from position point B to position point C are the two right-angled sides, and the line from position point A to position point B is the hypotenuse.
According to the Pythagorean theorem, the hypotenuse is always longer than either right-angled side, so the length of the line P_k P_{k+1}^{predict} from position point A to position point B is always greater than the length of the line P_k P_{k+1}^{vision} from position point A to position point C. That is, the first distance in the embodiment of the application is greater than the second distance, the difference between the first distance and the second distance is always greater than 0, and the compensation value of the longitudinal positioning error is therefore a positive value. The larger the visual transverse positioning error, i.e. the greater the distance from position point B to position point C, the longer the hypotenuse becomes, resulting in a larger compensation value calculated as above, i.e. a larger longitudinal positioning error.
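The geometry above translates directly into a few lines of code. The sketch below computes the first distance |AB|, the second distance |AC| and their difference, then pushes the corrected position forward along the lane center line (the A-to-C direction) by that difference; the compensation direction is an interpretation of the text, and coordinates are assumed to be in a local planar frame in meters.

```python
import math

def compensate_longitudinal(p_fused_k, p_predict_k1, p_vision_k1):
    """Longitudinal compensation of the corrected position: A is the fused
    position at time k, B the track derived position at k+1, C the laterally
    corrected position at k+1 (A and C lie on the lane center line)."""
    ax, ay = p_fused_k
    bx, by = p_predict_k1
    cx, cy = p_vision_k1
    first_distance = math.hypot(bx - ax, by - ay)   # hypotenuse |AB|
    second_distance = math.hypot(cx - ax, cy - ay)  # right-angled side |AC|
    delta = first_distance - second_distance        # always >= 0, grows with |BC|
    ux, uy = (cx - ax) / second_distance, (cy - ay) / second_distance
    return cx + delta * ux, cy + delta * uy         # compensated position at k+1

# Example: a 0.5 m lateral error over a 10 m step yields ~0.0125 m of compensation.
print(compensate_longitudinal((0.0, 0.0), (10.0, 0.5), (10.0, 0.0)))
```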
Therefore, the estimation of the longitudinal positioning error is realized through the visual transverse positioning error, the longitudinal positioning error is compensated according to the estimation value, the phenomena of positioning jump and the like caused by loss or interference of high-precision positioning signals such as GNSS and the like are avoided, and the positioning stability of the automatic driving vehicle is improved.
In an embodiment of the application, the determining the distance between the fused positioning location at the current time and the track derived location at the next time includes: acquiring the actual vehicle speed and preset vehicle speed error information at the current moment; compensating the actual vehicle speed at the current moment by using the preset vehicle speed error information to obtain the compensated vehicle speed at the current moment; and determining the displacement between the current moment and the next moment according to the compensated vehicle speed at the current moment and the time interval between the current moment and the next moment, wherein the displacement is used as the distance between the fusion positioning position of the current moment and the track deduction position of the next moment.
When determining the distance between the fusion positioning position at the current moment and the track deduction position at the next moment, the embodiment of the application can use a track deduction (dead reckoning) algorithm, which predicts the position of the vehicle at the next moment based on the vehicle position at the current moment, the vehicle speed information and the like. Because the vehicle speed carries a certain error, the actual vehicle speed at the current moment can first be error-compensated by using the vehicle speed error information determined in advance to obtain the compensated vehicle speed, and the distance between the fusion positioning position at the current moment and the track deduction position at the next moment is then calculated in combination with the time interval between the two moments.
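A small sketch of this speed-compensated displacement follows; the sign convention of the speed error (fused minus actual) and the absolute/proportional switch are assumptions chosen to stay consistent with the two error forms described in the next paragraphs.

```python
def track_deduction_distance(chassis_speed_mps, speed_error, dt_s, proportional=False):
    """Distance from the current fusion positioning position to the next-moment
    track deduction position: compensate the measured chassis speed with the
    predetermined speed error, then multiply by the time interval."""
    if proportional:
        # p = (fused - actual) / fused  ->  fused = actual / (1 - p)
        compensated_speed = chassis_speed_mps / (1.0 - speed_error)
    else:
        # e = fused - actual  ->  fused = actual + e
        compensated_speed = chassis_speed_mps + speed_error
    return compensated_speed * dt_s

# Example: 14.7 m/s chassis speed, +0.3 m/s absolute error, 0.1 s step -> 1.5 m.
print(track_deduction_distance(14.7, 0.3, 0.1))  # 1.5
```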
In an embodiment of the present application, the obtaining of the preset vehicle speed error information includes: under the condition that a high-precision positioning signal is in an available state, acquiring the actual speed of the automatic driving vehicle and the fused speed output by the Kalman filter; and determining the preset vehicle speed error information according to the actual vehicle speed of the automatic driving vehicle and the fused speed output by the Kalman filter.
When determining the preset vehicle speed error information, the vehicle speed error can be determined while the GNSS positioning signal is in the available state: first the actual vehicle speed of the automatic driving vehicle and the fused speed output by the Kalman filter at the corresponding moment are obtained, where the actual vehicle speed refers to the actually measured chassis vehicle speed, and the fused speed output by the Kalman filter can be regarded as a relatively accurate speed.
The embodiment of the application may use either of two ways to calculate the vehicle speed error: one is an absolute error, i.e. the actual vehicle speed is directly subtracted from the corresponding fused speed; the other is a proportional error, i.e. the actual vehicle speed is subtracted from the corresponding fused speed and the difference is then divided by the fused speed. Which way of calculating the vehicle speed error is adopted can be adaptively selected by a person skilled in the art according to the actual situation of the vehicle, and is not specifically limited here.
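Written out, the two error forms look as follows; the sign convention mirrors the wording above and is otherwise a free choice, as long as the compensation step applies it consistently.

```python
def speed_error_absolute(fused_speed, chassis_speed):
    """Absolute form: the actual chassis speed subtracted from the fused speed."""
    return fused_speed - chassis_speed

def speed_error_proportional(fused_speed, chassis_speed):
    """Proportional form: the same difference divided by the fused speed."""
    return (fused_speed - chassis_speed) / fused_speed

# Sampled while GNSS is healthy: the filter outputs 15.0 m/s while the chassis
# reports 14.7 m/s, i.e. the chassis under-reads by 0.3 m/s (2 %).
print(speed_error_absolute(15.0, 14.7))      # ~0.3 m/s
print(speed_error_proportional(15.0, 14.7))  # ~0.02
```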
In an embodiment of the present application, the visual lateral positioning error further includes a course angle error, and the confidence corresponding to the compensated position at the next time and the compensated position at the next time is input to a kalman filter as observation information to perform fusion positioning, and obtaining the fusion positioning result at the next time of the autonomous vehicle includes: acquiring the compensated vehicle speed at the next moment; and inputting the compensated vehicle speed and the course angle error at the next moment and the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into the Kalman filter together as observation information for fusion positioning to obtain a fusion positioning result at the next moment of the automatic driving vehicle.
The visual transverse positioning error of the embodiment of the application comprises a transverse position error and a course angle error, and in order to provide more observation information for the Kalman filter, the course angle error, the compensated vehicle speed and the like can be used as additional observation values to be input into the Kalman filter together for fusion positioning, so that the positioning stability of the Kalman filter is further improved.
According to the fusion positioning method for the automatic driving vehicle in the embodiment of the application, the correction result of the visual transverse positioning error is used to further correct the longitudinal position, the confidence is calculated by using the vehicle speed error information, and when the GNSS signal is interfered with (regardless of how long the interference lasts), the longitudinally corrected position and the corresponding confidence are input into the Kalman filter as extra observation information for fusion positioning, so that the positioning stability of the Kalman filter is ensured while the accumulated error is reduced and positioning jumps are prevented.
The embodiment of the present application further provides a fusion positioning device 300 for an autonomous vehicle, as shown in fig. 3, which provides a schematic structural diagram of the fusion positioning device for an autonomous vehicle in the embodiment of the present application, where the device at least includes: an obtaining unit 310, a correcting unit 320, a compensating unit 330, a first determining unit 340 and a fusion positioning unit 350, wherein:
an obtaining unit 310, configured to obtain, when the high-precision positioning signal is in an unavailable state, a fusion positioning position of the autonomous vehicle at a current time, a track derived position at a next time, and a corresponding visual transverse positioning error, where the visual transverse positioning error includes a transverse position error;
a correcting unit 320, configured to correct the track derived position at the next time by using the lateral position error, so as to obtain a corrected position at the next time;
a compensation unit 330, configured to compensate the corrected position at the next time according to the fusion positioning position at the current time, the track derived position at the next time, and the corrected position at the next time, so as to obtain a compensated position at the next time;
the first determining unit 340 is configured to obtain preset vehicle speed error information, and determine a confidence corresponding to a compensated position at the next time according to the preset vehicle speed error information and a time interval between the current time and the next time;
and a fusion positioning unit 350, configured to input the compensated position at the next moment and a confidence corresponding to the compensated position at the next moment as observation information into a kalman filter for fusion positioning, so as to obtain a fusion positioning result at the next moment of the autonomous vehicle.
In one embodiment of the present application, the apparatus further comprises: the second determining unit is used for determining whether the fusion positioning position at the current moment enters a preset signal interference area; if so, determining that the high-precision positioning signal is in an unavailable state; otherwise, determining that the high-precision positioning signal is in an available state.
In an embodiment of the present application, the obtaining unit 310 is specifically configured to: acquiring a current road image corresponding to the automatic driving vehicle; carrying out lane line recognition on the current road image by using a preset lane line recognition algorithm to obtain a lane line recognition result; and acquiring local high-precision map data corresponding to the automatic driving vehicle, and determining the visual transverse positioning error by using lane line data in the local high-precision map data and a lane line identification result.
In an embodiment of the present application, the compensation unit 330 is specifically configured to: determining the distance between the fusion positioning position at the current moment and the track deduction position at the next moment as a first distance, and determining the distance between the fusion positioning position at the current moment and the corrected position at the next moment as a second distance; determining a difference between the first distance and the second distance; and compensating the corrected position at the next moment according to the difference value of the first distance and the second distance to obtain the compensated position at the next moment.
In an embodiment of the present application, the compensation unit 330 is specifically configured to: acquiring the actual vehicle speed and preset vehicle speed error information at the current moment; compensating the actual vehicle speed at the current moment by using the preset vehicle speed error information to obtain the compensated vehicle speed at the current moment; and determining the displacement between the current moment and the next moment according to the compensated vehicle speed at the current moment and the time interval between the current moment and the next moment, wherein the displacement is used as the distance between the fusion positioning position of the current moment and the track deduction position of the next moment.
In an embodiment of the present application, the first determining unit 340 is specifically configured to: under the condition that a high-precision positioning signal is in an available state, acquiring the actual speed of the automatic driving vehicle and the fused speed output by the Kalman filter; and determining the preset vehicle speed error information according to the actual vehicle speed of the automatic driving vehicle and the fused speed output by the Kalman filter.
In an embodiment of the present application, the visual lateral positioning error further includes a heading angle error, and the fused positioning unit 350 is specifically configured to: acquiring the compensated vehicle speed at the next moment; and inputting the compensated vehicle speed and the course angle error at the next moment and the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into the Kalman filter together as observation information for fusion positioning to obtain a fusion positioning result at the next moment of the automatic driving vehicle.
It can be understood that the fusion positioning device for an autonomous vehicle can implement the steps of the fusion positioning method for an autonomous vehicle provided in the foregoing embodiment, and the explanations related to the fusion positioning method for an autonomous vehicle are applicable to the fusion positioning device for an autonomous vehicle, and are not repeated herein.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 4, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form the fusion positioning device of the automatic driving vehicle on the logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
under the condition that the high-precision positioning signal is in an unavailable state, acquiring a fusion positioning position of the automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error, wherein the visual transverse positioning error comprises a transverse position error;
correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment;
compensating the corrected position at the next moment according to the fused positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment to obtain the compensated position at the next moment;
acquiring preset vehicle speed error information, and determining confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment;
and inputting the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into a Kalman filter as observation information for fusion positioning to obtain a fusion positioning result of the automatic driving vehicle at the next moment.
The method performed by the fusion positioning device of the autonomous vehicle disclosed in the embodiment of fig. 1 of the present application may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may further execute the method executed by the fusion positioning device of the autonomous vehicle in fig. 1, and implement the functions of the fusion positioning device of the autonomous vehicle in the embodiment shown in fig. 1, which are not described herein again in this application embodiment.
Embodiments of the present application further provide a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method performed by the fusion positioning apparatus of an autonomous vehicle in the embodiment shown in fig. 1, and are specifically configured to perform:
under the condition that the high-precision positioning signal is in an unavailable state, acquiring a fusion positioning position of the automatic driving vehicle at the current moment, a track deduction position at the next moment and a corresponding visual transverse positioning error, wherein the visual transverse positioning error comprises a transverse position error;
correcting the track deduction position at the next moment by using the transverse position error to obtain a corrected position at the next moment;
compensating the corrected position at the next moment according to the fused positioning position at the current moment, the track deduction position at the next moment and the corrected position at the next moment to obtain the compensated position at the next moment;
acquiring preset vehicle speed error information, and determining confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment;
and inputting the confidence degrees corresponding to the compensated position at the next moment and the compensated position at the next moment into a Kalman filter as observation information for fusion positioning to obtain a fusion positioning result of the automatic driving vehicle at the next moment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A fusion positioning method for an autonomous vehicle, wherein the method comprises:
when a high-precision positioning signal is in an unavailable state, acquiring a fused positioning position of the autonomous vehicle at the current moment, a dead reckoning position at the next moment, and a corresponding visual lateral positioning error, wherein the visual lateral positioning error comprises a lateral position error;
correcting the dead reckoning position at the next moment by using the lateral position error to obtain a corrected position at the next moment;
compensating the corrected position at the next moment according to the fused positioning position at the current moment, the dead reckoning position at the next moment, and the corrected position at the next moment, to obtain a compensated position at the next moment;
acquiring preset vehicle speed error information, and determining a confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment; and
inputting the compensated position at the next moment and the confidence corresponding to the compensated position at the next moment into a Kalman filter as observation information for fusion positioning, to obtain a fusion positioning result of the autonomous vehicle at the next moment.
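A minimal Python sketch of the above procedure, assuming a planar two-dimensional coordinate frame, a lateral error acting along the second axis, and a kalman_update callback standing in for the actual filter; all names are illustrative and not taken from the patent.

```python
import numpy as np

def fuse_without_high_precision_signal(fused_pos_now, dr_pos_next, lateral_error,
                                       speed_error, dt, kalman_update):
    """One fusion step while the high-precision positioning signal is unavailable."""
    # Correct the dead reckoning position at the next moment with the visual
    # lateral position error (assumed to act along the second coordinate axis).
    corrected_next = dr_pos_next - np.array([0.0, lateral_error])

    # Compensate the corrected position so that its distance from the current
    # fused position matches the dead-reckoned distance.
    first_distance = np.linalg.norm(dr_pos_next - fused_pos_now)
    second_distance = np.linalg.norm(corrected_next - fused_pos_now)
    direction = (corrected_next - fused_pos_now) / max(second_distance, 1e-9)
    compensated_next = corrected_next + (first_distance - second_distance) * direction

    # Confidence: position uncertainty accumulated from the preset speed error
    # over the time interval between the two moments.
    confidence = speed_error * dt

    # Feed the compensated position and its confidence to the Kalman filter
    # as observation information.
    return kalman_update(compensated_next, confidence)
```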
2. The method of claim 1, wherein, before acquiring the fused positioning position of the autonomous vehicle at the current moment, the dead reckoning position at the next moment, and the corresponding visual lateral positioning error when the high-precision positioning signal is in an unavailable state, the method further comprises:
determining whether the fused positioning position at the current moment falls within a preset signal interference area;
if so, determining that the high-precision positioning signal is in an unavailable state;
otherwise, determining that the high-precision positioning signal is in an available state.
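As an illustration of this availability check, the sketch below models the preset signal interference area as a 2-D polygon using the shapely package; the polygon coordinates are placeholders rather than values from the patent.

```python
from shapely.geometry import Point, Polygon

# Placeholder footprint of a preset signal-interference area (e.g. a tunnel),
# expressed in the same planar frame as the fused positioning position.
INTERFERENCE_AREA = Polygon([(0.0, 0.0), (0.0, 40.0), (500.0, 40.0), (500.0, 0.0)])

def high_precision_signal_available(fused_position_xy):
    """The signal is treated as unavailable while the current fused position
    lies inside the preset interference area, and available otherwise."""
    return not INTERFERENCE_AREA.contains(Point(fused_position_xy))
```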
3. The method of claim 1, wherein acquiring the fused positioning position of the autonomous vehicle at the current moment, the dead reckoning position at the next moment, and the corresponding visual lateral positioning error when the high-precision positioning signal is in an unavailable state comprises:
acquiring a current road image corresponding to the autonomous vehicle;
performing lane line recognition on the current road image by using a preset lane line recognition algorithm to obtain a lane line recognition result; and
acquiring local high-precision map data corresponding to the autonomous vehicle, and determining the visual lateral positioning error by using lane line data in the local high-precision map data and the lane line recognition result.
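The claim does not spell out how the map and the recognition result are compared; one simple reading, sketched below under that assumption, is to difference the signed lateral offset of the recognized lane line against the offset predicted from the local high-precision map.

```python
def visual_lateral_error(detected_offset_m, map_offset_m):
    """Signed lateral distance from the vehicle to the same lane line, measured
    once from the lane line recognition result and once from the local
    high-precision map; their difference is used as the lateral position error."""
    return detected_offset_m - map_offset_m

# Example: the camera places the lane line 1.45 m away while the map predicts
# 1.60 m, giving a lateral position error of -0.15 m.
error = visual_lateral_error(1.45, 1.60)
```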
4. The method of claim 1, wherein compensating the corrected position at the next moment according to the fused positioning position at the current moment, the dead reckoning position at the next moment, and the corrected position at the next moment comprises:
determining the distance between the fused positioning position at the current moment and the dead reckoning position at the next moment as a first distance, and determining the distance between the fused positioning position at the current moment and the corrected position at the next moment as a second distance;
determining a difference between the first distance and the second distance; and
compensating the corrected position at the next moment according to the difference between the first distance and the second distance to obtain the compensated position at the next moment.
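In code form, and assuming the shift is applied along the direction from the current fused position towards the corrected position (the claim fixes only the magnitude), the compensation might look like this:

```python
import numpy as np

def compensate_corrected_position(fused_now, dr_next, corrected_next):
    """Shift the corrected position by the difference between the first and
    second distances defined in the claim."""
    first_distance = np.linalg.norm(dr_next - fused_now)          # to the dead reckoning position
    second_distance = np.linalg.norm(corrected_next - fused_now)  # to the corrected position
    diff = first_distance - second_distance
    direction = (corrected_next - fused_now) / max(second_distance, 1e-9)
    return corrected_next + diff * direction                      # compensated position
```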
5. The method of claim 4, wherein determining the distance between the fused positioning position at the current moment and the dead reckoning position at the next moment comprises:
acquiring an actual vehicle speed at the current moment and the preset vehicle speed error information;
compensating the actual vehicle speed at the current moment by using the preset vehicle speed error information to obtain a compensated vehicle speed at the current moment; and
determining the displacement between the current moment and the next moment according to the compensated vehicle speed at the current moment and the time interval between the current moment and the next moment, the displacement serving as the distance between the fused positioning position at the current moment and the dead reckoning position at the next moment.
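A minimal sketch of this distance, assuming the preset vehicle speed error is an additive correction in metres per second:

```python
def first_distance(actual_speed_mps, preset_speed_error_mps, dt_s):
    """Displacement between the current moment and the next moment, used as the
    distance from the fused positioning position at the current moment to the
    dead reckoning position at the next moment."""
    compensated_speed = actual_speed_mps + preset_speed_error_mps
    return compensated_speed * dt_s

# Example: a measured speed of 13.8 m/s, a preset error of +0.2 m/s and a
# 0.1 s filter period give a displacement of 1.4 m.
d = first_distance(13.8, 0.2, 0.1)
```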
6. The method of claim 1, wherein acquiring the preset vehicle speed error information comprises:
when the high-precision positioning signal is in an available state, acquiring the actual vehicle speed of the autonomous vehicle and the fused speed output by the Kalman filter; and
determining the preset vehicle speed error information according to the actual vehicle speed of the autonomous vehicle and the fused speed output by the Kalman filter.
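The claim only states that the error information is determined from the two speeds; the sketch below uses the median of the residuals collected while the signal is available, which is an assumption rather than the patent's method.

```python
import numpy as np

def calibrate_preset_speed_error(actual_speeds, fused_speeds):
    """Compare the vehicle-reported speed with the Kalman-fused speed over a
    window in which the high-precision positioning signal is available, and keep
    a robust statistic of the mismatch as the preset vehicle speed error."""
    residuals = np.asarray(fused_speeds, dtype=float) - np.asarray(actual_speeds, dtype=float)
    return float(np.median(residuals))
```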
7. The method of claim 1, wherein the visual lateral positioning error further comprises a heading angle error, and inputting the compensated position at the next moment and the confidence corresponding to the compensated position at the next moment into the Kalman filter as observation information for fusion positioning to obtain the fusion positioning result of the autonomous vehicle at the next moment comprises:
acquiring a compensated vehicle speed at the next moment; and
inputting the compensated vehicle speed at the next moment, the heading angle error, the compensated position at the next moment, and the confidence corresponding to the compensated position at the next moment together into the Kalman filter as observation information for fusion positioning, to obtain the fusion positioning result of the autonomous vehicle at the next moment.
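One way such an enlarged observation could be assembled is sketched below; the stacking order and the speed and heading noise values are illustrative placeholders, not prescribed by the claim.

```python
import numpy as np

def build_observation(compensated_pos, confidence, compensated_speed, heading_error):
    """Stack position, speed and heading-angle error into one measurement vector z,
    with a diagonal measurement-noise matrix R whose position entries are scaled
    by the confidence derived from the preset vehicle speed error."""
    z = np.array([compensated_pos[0], compensated_pos[1],
                  compensated_speed, heading_error])
    R = np.diag([confidence ** 2, confidence ** 2, 0.1 ** 2, 0.01 ** 2])
    return z, R
```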
8. A fusion positioning apparatus for an autonomous vehicle, wherein the apparatus comprises:
an acquisition unit, configured to acquire, when a high-precision positioning signal is in an unavailable state, a fused positioning position of the autonomous vehicle at the current moment, a dead reckoning position at the next moment, and a corresponding visual lateral positioning error, wherein the visual lateral positioning error comprises a lateral position error;
a correction unit, configured to correct the dead reckoning position at the next moment by using the lateral position error to obtain a corrected position at the next moment;
a compensation unit, configured to compensate the corrected position at the next moment according to the fused positioning position at the current moment, the dead reckoning position at the next moment, and the corrected position at the next moment, to obtain a compensated position at the next moment;
a first determination unit, configured to acquire preset vehicle speed error information, and determine a confidence corresponding to the compensated position at the next moment according to the preset vehicle speed error information and the time interval between the current moment and the next moment; and
a fusion positioning unit, configured to input the compensated position at the next moment and the confidence corresponding to the compensated position at the next moment into a Kalman filter as observation information for fusion positioning, to obtain a fusion positioning result of the autonomous vehicle at the next moment.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1 to 7.
CN202210664257.1A 2022-06-13 2022-06-13 Fusion positioning method and device for automatic driving vehicle and electronic equipment Pending CN114894214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210664257.1A CN114894214A (en) 2022-06-13 2022-06-13 Fusion positioning method and device for automatic driving vehicle and electronic equipment

Publications (1)

Publication Number Publication Date
CN114894214A true CN114894214A (en) 2022-08-12

Family

ID=82728511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210664257.1A Pending CN114894214A (en) 2022-06-13 2022-06-13 Fusion positioning method and device for automatic driving vehicle and electronic equipment

Country Status (1)

Country Link
CN (1) CN114894214A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115184976A (en) * 2022-09-09 2022-10-14 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115390086A (en) * 2022-10-31 2022-11-25 智道网联科技(北京)有限公司 Fusion positioning method and device for automatic driving, electronic equipment and storage medium
CN115950441A (en) * 2023-03-08 2023-04-11 智道网联科技(北京)有限公司 Fusion positioning method and device for automatic driving vehicle and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination