CN110146074B - Real-time positioning method and device applied to automatic driving - Google Patents

Real-time positioning method and device applied to automatic driving

Info

Publication number
CN110146074B
CN110146074B (application CN201810984483.1A)
Authority
CN
China
Prior art keywords
sensor data
positioning
vehicle
position information
semantic feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810984483.1A
Other languages
Chinese (zh)
Other versions
CN110146074A (en)
Inventor
杜志颖
韩永根
Current Assignee
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd
Priority to CN201810984483.1A
Priority to PCT/CN2018/113663
Publication of CN110146074A
Application granted
Publication of CN110146074B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/08: Systems determining position data of a target for measuring distance only


Abstract

The embodiment of the invention discloses a real-time positioning method and device applied to automatic driving. The method comprises: acquiring inertial measurement sensor data from an inertial measurement sensor built into a vehicle and recording the time point at which the data is acquired; when sensor data other than the inertial measurement sensor data is also acquired at that time point, positioning the vehicle according to the inertial measurement sensor data together with the other sensor data; and when no other sensor data is acquired at that time point, positioning the vehicle according to the inertial measurement sensor data alone. By taking the time point at which inertial measurement sensor data is received as the time reference, and positioning in combination with whatever sensor data is obtained at that time point, the positioning accuracy of the vehicle can be improved.

Description

Real-time positioning method and device applied to automatic driving
Technical Field
The invention relates to the technical field of automatic driving, in particular to a real-time positioning method and a real-time positioning device applied to automatic driving.
Background
In a conventional vehicle positioning method, a camera built into the vehicle captures image information and sends it to a data processor, which analyzes it to obtain a positioning result. In practice, however, image data is typically large, so transmitting it to the data processor takes a long time, and the data processor then needs additional time to process it. As a result, the "current" vehicle position obtained by camera-based positioning actually describes the vehicle's position at a noticeably earlier time, and the positioning accuracy is generally poor.
Disclosure of Invention
The embodiment of the invention discloses a real-time positioning method and a real-time positioning device applied to automatic driving, which can improve the positioning precision of a vehicle.
The embodiment of the invention discloses a real-time positioning method applied to automatic driving in a first aspect, which comprises the following steps:
acquiring inertial measurement sensor data from an inertial measurement sensor built in a vehicle, and recording the time point of acquiring the inertial measurement sensor data;
detecting whether other sensor data are acquired at the time point, wherein the other sensor data are sensor data except the inertial measurement sensor data;
if the other sensor data are acquired, positioning the vehicle according to the inertial measurement sensor data and the other sensor data to acquire a current positioning result of the vehicle;
and if the other sensor data are not acquired, positioning the vehicle according to the inertial measurement sensor data to acquire the current positioning result of the vehicle.
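The four steps above amount to a dispatch rule keyed on the IMU arrival time. A minimal sketch follows (illustrative Python; the class and method names are assumptions, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class Positioner:
    """Sketch of the claimed dispatch logic: IMU arrival times are the
    time reference; other sensor data that arrived at the same time
    point is fused in, otherwise the IMU data is used alone."""
    pending: dict = field(default_factory=dict)  # time point -> other-sensor data

    def on_other_sensor(self, t, data):
        # record non-IMU sensor data against its arrival time point
        self.pending.setdefault(t, []).append(data)

    def on_imu(self, t, imu_data):
        # was other sensor data acquired at this time point?
        others = self.pending.pop(t, [])
        if others:
            return ("fused", imu_data, others)   # position with IMU + other data
        return ("imu_only", imu_data)            # position with IMU data alone
```
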
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after positioning the vehicle according to the inertial measurement sensor data and obtaining the current positioning result of the vehicle when the other sensor data is not obtained, the method further includes:
estimating the track route of the vehicle according to the last positioning result of the vehicle and the driving information of the vehicle when the last positioning result is obtained;
determining a next estimated positioning result of the vehicle according to the track route;
obtaining the positioning difference degree between the current positioning result of the vehicle and the next estimated positioning result of the vehicle;
judging whether the positioning difference degree is greater than a preset positioning difference degree;
and if the positioning difference degree is greater than the preset positioning difference degree, taking the next estimated positioning result of the vehicle as the current positioning result of the vehicle.
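The fallback rule in these steps can be sketched as follows (illustrative Python; treating positions as 2-D points and Euclidean distance as the difference degree are both assumptions):

```python
def pick_fix(current_fix, estimated_fix, preset_diff):
    """When the difference degree between the current positioning result
    and the next estimated positioning result exceeds the preset value,
    the estimated result replaces the current one."""
    diff = ((current_fix[0] - estimated_fix[0]) ** 2 +
            (current_fix[1] - estimated_fix[1]) ** 2) ** 0.5
    return estimated_fix if diff > preset_diff else current_fix
```
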
As an optional implementation manner, in the first aspect of the embodiment of the present invention, if the positioning difference degree is greater than the preset positioning difference degree, the method further includes:
recording the current time, and acquiring a plurality of continuous historical positioning difference degrees before the current time;
determining the number of historical positioning difference degrees with values larger than the preset positioning difference degree from the plurality of historical positioning difference degrees;
judging whether the number is larger than a preset threshold value or not;
if the number is greater than the preset threshold value, outputting mode switching prompt information, wherein the mode switching prompt information is used for prompting a user to input a mode switching instruction;
when the mode switching instruction is detected, switching the current driving mode of the vehicle to a manual driving mode.
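The degraded-positioning check above can be sketched as follows (illustrative; the function name and boolean return are assumptions):

```python
def should_prompt_switch(history_diffs, preset_diff, preset_count):
    """Count how many of the recent consecutive historical positioning
    difference degrees exceed the preset difference degree; prompt the
    user to switch to manual driving when that count exceeds the preset
    threshold."""
    n = sum(1 for d in history_diffs if d > preset_diff)
    return n > preset_count
```
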
As an optional implementation manner, in the first aspect of the embodiment of the present invention, if the other sensor data is obtained, the locating the vehicle according to the inertial measurement sensor data and the other sensor data to obtain the current location result of the vehicle includes:
if the other sensor data is acquired and the other sensor data comprises image sensor data, acquiring a first positioning position according to the inertial measurement sensor data and sensor data except the image sensor data in the other sensor data;
mapping the first positioning position to an automatic driving navigation electronic map;
determining a target area from the automatic driving navigation electronic map, wherein the target area is an area which takes the first positioning position as a center and takes a preset length as a radius;
extracting a number of semantic features from the image sensor data;
matching in the target area to obtain first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle;
and obtaining the current positioning result of the vehicle according to the first position information and the second position information of the semantic features.
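These matching steps can be sketched as follows (illustrative Python assuming 2-D map positions as the "first position information" and vehicle-relative offsets as the "second position information"; averaging the per-feature estimates is an assumption, not the patent's stated method):

```python
def locate_with_semantics(first_fix, map_features, detected, radius):
    """Restrict matching to the circular target area centered on the
    first positioning position, then pair each detected semantic
    feature with a map feature inside that area."""
    def in_area(pos):
        return ((pos[0] - first_fix[0]) ** 2 +
                (pos[1] - first_fix[1]) ** 2) ** 0.5 <= radius

    # target area: only map features within `radius` of the first fix
    candidates = {k: v for k, v in map_features.items() if in_area(v)}
    fixes = []
    for label, (dx, dy) in detected:
        if label in candidates:
            mx, my = candidates[label]
            fixes.append((mx - dx, my - dy))  # vehicle position implied by this feature
    if not fixes:
        return first_fix
    # combine the per-feature estimates into the current positioning result
    return (sum(f[0] for f in fixes) / len(fixes),
            sum(f[1] for f in fixes) / len(fixes))
```

Restricting the candidate set to the target area keeps the matching workload bounded, which is the memory/efficiency benefit the patent claims for this step.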
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after obtaining, by matching in the target area, first location information corresponding to each semantic feature and second location information between each semantic feature and the vehicle, and before obtaining a current location result of the vehicle according to the first location information and the second location information of the semantic features, the method further includes:
controlling a laser radar sensor of the vehicle to emit a laser beam to each semantic feature according to the second position information of each semantic feature;
receiving, for each semantic feature, the beam reflected back from that feature in response to the emitted laser beam;
acquiring laser radar sensor data according to the laser beam emitted toward each semantic feature and the corresponding reflected beam, wherein the laser radar sensor data at least comprises azimuth information, height information, speed information, attitude information and shape information of the corresponding semantic feature;
utilizing the laser radar sensor data to check second position information of semantic features corresponding to the laser radar sensor data to obtain a check result;
the obtaining a current positioning result of the vehicle according to the first position information and the second position information of the semantic features includes:
when the check result indicates that the second position information of the semantic features is valid, obtaining a current positioning result of the vehicle according to the first position information and the second position information of the semantic features.
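The verification gate can be sketched as follows (illustrative; it uses only the range component of the lidar data, and the tolerance is an assumption):

```python
def verify_with_lidar(second_pos, lidar_range, tolerance):
    """Keep the camera-derived feature-to-vehicle position only if it
    agrees with the lidar-measured range to the same feature."""
    cam_range = (second_pos[0] ** 2 + second_pos[1] ** 2) ** 0.5
    return abs(cam_range - lidar_range) <= tolerance
```
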
The second aspect of the embodiment of the present invention discloses a real-time positioning device applied to automatic driving, which includes:
the recording unit is used for acquiring inertial measurement sensor data from an inertial measurement sensor built in a vehicle and recording the time point of acquiring the inertial measurement sensor data;
the detection unit is used for detecting whether other sensor data are acquired at the time point, wherein the other sensor data are sensor data except the inertial measurement sensor data;
and the positioning unit is used for positioning the vehicle according to the inertial measurement sensor data and the other sensor data to obtain a current positioning result of the vehicle when the other sensor data is obtained, and positioning the vehicle according to the inertial measurement sensor data to obtain the current positioning result of the vehicle when the other sensor data is not obtained.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the real-time positioning device applied to automatic driving further includes:
the modeling unit is used for, after the positioning unit positions the vehicle according to the inertial measurement sensor data and obtains the current positioning result of the vehicle when the other sensor data is not acquired, estimating the track route of the vehicle according to the previous positioning result of the vehicle and the driving information of the vehicle when the previous positioning result was obtained, and determining the next estimated positioning result of the vehicle according to the track route;
the first acquisition unit is used for acquiring the positioning difference degree of the current positioning result of the vehicle and the next estimated positioning result of the vehicle;
and the judging unit is used for judging whether the positioning difference degree is greater than a preset positioning difference degree or not, and taking the next estimated positioning result of the vehicle as the current positioning result of the vehicle when the positioning difference degree is greater than the preset positioning difference degree.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first obtaining unit is further configured to record a current time and obtain a plurality of continuous historical positioning difference degrees before the current time when the determining unit determines that the positioning difference degree is greater than the preset positioning difference degree;
the real-time positioning device applied to automatic driving further comprises:
the determining unit is used for determining the number of the historical positioning difference degrees of which the values are larger than the preset positioning difference degree from the plurality of historical positioning difference degrees;
the judging unit is further configured to judge whether the number is greater than a preset threshold, and when the number is greater than the preset threshold, output mode switching prompt information, where the mode switching prompt information is used to prompt a user to input a mode switching instruction;
and the mode switching unit is used for switching the current driving mode of the vehicle to a manual driving mode when the mode switching instruction is detected.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, when the other sensor data is acquired, the positioning unit is configured to position the vehicle according to the inertial measurement sensor data and the other sensor data, and a manner of acquiring a current positioning result of the vehicle specifically includes:
the positioning unit is used for acquiring a first positioning position according to the inertial measurement sensor data and sensor data except for the image sensor data in the other sensor data when the other sensor data is acquired and comprises the image sensor data; and mapping the first location to an autonomous navigation electronic map; determining a target area from the automatic driving navigation electronic map, wherein the target area is an area which takes the first positioning position as a center and takes a preset length as a radius; and extracting a number of semantic features from the image sensor data; matching in the target area to obtain first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle; and obtaining a current positioning result of the vehicle according to the first position information and the second position information of the semantic features.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the real-time positioning device applied to automatic driving further includes:
the control unit is used for controlling a laser radar sensor of the vehicle to emit laser beams to each semantic feature according to the second position information of each semantic feature after the positioning unit obtains the first position information corresponding to each semantic feature and the second position information between each semantic feature and the vehicle through matching in the target area and before the current positioning result of the vehicle is obtained according to the first position information and the second position information of the semantic features;
a receiving unit, configured to receive, for each semantic feature, the beam reflected back from that feature in response to the emitted laser beam;
the second acquisition unit is used for acquiring laser radar sensor data according to the laser beam emitted toward each semantic feature and the corresponding reflected beam, wherein the laser radar sensor data at least comprises azimuth information, height information, speed information, attitude information and shape information of the corresponding semantic feature; and for checking, by using the laser radar sensor data, the second position information of the semantic features corresponding to the laser radar sensor data to obtain a check result;
the positioning unit is configured to obtain a current positioning result of the vehicle according to the first location information and the second location information of the semantic features in a specific manner:
the positioning unit is used for obtaining the current positioning result of the vehicle according to the first position information and the second position information of the semantic features when the check result indicates that the second position information of the semantic features is valid.
A third aspect of an embodiment of the present invention discloses an electronic device, including:
the real-time positioning device applied to automatic driving described in the second aspect of the present invention.
a fourth aspect of the embodiments of the present invention discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect of the invention.
A fifth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program comprising instructions for carrying out some or all of the steps of any one of the methods of the first aspect of the present invention.
A sixth aspect of the embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A seventh aspect of the present embodiment discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
1. the method includes the steps of acquiring inertial measurement sensor data from an inertial measurement sensor built in a vehicle, recording a time point at which the inertial measurement sensor data is acquired, positioning the vehicle according to the inertial measurement sensor data and other sensor data when sensor data other than the inertial measurement sensor data is acquired at the time point, and positioning the vehicle according to the inertial measurement sensor data when other sensor data is not acquired at the time point. Since the frequency of data acquisition by the inertial measurement sensor is higher than that of other sensors (such as an image sensor, a wheel speed sensor, a GPS sensor, and the like), in the embodiment of the present invention, the time point at which the inertial measurement sensor data is received is used as a time reference, and positioning is performed in combination with the sensor data obtained at the time point, so as to improve the positioning accuracy of the vehicle.
2. The current positioning result of the vehicle is verified by utilizing the next estimated positioning result, so that the positioning precision of the vehicle can be further improved, and the problem of reduced positioning precision caused by long-time positioning only by utilizing an inertial measurement sensor is solved.
3. When the image sensor data is acquired, the current positioning result of the vehicle is acquired by combining the first position information and the second position information of the semantic features in the target area, and firstly, the target area is acquired from the automatic driving navigation electronic map, so that the memory pressure of a data processor can be relieved, and the matching efficiency of the semantic features can be improved; and secondly, verifying the second position information of the semantic features by using a laser radar sensor so as to further improve the positioning precision of the vehicle.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a real-time positioning method applied to automatic driving according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating another real-time positioning method applied to automatic driving according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of another real-time positioning method applied to automatic driving according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a real-time positioning device applied to automatic driving according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another real-time positioning device applied to automatic driving according to an embodiment of the disclosure;
FIG. 6 is a schematic structural diagram of another real-time positioning device applied to automatic driving according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terms "comprises," "comprising," and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a real-time positioning method and a real-time positioning device applied to automatic driving, which can improve the positioning precision of a vehicle. The execution subject of the embodiment of the present invention is a real-time positioning device applied to automatic driving, and the details are described below.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a real-time positioning method for automatic driving according to an embodiment of the present invention. As shown in fig. 1, the real-time positioning method applied to the autonomous driving may include the steps of:
101. inertial measurement sensor data is acquired from an inertial measurement sensor built in a vehicle, and a time point at which the inertial measurement sensor data is acquired is recorded.
In the embodiment of the present invention, an inertial measurement unit (IMU) sends acquired inertial measurement sensor data to a data processor, and the data processor may record the time point at which it acquires the inertial measurement sensor data and set a timestamp for that data based on this time point. Similarly, when other sensors (image sensor, wheel speed sensor and GPS sensor) send their acquired data to the data processor, the data processor also sets the timestamp of the other sensor data according to the time at which it is received. Optionally, in the embodiment of the present invention, the data processor may be dedicated to the vehicle, that is, it performs processing operations only on the sensor data of that vehicle; alternatively, the data processor may be a data processing module in a cloud server, which can receive and process the sensor data of any vehicle.
Among these sensors, the IMU has the highest sampling frequency, higher than that of the image sensor, the wheel speed sensor and the GPS sensor; it can reach 200 Hz or higher.
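Because the IMU is the highest-frequency sensor, its arrival times form a natural reference grid. A sketch of associating other sensor arrivals with the nearest IMU time point (illustrative; the rounding scheme is an assumption, not stated in the patent):

```python
def nearest_imu_tick(arrival_ms, imu_period_ms=5):
    """With the IMU sampling at 200 Hz (5 ms period), assign any other
    sensor reading the timestamp of the nearest IMU time point, so the
    detection in step 102 becomes a simple timestamp comparison."""
    return round(arrival_ms / imu_period_ms) * imu_period_ms
```
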
102. Detecting whether other sensor data are acquired at the time point, if so, executing step 103; if not, step 104 is performed.
In an embodiment of the invention, the other sensor data is sensor data other than inertial measurement sensor data. It should be noted that the sensors (the inertial measurement sensor, the image sensor, the wheel speed sensor, and the GPS sensor) mentioned in the embodiments of the present invention are all sensors for vehicle positioning, and of course, the sensors on the vehicle may be provided with a biosensor for identity verification, a temperature sensor for detecting temperature, and the like, in addition to the sensors for vehicle positioning.
In the embodiment of the present invention, the image sensor may be a look-around camera and a forward-looking camera, wherein the look-around camera is mainly used for identifying obstacles in a short-distance scene, and the forward-looking camera is mainly used for identifying lane lines, traffic signs, obstacles, pedestrians, and the like in a medium-distance and long-distance scene. It should be noted that the sampling frequency of the look-around camera is the same as that of the look-ahead camera.
103. And positioning the vehicle according to the inertial measurement sensor data and the other sensor data to obtain the current positioning result of the vehicle.
In the embodiment of the present invention, if the data processor acquires other sensor data in addition to the inertial measurement sensor data at that time point, it fuses the acquired inertial measurement sensor data and the other sensor data using Kalman filtering to obtain the current positioning result of the vehicle.
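A minimal 1-D sketch of such a fusion step (illustrative: the patent names Kalman filtering, but the state layout, matrices and noise values here are all assumptions):

```python
import numpy as np

def kf_step(x, P, accel, dt, z=None, q=0.01, r=1.0):
    """One Kalman filter step. State x = [position, velocity]; the IMU
    acceleration drives the predict step, and a position measurement z
    (e.g. from GPS), when acquired at this time point, drives the
    update step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt * dt, dt])       # control input for acceleration
    H = np.array([[1.0, 0.0]])              # we measure position only
    # predict from the IMU data
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)
    if z is not None:  # fuse the other sensor's data when it was acquired
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```
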
104. And positioning the vehicle according to the data of the inertial measurement sensor to obtain the current positioning result of the vehicle.
In the embodiment of the invention, the IMU data mainly comprises the instantaneous acceleration and the rotation angle of the vehicle. The positioning result that the data processor obtains from the instantaneous acceleration and the rotation angle is a relative positioning result with respect to the last positioning result, so the current positioning result of the vehicle is obtained by combining the last positioning result with this relative positioning result. It should be noted that the time interval between the time point at which the data processor obtains the relative positioning result and the time point at which the last positioning result was obtained is on the order of microseconds (μs).
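Planar dead reckoning from the instantaneous acceleration and rotation angle can be sketched as follows (illustrative; the simple Euler integration scheme is an assumption, not the patent's stated method):

```python
import math

def imu_fix(last_fix, heading, speed, accel, yaw_delta, dt):
    """Advance the last positioning result by the relative displacement
    implied by the IMU data: rotate the heading by the measured rotation
    angle, advance the speed by the instantaneous acceleration, then add
    the displacement over dt."""
    heading += yaw_delta
    speed += accel * dt
    x = last_fix[0] + speed * dt * math.cos(heading)
    y = last_fix[1] + speed * dt * math.sin(heading)
    return (x, y), heading, speed
```
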
By implementing the method, the data processor takes the time point of receiving the inertial measurement sensor data with the highest sampling frequency as a time reference, and positioning is carried out by combining the sensor data obtained at the time point, so that the positioning precision of the vehicle can be improved.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another real-time positioning method applied to automatic driving according to an embodiment of the present invention. As shown in fig. 2, the real-time positioning method applied to the autonomous driving may include the steps of:
for detailed descriptions of step 201 to step 204, please refer to the detailed descriptions of step 101 to step 104 in the first embodiment, which is not described again in the embodiment of the present invention. After step 204 is completed, step 205 is executed.
205. And predicting the track route of the vehicle according to the last positioning result of the vehicle and the driving information of the vehicle when the last positioning result is obtained.
206. And determining the next estimated positioning result of the vehicle according to the estimated track route.
In an embodiment of the present invention, the driving information at least includes the instantaneous acceleration of the vehicle and the pose information of the vehicle. Estimating the trajectory route of the vehicle according to the last positioning result and the driving information recorded when the last positioning result was obtained may include: estimating, according to the instantaneous acceleration of the vehicle when the last positioning result was obtained, the pose information of the vehicle and a trajectory model, the driving direction, driving speed and driving acceleration of the vehicle within a certain short period after the last positioning result was obtained, wherein this short period is the time interval between obtaining the next positioning result and obtaining the previous positioning result; and obtaining the next estimated positioning result according to the last positioning result and the estimated driving direction, driving speed and driving acceleration within that period.
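The trajectory model can be sketched as a constant-acceleration projection (illustrative; the patent does not specify the model's form):

```python
def next_estimated_fix(last_fix, velocity, accel, dt):
    """Project the vehicle forward over the interval dt between two
    positioning results, using the velocity and acceleration recorded
    with the last positioning result."""
    return (last_fix[0] + velocity[0] * dt + 0.5 * accel[0] * dt * dt,
            last_fix[1] + velocity[1] * dt + 0.5 * accel[1] * dt * dt)
```
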
207. Obtain the positioning difference degree between the current positioning result of the vehicle and the next estimated positioning result of the vehicle.
208. Determine whether the positioning difference degree is greater than the preset positioning difference degree; if so, execute step 209; if not, end the flow.
In the embodiment of the present invention, the positioning difference degree at least includes a pose difference degree and a position difference degree of the vehicle; correspondingly, the preset positioning difference degree at least includes a preset pose difference degree and a preset position difference degree. Optionally, determining whether the positioning difference degree is greater than the preset positioning difference degree may include determining whether the pose difference degree is greater than the preset pose difference degree and whether the position difference degree is greater than the preset position difference degree. If both the pose difference degree and the position difference degree exceed their respective preset values, the positioning difference degree is determined to be greater than the preset positioning difference degree, and step 209 is executed. If the pose difference degree is greater than the preset pose difference degree and the position difference degree is less than or equal to the preset position difference degree, the positioning difference degree is likewise determined to be greater than the preset positioning difference degree, and step 209 is executed. If the pose difference degree is less than or equal to the preset pose difference degree and the position difference degree is greater than the preset position difference degree, the positioning difference degree is again determined to be greater than the preset positioning difference degree, and step 209 is executed. Only if both the pose difference degree and the position difference degree are less than or equal to their respective preset values is the positioning difference degree determined to be less than or equal to the preset positioning difference degree, and the flow is then ended.
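Taken together, the four branches above reduce to a simple rule: the positioning difference degree exceeds the preset one whenever either component exceeds its preset value. A short sketch of that decision, with illustrative names and thresholds:

```python
def exceeds_threshold(pose_diff, pos_diff, pose_thr, pos_thr):
    """The branch logic above reduces to: the positioning difference
    degree is greater than the preset one if EITHER the pose or the
    position component exceeds its preset value; only when both are
    within their presets does the flow end."""
    return pose_diff > pose_thr or pos_diff > pos_thr

assert exceeds_threshold(0.3, 0.1, 0.2, 0.2) is True   # pose exceeds
assert exceeds_threshold(0.1, 0.3, 0.2, 0.2) is True   # position exceeds
assert exceeds_threshold(0.1, 0.1, 0.2, 0.2) is False  # both within presets
```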
209. Take the next estimated positioning result of the vehicle as the current positioning result of the vehicle.
Steps 205 to 209 obtain the next estimated positioning result from the previous positioning result and the driving information of the vehicle at the time that result was obtained, and, when the positioning difference degree between the current positioning result and the next estimated positioning result is greater than the preset positioning difference degree, take the next estimated positioning result as the current positioning result. Because the accuracy of the positioning result continuously decreases when positioning relies only on the IMU for a long time, executing steps 205 to 209 mitigates the loss of positioning accuracy caused by long-term IMU-only positioning.
Optionally, in the embodiment of the present invention, after step 209 is executed, the current time may also be recorded and a plurality of consecutive historical positioning difference degrees before the current time obtained; the number of historical positioning difference degrees whose values are greater than the preset positioning difference degree is determined from these historical positioning difference degrees; whether this number is greater than a preset threshold is judged, and when it is, mode switching prompt information is output, the mode switching prompt information being used to prompt the user to input a mode switching instruction; when the mode switching instruction is detected, the current driving mode of the vehicle is switched to the manual driving mode.
In the embodiment of the present invention, because the positioning of the vehicle is real-time, steps 201 to 209 are executed continuously and repeatedly while the vehicle is in the automatic driving mode, generating a plurality of consecutive historical positioning difference degrees, some of which may be greater than the preset positioning difference degree. If the number of historical positioning difference degrees greater than the preset positioning difference degree exceeds the preset threshold, the positioning accuracy of the current IMU has reached its lower limit. At this point, mode switching prompt information prompting the user to input a mode switching instruction is output, so that the user can be notified in time to switch the current driving mode to the manual driving mode, which effectively reduces the risk of the vehicle deviating from the preset driving route.
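The bookkeeping described above can be sketched as a small monitor that keeps recent difference degrees and counts how many exceed the preset value. The window size, preset value, and threshold below are illustrative assumptions, not values fixed by the method:

```python
from collections import deque

class ModeSwitchMonitor:
    """Keeps the most recent positioning difference degrees and signals
    when too many of them exceed the preset value, indicating that the
    IMU-only positioning accuracy has reached its lower limit."""

    def __init__(self, window=20, preset_diff=0.5, count_threshold=5):
        self.history = deque(maxlen=window)  # consecutive historical values
        self.preset_diff = preset_diff
        self.count_threshold = count_threshold

    def record(self, diff):
        self.history.append(diff)

    def should_prompt_switch(self):
        # Count how many recent difference degrees exceed the preset value
        exceeded = sum(1 for d in self.history if d > self.preset_diff)
        return exceeded > self.count_threshold

monitor = ModeSwitchMonitor(window=10, preset_diff=0.5, count_threshold=3)
for diff in [0.6, 0.7, 0.1, 0.8, 0.9]:
    monitor.record(diff)
# Four of the last five difference degrees exceed 0.5, above the threshold
prompt = monitor.should_prompt_switch()
```

When `should_prompt_switch()` returns true, the system would output the mode switching prompt information and, upon detecting the user's instruction, switch to manual driving.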
By implementing the method, the positioning accuracy of the vehicle can be improved, the problem of low positioning accuracy caused by long-time positioning only by using the IMU can be solved, and the risk of the vehicle deviating from the preset driving route can be effectively reduced.
Example three
Referring to fig. 3, fig. 3 is a schematic flow chart of another real-time positioning method applied to automatic driving according to an embodiment of the present invention. As shown in fig. 3, the real-time positioning method applied to the autonomous driving may include the steps of:
301. Acquire inertial measurement sensor data from an inertial measurement sensor built into the vehicle, and record the time point at which the inertial measurement sensor data is acquired.
302. Detect whether other sensor data is acquired at that time point; if so, execute step 304; if not, execute step 303.
303. Position the vehicle according to the inertial measurement sensor data to obtain the current positioning result of the vehicle.
For detailed descriptions of steps 301 to 303, please refer to the description in the first embodiment; details are not repeated here.
304. If the other sensor data includes image sensor data, obtain a first positioning position according to the inertial measurement sensor data and the sensor data other than the image sensor data among the other sensor data.
305. Map the first positioning position onto an automatic driving navigation electronic map.
306. Determine a target area from the automatic driving navigation electronic map, where the target area is an area centered on the first positioning position with a preset length as its radius.
In the embodiment of the present invention, the automatic driving navigation electronic map may be stored in a cloud server or in the data processor dedicated to the vehicle. If the map is stored in the cloud server, the memory pressure and cost of the data processor can be reduced; if it is stored in the data processor dedicated to the vehicle, the efficiency with which the data processor obtains the target area can be improved.
If the automatic driving navigation electronic map is stored in the cloud server, optionally, mapping the first positioning position onto the map by the data processor may include: the data processor sends the first positioning position, carrying a device identifier, to the cloud server; after receiving it, the cloud server locates and marks the first positioning position on the automatic driving navigation electronic map, obtains from the map a target area centered on the first positioning position with the preset length as its radius, and sends the obtained target area back to the data processor according to the device identifier carried with the first positioning position.
It should be noted that the preset length may be set by the data processor or by the server; the embodiment of the present invention is not limited in this regard.
In the embodiment of the present invention, if the preset length is set by the data processor, the data processor may set it according to the driving information of the current vehicle and the average historical access duration, where the driving information at least includes the instantaneous speed and instantaneous acceleration of the current vehicle, and the average historical access duration is the average time the data processor has taken to acquire the target area within a certain preset period. Setting the preset length in this way ensures both the accuracy and the timeliness with which the vehicle acquires the target area.
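The method does not fix a formula for the preset length. One plausible sketch, under the assumption (ours, not the patent's) that the radius should cover the distance the vehicle may travel while the target area is being fetched, plus a safety margin:

```python
def preset_radius(speed, accel, avg_access_time, margin=1.5):
    """Illustrative way to set the target-area radius from the current
    driving information and the average historical access duration:
    cover the distance the vehicle could travel (assuming constant
    acceleration) while the target area is being fetched."""
    travel = speed * avg_access_time + 0.5 * accel * avg_access_time ** 2
    return margin * travel

# Example: 20 m/s, 2 m/s^2, average access duration 0.5 s
radius = preset_radius(20.0, 2.0, 0.5)
```

A faster vehicle or a slower map service yields a larger radius, which matches the stated goal of balancing accuracy against timeliness.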
If the preset length is set by the server, optionally, the data processor may send the driving information of the current vehicle and the average historical access duration together with the first positioning position, and the server may set the preset length according to the driving information and the average historical access duration. This implementation ensures the accuracy and timeliness of obtaining the target area while also reducing the power consumption of the device.
In the embodiment of the present invention, the step 304 to the step 306 may be executed to obtain the target area in real time according to the first positioning position of the vehicle, and this implementation may reduce the memory pressure of the data processor.
307. Extract several semantic features from the image sensor data.
The semantic features in step 307 may be, for example, traffic lights, guideboards, or road barriers.
308. Obtain, by matching in the target area, first position information corresponding to each of the several semantic features and second position information between each semantic feature and the vehicle.
Optionally, in the embodiment of the present invention, obtaining by matching in the target area the first position information corresponding to each of the several semantic features and the second position information between each semantic feature and the vehicle may include:
obtaining the first position information corresponding to each of the several semantic features and the second position information between each semantic feature and the vehicle by matching the geometric relationships among the several semantic features in the target area.
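The geometric matching can be illustrated with a brute-force sketch that compares pairwise distances among the observed semantic features with those among candidate map features. A real matcher would be far more robust; the function names and the tolerance below are illustrative assumptions:

```python
import itertools
import math

def match_by_geometry(observed, map_points, tol=0.5):
    """Pick the assignment of map points whose pairwise distances best
    agree with the pairwise distances among the observed semantic
    features. Brute force over permutations, so only suitable for a
    handful of features; returns None if no assignment fits within tol."""
    def pairwise(pts):
        return [math.dist(a, b) for a, b in itertools.combinations(pts, 2)]
    obs_d = pairwise(observed)
    best, best_err = None, float("inf")
    for cand in itertools.permutations(map_points, len(observed)):
        err = max(abs(a - b) for a, b in zip(obs_d, pairwise(cand)))
        if err < best_err:
            best, best_err = cand, err
    return best if best_err <= tol else None

# Three features observed relative to the vehicle, four map candidates
obs = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
mp = [(10.0, 10.0), (13.0, 10.0), (10.0, 14.0), (50.0, 50.0)]
matched = match_by_geometry(obs, mp)
```

Once the map points are matched, their map coordinates serve as the first position information, and the observed offsets serve as the second position information between each feature and the vehicle.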
309. Control a lidar sensor of the vehicle to emit a laser beam toward each semantic feature according to the second position information of that semantic feature.
310. Receive the reflected beam returned by each semantic feature for the laser beam it received.
311. Obtain lidar sensor data according to the laser beam emitted toward each semantic feature and the reflected beam returned for it, where the lidar sensor data at least includes azimuth information, height information, speed information, attitude information, and shape information of the corresponding semantic feature.
In the embodiment of the present invention, the corresponding semantic features are the several semantic features described above.
312. Check, using the lidar sensor data, the second position information of the semantic features corresponding to the lidar sensor data to obtain a check result.
In the embodiment of the present invention, the corresponding semantic features may be the several semantic features described above, and the second position information of the semantic features corresponding to the lidar sensor data is the second position information corresponding to each of those semantic features.
The lidar sensor data further includes, for each semantic feature, the time difference between emitting the laser beam and receiving the reflected beam. Optionally, checking the second position information of the semantic features corresponding to the lidar sensor data by using the lidar sensor data to obtain a check result may include:
performing three-dimensional modeling with the lidar sensor data to obtain a three-dimensional image of each semantic feature and the relative position information of each semantic feature with respect to the vehicle;
acquiring an identifier of each semantic feature from the target area;
judging whether the three-dimensional image of each semantic feature matches the identifier of that semantic feature in the target area;
if they match, judging whether the relative position information of each semantic feature is the same as the corresponding second position information;
if they are the same, obtaining a check result indicating that the second position information of each semantic feature is valid.
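Since the lidar data includes the time difference between emitting the laser beam and receiving the reflected beam, one simple consistency check (an illustrative sketch with an assumed tolerance, not the patent's full three-dimensional-modeling procedure) converts that round-trip time into a range and compares it with the map-matched distance:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def verify_second_position(time_of_flight, claimed_distance, tol=0.5):
    """Compare the lidar-measured range against the distance implied by
    the second position information. The round-trip time of flight gives
    distance = c * t / 2; the tolerance is an illustrative choice."""
    measured = SPEED_OF_LIGHT * time_of_flight / 2.0
    return abs(measured - claimed_distance) <= tol

# Round-trip time for a feature 30 m away
tof = 60.0 / SPEED_OF_LIGHT
ok = verify_second_position(tof, 30.0)    # matches the matched distance
bad = verify_second_position(tof, 35.0)   # 5 m off, outside tolerance
```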
313. When the check result indicates that the second position information of the semantic features is valid, obtain the current positioning result of the vehicle according to the first position information and the second position information of the semantic features.
It should be noted that, when the check result indicates that the second position information of each semantic feature is valid, the semantic features used to obtain the current positioning result of the vehicle may be all of the above semantic features or only some of them; the embodiment of the present invention is not limited in this regard.
In steps 307 to 313, after the second position information between each of the several semantic features and the vehicle is obtained by matching in the target area, the obtained lidar sensor data may be used to check that second position information so as to ensure that it is valid. By obtaining the current positioning result of the vehicle from the first position information and the second position information only when the second position information is confirmed to be valid, the positioning accuracy of the vehicle can be improved.
By implementing the method, the positioning precision of the vehicle can be improved, the memory pressure, the cost and the power consumption of the data processor can be reduced, the timeliness of obtaining the target area can be improved, and the positioning precision of the vehicle can be further improved.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of a real-time positioning device applied to automatic driving according to an embodiment of the present invention. As shown in fig. 4, the real-time locating device applied to the automatic driving may include:
the recording unit 401 is configured to acquire inertial measurement sensor data from an inertial measurement sensor built in a vehicle, and record a time point at which the inertial measurement sensor data is acquired.
A detecting unit 402, configured to detect whether other sensor data is acquired at the time point, where the other sensor data is sensor data other than the inertial measurement sensor data.
A positioning unit 403, configured to, when other sensor data is obtained, position the vehicle according to the inertial measurement sensor data and the other sensor data to obtain a current positioning result of the vehicle, and when the other sensor data is not obtained, position the vehicle according to the inertial measurement sensor data to obtain a current positioning result of the vehicle.
In the embodiment of the present invention, the manner in which the positioning unit 403 positions the vehicle according to the inertial measurement sensor data and the other sensor data when the other sensor data is acquired may specifically be: the positioning unit 403 fuses the acquired inertial measurement sensor data and the other sensor data by means of Kalman filtering to obtain the current positioning result of the vehicle.
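As a minimal illustration of the Kalman-filter fusion performed by the positioning unit 403, the sketch below applies a single linear measurement update to an IMU-propagated state. A production system would use a full (error-state) filter over the vehicle pose; the state layout, measurement model, and noise values here are illustrative assumptions:

```python
import numpy as np

def kalman_fuse(x, P, z, H, R):
    """Single Kalman measurement update: fuse the IMU-propagated state x
    (covariance P) with another sensor's measurement z (model H, noise R)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x_new, P_new

# Example: state [position, velocity] propagated from the IMU,
# corrected with a position fix from another sensor
x = np.array([10.0, 2.0])
P = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])   # the other sensor observes position only
R = np.array([[1.0]])
x_new, P_new = kalman_fuse(x, P, np.array([12.0]), H, R)
```

The update pulls the IMU position estimate toward the measurement in proportion to their relative uncertainties, which is the essence of the fusion described above.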
By implementing the real-time positioning device applied to automatic driving, the time point of receiving the inertial measurement sensor data with the highest sampling frequency is taken as a time reference, and positioning is carried out by combining the sensor data obtained at the time point, so that the positioning precision of the vehicle can be improved.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another real-time positioning device applied to automatic driving according to an embodiment of the present invention. The real-time locating device for automatic driving shown in fig. 5 is optimized from the real-time locating device for automatic driving shown in fig. 4, and as shown in fig. 5, the real-time locating device for automatic driving may further include:
a modeling unit 404, configured to, when the positioning unit 403 does not obtain the other sensor data, position the vehicle according to the inertial measurement sensor data, obtain a current positioning result of the vehicle, estimate a trajectory route of the vehicle according to a previous positioning result of the vehicle and the driving information of the vehicle when the previous positioning result is obtained, and determine a next estimated positioning result of the vehicle according to the trajectory route.
In the embodiment of the present invention, the driving information at least includes the instantaneous acceleration of the vehicle and the pose information of the vehicle. The manner in which the modeling unit 404 estimates the trajectory route of the vehicle according to the previous positioning result and the driving information of the vehicle at the time that result was obtained may specifically be: the modeling unit 404 estimates, according to the instantaneous acceleration of the vehicle at the time the previous positioning result was obtained, the pose information of the vehicle, and a trajectory model, the driving direction, driving speed, and driving acceleration of the vehicle within a short time period after the previous positioning result was obtained, where the short time period is the time interval between obtaining the next positioning result and obtaining the previous positioning result; and obtains the next estimated positioning result according to the previous positioning result and the driving direction, driving speed, and driving acceleration of the vehicle within that short time period.
The first obtaining unit 405 is configured to obtain a positioning difference between a current positioning result of the vehicle and a next estimated positioning result of the vehicle.
The determining unit 406 is configured to determine whether the positioning difference is greater than a preset positioning difference, and when the positioning difference is greater than the preset positioning difference, take a next estimated positioning result of the vehicle as a current positioning result of the vehicle.
In the embodiment of the present invention, the positioning difference degree at least includes a pose difference degree and a position difference degree of the vehicle; correspondingly, the preset positioning difference degree at least includes a preset pose difference degree and a preset position difference degree. Optionally, the manner in which the determining unit 406 determines whether the positioning difference degree is greater than the preset positioning difference degree may specifically be: the determining unit 406 determines whether the pose difference degree is greater than the preset pose difference degree and whether the position difference degree is greater than the preset position difference degree; if both exceed their respective preset values, the positioning difference degree is determined to be greater than the preset positioning difference degree; if the pose difference degree is greater than the preset pose difference degree and the position difference degree is less than or equal to the preset position difference degree, the positioning difference degree is determined to be greater than the preset positioning difference degree; if the pose difference degree is less than or equal to the preset pose difference degree and the position difference degree is greater than the preset position difference degree, the positioning difference degree is determined to be greater than the preset positioning difference degree; and if both the pose difference degree and the position difference degree are less than or equal to their respective preset values, the positioning difference degree is determined to be less than or equal to the preset positioning difference degree.
Optionally, in an embodiment of the present invention, the first obtaining unit 405 is further configured to record a current time and obtain a plurality of continuous historical positioning difference degrees before the current time when the determining unit 406 determines that the positioning difference degree is greater than a preset positioning difference degree.
The real-time positioning device applied to automatic driving further includes:
a determining unit 407, configured to determine, from the plurality of historical positioning differences, the number of historical positioning differences whose value is greater than a preset positioning difference;
the determining unit 406 is further configured to determine whether the number of the historical positioning difference degrees whose values are greater than the preset positioning difference degree is greater than a preset threshold, and output a mode switching prompt message when the number of the historical positioning difference degrees is greater than the preset threshold, where the mode switching prompt message is used to prompt a user to input a mode switching instruction.
In the embodiment of the present invention, because the positioning of the vehicle is real-time, the operations performed by the above units are executed repeatedly while the vehicle is in the automatic driving mode, generating a plurality of consecutive historical positioning difference degrees, some of which may be greater than the preset positioning difference degree. When the determining unit 406 determines that the number of historical positioning difference degrees greater than the preset positioning difference degree exceeds the preset threshold, the positioning accuracy of the current IMU has reached its lower limit; at this point, the mode switching prompt information prompting the user to input a mode switching instruction is output, so that the user can be notified in time to switch the current driving mode to the manual driving mode, which effectively reduces the risk of the vehicle deviating from the preset driving route.
A mode switching unit 408 for switching the current driving mode of the vehicle to the manual driving mode when the mode switching instruction is detected.
By implementing the real-time positioning device applied to automatic driving, the positioning precision of the vehicle can be improved, the problem of reduction of the positioning precision caused by long-time positioning only by utilizing the IMU can be solved, and the risk of deviation of the vehicle from the preset driving route can be effectively reduced.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another real-time positioning device applied to automatic driving according to an embodiment of the present invention. The real-time positioning device applied to automatic driving shown in fig. 6 is optimized by the real-time positioning device applied to automatic driving shown in fig. 5, as shown in fig. 6, where the positioning unit 403 of the real-time positioning device applied to automatic driving is configured to, when the other sensor data is obtained, position the vehicle according to the inertial measurement sensor data and the other sensor data, and obtain the current positioning result of the vehicle specifically may be:
a positioning unit 403, configured to, when the other sensor data is acquired and the other sensor data includes image sensor data, obtain a first positioning position according to the inertial measurement sensor data and the sensor data other than the image sensor data among the other sensor data; map the first positioning position onto the automatic driving navigation electronic map; determine a target area from the automatic driving navigation electronic map, where the target area is an area centered on the first positioning position with the preset length as its radius; extract several semantic features from the image sensor data; obtain, by matching in the target area, first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle; and obtain the current positioning result of the vehicle according to the first position information and the second position information of the semantic features.
In the embodiment of the present invention, for the detailed description of the preset length, please refer to the description in the third embodiment, which is not repeated herein. In addition, when the current positioning result of the vehicle is obtained according to the first position information and the second position information of the semantic features, the semantic features may be part or all of the above semantic features, and the embodiment of the present invention is not limited.
Optionally, in an embodiment of the present invention, the real-time positioning device applied to automatic driving may further include:
and a control unit 409, configured to, after the positioning unit 403 obtains first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle through matching in the target area, and before a current positioning result of the vehicle is obtained according to the first position information and the second position information of each semantic feature, control a laser radar sensor of the vehicle to emit a laser beam to each semantic feature according to the second position information of each semantic feature.
Optionally, in the embodiment of the present invention, the manner in which the positioning unit 403 obtains, by matching in the target area, the first position information corresponding to each of the several semantic features and the second position information between each semantic feature and the vehicle may specifically be:
the positioning unit 403 is configured to obtain, in the target area, the first position information corresponding to each of the several semantic features and the second position information between each semantic feature and the vehicle by matching the geometric relationships among the several semantic features.
A receiving unit 410, configured to receive the reflected beam returned by each semantic feature for the laser beam it received.
A second obtaining unit 411, configured to obtain lidar sensor data according to the laser beam emitted toward each semantic feature and the reflected beam returned for it, where the lidar sensor data at least includes azimuth information, height information, speed information, attitude information, and shape information of the corresponding semantic feature; and to check, using the lidar sensor data, the second position information of the semantic features corresponding to the lidar sensor data to obtain a check result.
Optionally, the manner in which the second obtaining unit 411 checks, using the lidar sensor data, the second position information of the semantic features corresponding to the lidar sensor data to obtain a check result may specifically be:
the second obtaining unit 411 is configured to perform three-dimensional modeling with the lidar sensor data to obtain a three-dimensional image of each semantic feature and the relative position information of each semantic feature with respect to the vehicle; acquire an identifier of each semantic feature from the target area; judge whether the three-dimensional image of each semantic feature matches the identifier of that semantic feature in the target area; if they match, judge whether the relative position information of each semantic feature is the same as the corresponding second position information; and if they are the same, obtain a check result indicating that the second position information of each semantic feature is valid.
The manner in which the positioning unit 403 obtains the current positioning result of the vehicle according to the first position information and the second position information of each semantic feature may specifically be:
the positioning unit 403 is configured to obtain the current positioning result of the vehicle according to the first position information and the second position information of the semantic features when the check result indicates that the second position information of each semantic feature is valid.
By implementing the real-time positioning device applied to automatic driving, the positioning precision of the vehicle can be improved, the memory pressure, the cost and the power consumption of the data processor can be reduced, the timeliness of obtaining a target area can be improved, and the positioning precision of the vehicle can be further improved.
Example seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may include the real-time positioning device applied to automatic driving described in any one of the fourth to sixth embodiments. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute any one of the real-time positioning methods applied to the automatic driving in fig. 1 to 3.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any one of the real-time positioning methods applied to automatic driving in figures 1-3.
The embodiment of the invention discloses a computer program product, which enables a computer to execute any one of the real-time positioning methods applied to automatic driving in figures 1-3 when the computer program product runs on the computer.
The embodiment of the invention discloses an application publishing platform which is used for publishing a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute any one of the real-time positioning methods applied to automatic driving in figures 1-3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The real-time positioning method and device applied to automatic driving according to the embodiments of the present invention have been described in detail above, and the principle and implementation of the present invention are illustrated herein with specific examples. The size of the step numbers in the specific examples does not imply that the steps must be executed in that order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The units described as separate parts may or may not be physically separate, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and that B can be determined from A. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a memory accessible to a computer. Based on such understanding, the technical solution of the present invention, or the part thereof that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the methods of the embodiments of the present invention.
The above description of the embodiments is only intended to facilitate the understanding of the method of the invention and its core ideas; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (6)

1. A real-time positioning method applied to automatic driving is characterized by comprising the following steps:
acquiring inertial measurement sensor data from an inertial measurement sensor built in a vehicle, and recording the time point of acquiring the inertial measurement sensor data;
detecting whether other sensor data are acquired at the time point, wherein the other sensor data are sensor data except the inertial measurement sensor data;
if the other sensor data are acquired, positioning the vehicle according to the inertial measurement sensor data and the other sensor data to acquire a current positioning result of the vehicle;
if the other sensor data are not acquired, positioning the vehicle according to the inertial measurement sensor data to acquire a current positioning result of the vehicle;
wherein, if the other sensor data is obtained, the vehicle is positioned according to the inertial measurement sensor data and the other sensor data, and a current positioning result of the vehicle is obtained, and the method comprises the following steps:
if the other sensor data is acquired and the other sensor data comprises image sensor data, acquiring a first positioning position according to the inertial measurement sensor data and sensor data except the image sensor data in the other sensor data;
mapping the first positioning position to an automatic driving navigation electronic map;
determining a target area from the automatic driving navigation electronic map, wherein the target area is an area centered on the first positioning position with a preset length as its radius; the preset length is set according to the driving information of the current vehicle and an average historical access duration, wherein the driving information of the current vehicle at least comprises the instantaneous speed and the instantaneous acceleration of the current vehicle, and the average historical access duration is the average time taken by a data processor to acquire a target area within a preset period;
extracting a number of semantic features from the image sensor data;
matching in the target area to obtain first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle;
controlling a laser radar sensor of the vehicle to emit a laser beam to each semantic feature according to the second position information of each semantic feature;
receiving a reflected beam returned by each semantic feature in response to the emitted laser beam;
acquiring laser radar sensor data according to the laser beam emitted at each semantic feature and the reflected beam returned by it, wherein the laser radar sensor data at least comprises azimuth information, height information, speed information, attitude information, and shape information of the corresponding semantic feature;
verifying the second position information of the semantic features corresponding to the laser radar sensor data by using the laser radar sensor data to obtain a verification result;
when the verification result indicates that the second position information of the semantic features is valid, obtaining a current positioning result of the vehicle according to the first position information and the second position information of the semantic features;
wherein the laser radar sensor data further comprises, for each semantic feature, the time difference between emitting the laser beam and receiving the reflected beam;
correspondingly, the step of verifying the second position information of the semantic features corresponding to the laser radar sensor data by using the laser radar sensor data to obtain a verification result comprises:
performing three-dimensional modeling by using the laser radar sensor data to obtain a three-dimensional image aiming at each semantic feature and relative position information of each semantic feature relative to the vehicle;
acquiring an identifier of each semantic feature from the target area;
judging whether the three-dimensional image of each semantic feature is matched with the identifier of each semantic feature in the target area;
if they match, determining whether the relative position information of each semantic feature is the same as the corresponding second position information;
if they are the same, obtaining a verification result indicating that the second position information of each semantic feature is valid.
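A minimal sketch of the dispatch logic and the target-area radius described in claim 1. The function names and the radius formula (the distance the vehicle can cover during the average historical access duration) are illustrative assumptions, not the patent's specification:

```python
def target_radius(speed, accel, avg_access_time):
    """Illustrative preset-length rule: the farther the vehicle can travel
    while the data processor fetches a target area, the larger the search
    radius around the first positioning position."""
    return speed * avg_access_time + 0.5 * accel * avg_access_time ** 2


def locate(imu_data, other_data, dead_reckon, fuse):
    """Claim 1 dispatch: fuse when other sensor data arrived at the same
    time point as the IMU sample, otherwise fall back to inertial dead
    reckoning alone."""
    if other_data:
        return fuse(imu_data, other_data)
    return dead_reckon(imu_data)
```

For example, at 10 m/s with 2 m/s² acceleration and a 0.5 s average access duration, the sketch yields a radius of 10 × 0.5 + 0.5 × 2 × 0.25 = 5.25 m.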
2. The method of claim 1, wherein if the other sensor data is not obtained, after positioning the vehicle according to the inertial measurement sensor data and obtaining the current positioning result of the vehicle, the method further comprises:
estimating the track route of the vehicle according to the last positioning result of the vehicle and the driving information of the vehicle when the last positioning result is obtained;
determining a next estimated positioning result of the vehicle according to the track route;
obtaining the positioning difference degree between the current positioning result of the vehicle and the next estimated positioning result of the vehicle;
judging whether the positioning difference degree is greater than a preset positioning difference degree;
and if the positioning difference degree is greater than the preset positioning difference degree, taking the next estimated positioning result of the vehicle as the current positioning result of the vehicle.
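The fallback check of claim 2 can be sketched as follows, assuming planar positions and a Euclidean positioning difference degree; the names and the distance metric are assumptions for illustration only:

```python
def check_position(current, predicted, max_diff):
    """Claim 2 plausibility check: if the dead-reckoned current result
    deviates from the trajectory-predicted next result by more than the
    preset positioning difference degree, trust the prediction instead."""
    dx = current[0] - predicted[0]
    dy = current[1] - predicted[1]
    diff = (dx * dx + dy * dy) ** 0.5
    return predicted if diff > max_diff else current
```

This protects against IMU drift between updates from the other sensors: an implausible jump is replaced by the position extrapolated from the previous positioning result and driving information.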
3. The method of claim 2, wherein if the degree of positioning difference is greater than the preset degree of positioning difference, the method further comprises:
recording the current time, and acquiring a plurality of continuous historical positioning difference degrees before the current time;
determining the number of historical positioning difference degrees with values larger than the preset positioning difference degree from the plurality of historical positioning difference degrees;
judging whether the number is larger than a preset threshold value or not;
if the number is greater than the preset threshold, outputting mode switching prompt information, wherein the mode switching prompt information is used for prompting a user to input a mode switching instruction;
when the mode switching instruction is detected, switching the current driving mode of the vehicle to a manual driving mode.
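A sketch of the counting and prompting logic of claim 3. The sliding-window size and all names are assumptions; the patent only specifies counting historical positioning difference degrees above the preset degree and prompting when the count exceeds a threshold:

```python
from collections import deque


class ModeSwitchMonitor:
    """Track recent positioning difference degrees; signal a prompt for
    switching to manual driving when too many of them exceed the preset
    positioning difference degree."""

    def __init__(self, preset_diff, count_threshold, window=10):
        self.preset_diff = preset_diff
        self.count_threshold = count_threshold
        self.history = deque(maxlen=window)  # continuous historical degrees

    def record(self, diff):
        self.history.append(diff)
        exceed = sum(1 for d in self.history if d > self.preset_diff)
        return exceed > self.count_threshold  # True -> output the prompt
```

In this sketch the actual switch to manual driving still waits for the user's mode switching instruction, matching the claim's prompt-then-switch sequence.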
4. A real-time positioning device for autonomous driving, comprising:
the recording unit is used for acquiring inertial measurement sensor data from an inertial measurement sensor built in a vehicle and recording the time point of acquiring the inertial measurement sensor data;
the detection unit is used for detecting whether other sensor data are acquired at the time point, wherein the other sensor data are sensor data except the inertial measurement sensor data;
the positioning unit is used for positioning the vehicle according to the inertial measurement sensor data and the other sensor data to obtain a current positioning result of the vehicle when the other sensor data are obtained, and positioning the vehicle according to the inertial measurement sensor data to obtain the current positioning result of the vehicle when the other sensor data are not obtained;
the positioning unit is configured to, when the other sensor data is acquired, position the vehicle according to the inertial measurement sensor data and the other sensor data, and the manner of acquiring the current positioning result of the vehicle specifically includes:
the positioning unit is used for acquiring a first positioning position according to the inertial measurement sensor data and the sensor data except the image sensor data in the other sensor data when the other sensor data is acquired and comprises image sensor data; mapping the first positioning position to an automatic driving navigation electronic map; determining a target area from the automatic driving navigation electronic map, wherein the target area is an area centered on the first positioning position with a preset length as its radius; extracting a number of semantic features from the image sensor data; matching in the target area to obtain first position information corresponding to each semantic feature and second position information between each semantic feature and the vehicle; and obtaining a current positioning result of the vehicle according to the first position information and the second position information of the semantic features;
the preset length is set according to the driving information of the current vehicle and an average historical access duration, wherein the driving information of the current vehicle at least comprises the instantaneous speed and the instantaneous acceleration of the current vehicle, and the average historical access duration is the average time taken by a data processor to acquire a target area within a preset period;
wherein, real-time positioner who is applied to autopilot still includes:
the control unit is used for controlling a laser radar sensor of the vehicle to emit laser beams to each semantic feature according to the second position information of each semantic feature after the positioning unit obtains the first position information corresponding to each semantic feature and the second position information between each semantic feature and the vehicle through matching in the target area and before the current positioning result of the vehicle is obtained according to the first position information and the second position information of the semantic features;
a receiving unit, configured to receive a reflected beam returned by each semantic feature in response to the emitted laser beam;
a second acquisition unit, configured to acquire laser radar sensor data according to the laser beam emitted at each semantic feature and the reflected beam returned by it, wherein the laser radar sensor data at least comprises azimuth information, height information, speed information, attitude information, and shape information of the corresponding semantic feature; and to verify the second position information of the semantic features corresponding to the laser radar sensor data by using the laser radar sensor data to obtain a verification result;
the manner in which the positioning unit obtains the current positioning result of the vehicle according to the first position information and the second position information of the semantic features is specifically:
the positioning unit is used for obtaining the current positioning result of the vehicle according to the first position information and the second position information of the semantic features when the verification result indicates that the second position information of the semantic features is valid;
wherein the laser radar sensor data further comprises, for each semantic feature, the time difference between emitting the laser beam and receiving the reflected beam;
correspondingly, the specific manner in which the second acquisition unit verifies the second position information of the semantic features corresponding to the laser radar sensor data by using the laser radar sensor data to obtain the verification result is as follows:
the second acquisition unit is used for carrying out three-dimensional modeling by utilizing the laser radar sensor data to obtain a three-dimensional image aiming at each semantic feature and relative position information of each semantic feature relative to the vehicle;
acquiring an identifier of each semantic feature from the target area;
judging whether the three-dimensional image of each semantic feature is matched with the identifier of each semantic feature in the target area;
if they match, determining whether the relative position information of each semantic feature is the same as the corresponding second position information;
if they are the same, obtaining a verification result indicating that the second position information of each semantic feature is valid.
5. The real-time positioning device applied to automatic driving according to claim 4, further comprising:
the modeling unit is used for the positioning unit to position the vehicle according to the inertial measurement sensor data when the other sensor data is not acquired, estimating a track route of the vehicle according to a previous positioning result of the vehicle and the driving information of the vehicle when the previous positioning result is acquired after a current positioning result of the vehicle is acquired, and determining a next estimated positioning result of the vehicle according to the track route;
the first acquisition unit is used for acquiring the positioning difference degree of the current positioning result of the vehicle and the next estimated positioning result of the vehicle;
and the judging unit is used for judging whether the positioning difference degree is greater than a preset positioning difference degree or not, and taking the next estimated positioning result of the vehicle as the current positioning result of the vehicle when the positioning difference degree is greater than the preset positioning difference degree.
6. The real-time positioning device applied to automatic driving according to claim 5, wherein the first acquisition unit is further configured to record the current time and acquire a plurality of continuous historical positioning difference degrees before the current time when the judging unit judges that the positioning difference degree is greater than the preset positioning difference degree;
the real-time positioning device applied to automatic driving further comprises:
the determining unit is used for determining the number of the historical positioning difference degrees of which the values are larger than the preset positioning difference degree from the plurality of historical positioning difference degrees;
the judging unit is further configured to judge whether the number is greater than a preset threshold, and when the number is greater than the preset threshold, output mode switching prompt information, where the mode switching prompt information is used to prompt a user to input a mode switching instruction;
and the mode switching unit is used for switching the current driving mode of the vehicle to a manual driving mode when the mode switching instruction is detected.
CN201810984483.1A 2018-08-28 2018-08-28 Real-time positioning method and device applied to automatic driving Active CN110146074B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810984483.1A CN110146074B (en) 2018-08-28 2018-08-28 Real-time positioning method and device applied to automatic driving
PCT/CN2018/113663 WO2020042347A1 (en) 2018-08-28 2018-11-02 Real-time positioning method and device applied to automatic driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810984483.1A CN110146074B (en) 2018-08-28 2018-08-28 Real-time positioning method and device applied to automatic driving

Publications (2)

Publication Number Publication Date
CN110146074A CN110146074A (en) 2019-08-20
CN110146074B true CN110146074B (en) 2021-06-15

Family

ID=67588348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810984483.1A Active CN110146074B (en) 2018-08-28 2018-08-28 Real-time positioning method and device applied to automatic driving

Country Status (2)

Country Link
CN (1) CN110146074B (en)
WO (1) WO2020042347A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112747754A (en) * 2019-10-30 2021-05-04 北京初速度科技有限公司 Fusion method, device and system of multi-sensor data
CN111753639A (en) * 2020-05-06 2020-10-09 上海欧菲智能车联科技有限公司 Perception map generation method and device, computer equipment and storage medium
CN111721299B (en) * 2020-06-30 2022-07-19 上海汽车集团股份有限公司 Real-time positioning time synchronization method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2690463A1 (en) * 2011-03-24 2014-01-29 Hokuyo Automatic Co. Ltd. Signal processing device of scanning-type distance measurement device, signal processing method, and scanning-type distance measurement device
JP2017138196A (en) * 2016-02-03 2017-08-10 株式会社デンソー Position correction device, navigation system, and automatic driving system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9036865B2 (en) * 2012-09-12 2015-05-19 International Business Machines Corporation Location determination for an object using visual data
JP6248559B2 (en) * 2013-11-13 2017-12-20 株式会社デンソー Vehicle trajectory calculation device
US9182237B2 (en) * 2013-12-06 2015-11-10 Novatel Inc. Navigation system with rapid GNSS and inertial initialization
CN106052683A (en) * 2016-05-25 2016-10-26 速感科技(北京)有限公司 Robot motion attitude estimating method
CN106093994B (en) * 2016-05-31 2019-03-29 山东大学 A kind of multi-source joint positioning method based on adaptive weighted mixing Kalman filtering
CN107817503B (en) * 2016-09-14 2018-12-21 北京百度网讯科技有限公司 Motion compensation process and device applied to laser point cloud data
CN108254775A (en) * 2016-12-29 2018-07-06 联创汽车电子有限公司 Onboard navigation system and its implementation
CN107941212B (en) * 2017-11-14 2020-07-28 杭州德泽机器人科技有限公司 Vision and inertia combined positioning method
CN107976697B (en) * 2017-11-30 2021-05-28 中国铁路总公司 Train safety positioning method and system based on Beidou/GPS combination
CN108387243A (en) * 2018-03-09 2018-08-10 迪比(重庆)智能科技研究院有限公司 Intelligent vehicle mounted terminal based on the Big Dipper and GPS dual-mode

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2690463A1 (en) * 2011-03-24 2014-01-29 Hokuyo Automatic Co. Ltd. Signal processing device of scanning-type distance measurement device, signal processing method, and scanning-type distance measurement device
JP2017138196A (en) * 2016-02-03 2017-08-10 株式会社デンソー Position correction device, navigation system, and automatic driving system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-Time Autonomous Ground Vehicle Navigation in Heterogeneous Environments Using a 3D LiDAR;Pfrunder Andreas,et.al;《2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS)》;20171231;第2601-2608页 *
Analysis of the High-Precision Positioning and Navigation Technology Route for Autonomous Vehicles; Zhao Jia et al.; Bus & Coach Technology and Research; 20180822; Vol. 40, No. 4; pp. 8-10 *

Also Published As

Publication number Publication date
CN110146074A (en) 2019-08-20
WO2020042347A1 (en) 2020-03-05

Similar Documents

Publication Publication Date Title
CN110873568B (en) High-precision map generation method and device and computer equipment
US10077054B2 (en) Tracking objects within a dynamic environment for improved localization
CN107063713B (en) Test method and device applied to unmanned automobile
CN112665556B (en) Generating a three-dimensional map of a scene using passive and active measurements
EP3644294B1 (en) Vehicle information storage method, vehicle travel control method, and vehicle information storage device
CN109709961B (en) Road obstacle detection method and device and automatic driving automobile
CN111695546B (en) Traffic signal lamp identification method and device for unmanned vehicle
JP6666075B2 (en) Method and apparatus for determining a lane identifier of a road
US9409570B2 (en) Method and apparatus for predicting most probable path of vehicle travel and vehicle control loss preview
US10369993B2 (en) Method and device for monitoring a setpoint trajectory to be traveled by a vehicle for being collision free
CN110146074B (en) Real-time positioning method and device applied to automatic driving
US11861754B2 (en) Vehicle terminal device, service server, method, computer program, computer readable recording medium for providing driving related guidance service
US10876842B2 (en) Method for determining, with the aid of landmarks, an attitude of a vehicle moving in an environment in an at least partially automated manner
CN110189546B (en) Vehicle positioning method and vehicle positioning system
CN109795500B (en) Vehicle control device, vehicle control method, and storage medium
CN110888434A (en) Automatic driving method, device, computer equipment and computer readable storage medium
CN113553304A (en) Data storage system for automatic driving
CN108986463B (en) Road condition information processing method and device and electronic equipment
US10532750B2 (en) Method, device and system for wrong-way driver detection
CN109765886B (en) Target track identification method followed by vehicle
CN110446106B (en) Method for identifying front camera file, electronic equipment and storage medium
JP6933069B2 (en) Pathfinding device
CN115123232A (en) Driving support device, driving support method, and storage medium
CN116499488B (en) Target fusion method, device, vehicle and storage medium
US20190145783A1 (en) Method, device and system for wrong-way driver detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220303

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Patentee after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: 100083 room 28, 4 / F, block a, Dongsheng building, 8 Zhongguancun East Road, Haidian District, Beijing

Patentee before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.